I know I have been asking a lot of questions lately and I want to say that I am thankful for all the great advice I have gotten from this community.

With that said, here is my latest question...

Is there a faster backup option than DAR, or is there something that can be tweaked to increase DAR's speed?
* I have already tested setting compression to 0 and it does not increase speed
** I have read
http://wiki.contribs.org/Backup_with_dar#NFS and
http://wiki.contribs.org/SME_Server:Documentation:Administration_Manual:Chapter10#Backup_or_restore

Here is the scenario:
SME Server
Running as a VM Instance on ESXi 5.5 Update 3
2 CPUs allocated
8 GB Ram allocated
2 - 1 Gb NICs (Running in FailOver)
2 - 10 Gb NICs (Running in FailOver)
6 TB Hard Drive allocated (Is via a ESXi DataStore which uses a PERC H700 running in RAID 6)
All Current Updates Applied
FreeNAS Server
Chassis: Dell PowerEdge C2100 Server
Build: FreeNAS-9.3-STABLE-201511040813
CPUs: 2 - Intel Xeon E5506 SLBF8 2.13GHz/4MB/4.8GTs/Quad Core LGA1366 CPU
Memory: 24 GB (6x4GB - 2Rx4 PC3-10600R - DDR3 - ECC)
HBA: Dell H200 Mezzanine - Flashed to LSI 9211-8i IT Mode
Network (1 Gb): 2 - Intel 82576 Gb Ethernet
Running in "Fail-Over" LAGG
Used just for Production Access
Network (10 Gb): Chelsio 110-1088-30 10 Gb 2-Port PCI-e Adapter Card with SFPs
Running in "Fail-Over" LAGG
Used just for Backup Routine Access
Is Direct Connected to ESXi Server w/Static IP Assigned (No Switch Involved)
OS Hard Disk(s): 2 - 120 GB SSD (were a good deal, so I grabbed them...)
Storage Hard Disk(s): 12 - Hitachi Ultrastar HUA723030ALA640 3TB
Enterprise Rated 7200RPM 64MB SATAIII (6Gb/s) 3.5"
Volume: Composed of 2 RAIDZ2 w/6 Disks in each one
Now, with everything up and running I did some preliminary tests to get an idea of speeds:
iPerf from SME Server to FreeNAS Server (got 8+ Gb/sec, so I was happy about that):
Command:
iperf -p 5001 -c 172.20.1.6 -w 512k
Results:
------------------------------------------------------------
Client connecting to 172.20.1.6, TCP port 5001
TCP window size: 244 KByte (WARNING: requested 512 KByte)
------------------------------------------------------------
[ 3] local 172.20.1.5 port 41214 connected with 172.20.1.6 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 9.47 GBytes 8.13 Gbits/sec
DD (write test) on FreeNAS (64 GB file - got ~805 MB/sec, happy about that too)
Command:
dd if=/dev/zero of=tmp.bin bs=1m count=64k && sync
*** I have a screenshot and can insert it if needed; otherwise you will have to take my word for it...
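For what it's worth, a matching sequential-read test can help rule out the pool itself as the bottleneck. This is just a sketch (file name and size are mine); note that FreeBSD's dd takes `bs=1m` while GNU dd on Linux wants `bs=1M`:

```shell
# Write a test file first (GNU dd syntax; use bs=1m on FreeBSD).
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fsync

# Read it back, discarding the data, to measure sequential read speed.
# Use a file larger than RAM (or drop caches first) for a cold-cache number.
dd if=/tmp/ddtest.bin of=/dev/null bs=1M
```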
Now onto the backup...
Initial "Full Backup" was performed (all via the 10 Gb pipe) and I was surprised by how long it took:
Report from SME Server (started 03:13, finished 14:56, so about 11 hours 43 minutes):
==================================
DAILY BACKUP TO WORKSTATION REPORT
==================================
Backup of [name removed] started at Wed Nov 11 03:13:04 2015
Destination //172.20.1.6/SMEBackupsCIFs/[name removed]/set1
No existing reference backup, will make full backup
Basename full-20151111031304
Starting the backup with a timeout of 24 hours
--------------------------------------------
354816 inode(s) saved
including 0 hard link(s) treated
0 inode(s) changed at the moment of the backup and could not be saved properly
0 byte(s) have been wasted in the archive to resave changing files
0 inode(s) not saved (no inode/file change)
0 inode(s) failed to be saved (filesystem error)
293 inode(s) ignored (excluded by filters)
0 inode(s) recorded as deleted from reference backup
--------------------------------------------
Total number of inode(s) considered: 355109
--------------------------------------------
EA saved for 0 inode(s)
--------------------------------------------
Destination disk usage 473G, 3% full, 16T available
Backup successfully terminated at Wed Nov 11 14:56:31 2015
What has me puzzled is that even if I were only getting 200 MB/sec, the "Full Backup" routine (which is ~500 GB) should work out to:
200 MB/Sec * 60 Seconds = 12,000 MB/Minute
12,000 MB/Minute * 60 Minutes = 720,000 MB/Hour
720,000 MB / 1024 = 703 GB/Hour
So with 500 GB, I could safely presume < 1 hour. Heck, even if it took 2 hours, that would be worlds better than the nearly 12 the report currently shows...
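To sanity-check those numbers: 500 GB at 200 MB/sec should finish in well under an hour, while the actual run (473 GB in roughly 11h43m per the report) works out to only about 11 MB/sec. A quick back-of-the-envelope in shell arithmetic, using the figures from the report above:

```shell
# Expected: 500 GB at 200 MB/sec, expressed in minutes.
expected_min=$(( 500 * 1024 / 200 / 60 ))
echo "expected: ~${expected_min} min"        # roughly 42 minutes

# Actual: 473 GB over 11h43m, expressed in MB/sec.
actual_mbs=$(( 473 * 1024 / (11 * 3600 + 43 * 60) ))
echo "actual: ~${actual_mbs} MB/sec"         # roughly 11 MB/sec
```

That gap (200 MB/sec hoped for vs. ~11 MB/sec achieved) is why the network speed barely matters here.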
Looking at "Incremental Backups", they seem to take about 13 minutes when there are only 2-3 GB of changed data.
While this is all faster than what used to be there (SME to Drobo via 1 GB), it is not exactly on par for the hardware assigned to it.
I understand that I could try NFS instead of CIFS (and still may), but even if CIFS is single threaded I fail to see why the backups take so long. I also have a similar system on loan where their SME Server is backing up to a similar FreeNAS Server, but only over a 1 Gb NIC. Their initial "Full Backup" was 351 GB and took ~12 hours, so my idea of using a 10 Gb pipe doesn't seem to be buying much.
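One way to tell whether CIFS or dar itself is the bottleneck would be to bypass dar entirely and time a raw sequential write to the mounted share. A sketch, assuming the share is already mounted somewhere (the path and variable below are mine, not from the actual setup; a temp directory stands in here so the commands run anywhere):

```shell
# Point TARGET at the real CIFS mount; a temp dir is used as a stand-in.
TARGET=${TARGET:-/tmp/cifs-test}
mkdir -p "$TARGET"

# Time a 256 MB sequential write. On the real mount this isolates raw
# CIFS throughput from dar's own overhead: if this also crawls, look at
# the protocol/mount options; if it's fast, the slowdown is in dar.
time dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=256 conv=fsync
```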
Is this basically a limitation of DAR or am I missing something?
As always, thanks for any input.