Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: girkers on June 09, 2010, 05:01:19 PM
-
My backup of about 550GB is taking over 24 hours to complete, but I don't think it should take that long, should it?
The results of read testing are:
[root@caine log]# hdparm -tT /dev/sde1
/dev/sde1:
Timing cached reads: 4384 MB in 2.00 seconds = 2192.33 MB/sec
Timing buffered disk reads: 92 MB in 3.01 seconds = 30.52 MB/sec
But obviously this is only the read speed, and I couldn't work out a way to test the write speed. Could anyone advise how I might speed up my backup, or am I destined to have a ssssslllllloooooowwwww backup?
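(For reference, one rough way to test write speed is with dd writing to a file on the mounted drive. This is a sketch, not a definitive benchmark: TARGET is a placeholder for the USB mount point, and conv=fdatasync needs a reasonably recent coreutils; older dd versions need a separate sync instead.)

```shell
# Rough write-speed test: write 64 MB to the mounted backup drive and
# force it to disk before dd stops the clock, so the page cache does
# not inflate the figure. TARGET is a placeholder for the USB mount.
TARGET=${TARGET:-/tmp}                 # e.g. /media/Backup
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=64 conv=fdatasync
rm -f "$TARGET/speedtest.bin"          # clean up the test file
```

dd prints the elapsed time and transfer rate on completion; run it a few times and take the steady figure.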
Thanks
Girkers
-
Oh... such temptation, a post like this from someone with 200+
postings logged, quoting a computer services dot com ...ohhh;~)
Backups (type unknown, method unknown, extent unknown)
take many resources, particularly if compression (extent unknown)
is utilised. Explain why you think USB storage is, or should be, quicker.
...how I may speed up my backup...
Use a faster drive/method/resource/mechanism?
Why should you need ...to test the write speed... when
your backups run in real life, i.e. not under test conditions?
Try monitoring your SME (version unknown) with HTOP
and see how resources are being used up by whatever.
-
The results of read testing are:
Did it ever occur to you that reading might be different to writing?
A little more detail on your setup, as piran already suggested, would do no harm in helping us answer your questions.
-
piran,
I take your point, hopefully your tongue was firmly stuck in your cheek when you typed it :-P
Details,
I am using a standard SME 7.5 server + Zarafa; the backup goes to an external USB drive and is set to do a full backup every evening, with no timeout for full backups. What is happening is that the backup is set to start at 10pm each evening, and once one starts it does not complete before the next one is due to begin at 10pm the following evening.
I recently replaced my backup drives (USB) with larger ones, due to running out of space on the previous drives, and this is where the problem started. I attempted to do write-speed testing of the USB drive using dd, however I could never get it to work.
Here is what is contained in /var/log/messages when I plug in the drive:
Jun 14 17:22:23 caine kernel: usb 1-7: new high speed USB device using address 7
Jun 14 17:22:23 caine kernel: scsi7 : SCSI emulation for USB Mass Storage devices
Jun 14 17:22:23 caine kernel: Vendor: WD Model: 20EADS External Rev: 1.75
Jun 14 17:22:23 caine kernel: Type: Direct-Access ANSI SCSI revision: 02
Jun 14 17:22:23 caine kernel: SCSI device sde: 3907029168 512-byte hdwr sectors (2000399 MB)
Jun 14 17:22:23 caine kernel: sde: assuming drive cache: write through
Jun 14 17:22:23 caine kernel: SCSI device sde: 3907029168 512-byte hdwr sectors (2000399 MB)
Jun 14 17:22:23 caine kernel: sde: assuming drive cache: write through
Jun 14 17:22:23 caine kernel: sde: sde1
Jun 14 17:22:23 caine kernel: Attached scsi disk sde at scsi7, channel 0, id 0, lun 0
Jun 14 17:22:23 caine kernel: USB Mass Storage device found at 7
Jun 14 17:22:23 caine scsi.agent[5186]: disk at /devices/pci0000:00/0000:00:1d.7/usb1/1-7/1-7:1.0/host7/target7:0:0/7:0:0:0
Here is what I would expect to see time wise, showing my work:
Data: approx 575GB = (575 x 1024 for gigabyte to megabyte conversion) 588800MB
Estimated write speed: 25MB/s
Raw backup time: (Data / Write Speed) 23552 seconds or (/3600 to convert to hours) 6.54 hours
Obviously there would be verification time on top, however I still can't see how it could take over 24 hours to do my backup.
So, with all that, could anyone help work out why my backup is taking so long?
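(The arithmetic above can be sanity-checked in one line, using the same figures: 575GB of data at an assumed sustained 25MB/s.)

```shell
# Expected raw backup time for 575 GB at a sustained 25 MB/s:
# convert GB to MB, divide by the rate for seconds, then by 3600 for hours.
awk 'BEGIN { mb = 575 * 1024; secs = mb / 25; printf "%d MB -> %d s -> %.2f h\n", mb, secs, secs / 3600 }'
# prints: 588800 MB -> 23552 s -> 6.54 h
```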
Thanks
-
It is funny how it only took one or two hours for people to criticise my post, yet a week goes by and no one can help me. I wouldn't mind so much if, after I posted the extra information people requested, they at least did me the courtesy of saying they can't help.
Can anyone suggest what may be slowing down my backup?
-
It is funny how it only took one or two hours for people to criticise my post, yet a week goes by and no one can help me. I wouldn't mind so much if, after I posted the extra information people requested, they at least did me the courtesy of saying they can't help.
I'm sorry, but I don't have the know-how to help you either. I will say, though, that this is all too common within the community. I have been a member since v4 and have been a victim of this sort of behaviour.
All I can suggest is ... persist and someone will help sooner or later
-
Thanks teviot I appreciate your thoughts.
-
Here is what I would expect to see time wise, showing my work:
Data: approx 575GB = (575 x 1024 for gigabyte to megabyte conversion) 588800MB
Estimated write speed: 25MB/s
Raw backup time: (Data / Write Speed) 23552 seconds or (/3600 to convert to hours) 6.54 hours
Obviously there would be verification time on top, however I still can't see how it could take over 24 hours to do my backup.
So, with all that, could anyone help work out why my backup is taking so long?
I have about 500GB of data, and using the workstation-dar backup within the server-manager it takes about 20 hours backing up to an SMB share. From what I can work out, I think it's because you are compressing on the fly; much of the time goes on CPU resources before the data even hits the drive. When you start a backup, the data is written to a temporary directory:
<server-name>.<domain-name>
tmp_dir
Once the backup has completed, all the .dar files are then moved into the <server-name>.<domain-name> directory.
What is your current compression level?
Depending on the size of the hard drive you have in the USB caddy, you might be able to use compression level 0 or 1.
-
The thing is that the backup only started going slow once I changed to a bigger backup drive; none of the other settings had changed. I had a 1TB drive previously, but as the backup was now over 500GB it was not big enough, hence the change. I had the compression level set at 8 and the backup would normally take about eight (8) hours, but the time now is ridiculous.
To test the theory I have changed the compression level to 0 and have set the backup to start in five (5) minutes; I'll see how it goes.
-
I had changed the compression to 0 to test the theory that the problem was CPU-bound, but alas this is not the case. I still believe the throughput (write speed) of the USB drive is the problem, however I cannot determine the speed of the drive, as the dd commands I have tried don't seem to give all the information.
Any other thoughts?
-
girkers
I had a somewhat related issue recently. I was transferring backup files from a USB drive to a mounted SATA drive. The copy was going extremely slowly; upon investigation the transfer rate was only about 1MB/s. Both drives were detected OK etc.
I disconnected the USB drive, powered it down, waited a bit and reconnected it.
When I copied the files again between mounted drives I got more like 27MB/s.
I just used mc to move the files (approx 200 files of about 1.5GB each), and it showed the transfer rate on the mc screen.
So you could check real world transfer rates that way, and perhaps try reconnecting the USB drive.
I didn't bother to investigate why it ran at such a slow speed as that is the only time it had ever happened.
-
I just did a test with mc and found that the write speed of the drive starts out great but then slows down, so it would appear that the drive is definitely the problem. I have another identical drive that I haven't used yet, which I am just formatting and setting up. I did a quick read about block size; on the original drive I did not set any, so on this 2TB drive I have set it to 4096.
Could anyone else suggest whether the format of the drive would have any bearing on the speed, or any other suggestion whatsoever?
-
girkers
Write speed slows down to what?
Format your drives as ext3.
-
Thanks for the help mary.
The drive is formatted as ext3, as I followed http://wiki.contribs.org/USBDisks; here is the output for the new drive I just connected:
[root@caine media]# mkfs.ext3 -b 4096 -L Backup /dev/sde1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=Backup
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
244203520 inodes, 488378000 blocks
24418900 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=490733568
14905 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Is there a definitive way to test the write speed? As I have said, the dd command doesn't report the speed the way the examples on the net show it should. I will see how this drive performs overnight, and if it is still slow I will test with mc and give speeds and file sizes.
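(If dd on this system prints only the record counts and no rate line, which the mke2fs 1.35 banner above suggests it might, being a CentOS 4-era userland, you can time the write yourself and do the division. A sketch, with TARGET as an assumed stand-in for the USB mount point:)

```shell
# Measure write speed by hand when dd prints no rate line:
# time a fixed-size write plus a sync, then divide size by seconds.
TARGET=${TARGET:-/tmp}                 # substitute your USB mount, e.g. /media/Backup
start=$(date +%s)
dd if=/dev/zero of="$TARGET/wtest.bin" bs=1M count=64 2>/dev/null
sync                                   # flush cached data out to the disk
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -eq 0 ] && elapsed=1      # guard against divide-by-zero on fast disks
echo "wrote 64 MB in ${elapsed}s = $((64 / elapsed)) MB/s"
rm -f "$TARGET/wtest.bin"              # clean up the test file
```

Without the sync, the write may land entirely in the page cache and the figure will be meaningless.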
-
iostat will give you a helping hand in seeing whether there are any bottlenecks.
-
Alright, the backup is currently running and I have found and installed iostat. iostat was not in any of the standard repositories, so I found a link on this website: http://www.sme-server.de/download/sme7/contribs/smecontribs/index.html and you want to find: sysstat-5.0.5-25.el4.i386.rpm
Once installed I did a few tests with different timings; here is a copy of a short run, with my backup drive being sde and an ext3 partition at sde1:
[root@caine tmp]# iostat -kx 4 5
Linux 2.6.9-89.0.25.ELsmp (caine) 06/25/2010
avg-cpu: %user %nice %sys %iowait %idle
1.13 0.00 0.57 58.94 39.36
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 69.40 1.73 4.15 1.66 588.40 27.14 294.20 13.57 105.84 0.04 6.45 2.34 1.36
sda1 0.00 0.00 0.00 0.00 0.02 0.00 0.01 0.00 49.09 0.00 5.87 5.76 0.00
sda2 69.39 1.73 4.15 1.66 588.38 27.14 294.19 13.57 105.85 0.04 6.45 2.34 1.36
sdb 69.38 1.72 4.06 1.79 587.59 28.05 293.80 14.02 105.22 0.04 6.23 2.40 1.41
sdb1 0.00 0.00 0.00 0.00 0.02 0.00 0.01 0.00 51.50 0.00 7.56 7.41 0.00
sdb2 69.38 1.72 4.06 1.79 587.57 28.05 293.78 14.02 105.22 0.04 6.23 2.40 1.41
sdc 0.01 0.00 0.00 0.71 0.03 5.70 0.02 2.85 8.04 0.00 0.33 0.33 0.02
sdc1 0.00 0.00 0.00 0.00 0.02 0.00 0.01 0.00 48.61 0.00 3.24 3.13 0.00
sdc2 0.00 0.00 0.00 0.71 0.00 5.70 0.00 2.85 8.01 0.00 0.33 0.33 0.02
sdd 69.43 1.69 4.01 1.83 587.51 28.18 293.75 14.09 105.32 0.02 3.43 2.00 1.17
sdd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 33.50 0.00 3.89 3.89 0.00
sdd2 69.42 1.69 4.01 1.83 587.50 28.18 293.75 14.09 105.32 0.02 3.43 2.00 1.17
md1 0.00 0.00 0.00 0.00 0.05 0.00 0.03 0.00 20.06 0.00 0.00 0.00 0.00
md2 0.00 0.00 14.46 4.36 1741.63 34.90 870.81 17.45 94.40 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 14.34 4.23 1740.69 33.83 870.34 16.91 95.57 0.18 9.96 1.60 2.97
dm-1 0.00 0.00 0.12 0.13 0.93 1.08 0.47 0.54 8.00 0.00 8.13 0.38 0.01
sde 0.00 6.27 0.00 0.23 0.01 52.02 0.00 26.01 226.12 0.25 1078.53 8.64 0.20
sde1 0.00 6.27 0.00 0.23 0.01 52.02 0.00 26.01 226.12 0.25 1078.57 8.64 0.20
avg-cpu: %user %nice %sys %iowait %idle
39.97 0.00 2.38 20.18 37.47
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 443.11 8.52 34.09 1.50 3817.54 80.20 1908.77 40.10 109.52 0.09 2.63 2.49 8.87
sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda2 443.11 8.52 34.09 1.50 3817.54 80.20 1908.77 40.10 109.52 0.09 2.63 2.49 8.87
sdb 436.09 13.28 31.33 1.50 3739.35 118.30 1869.67 59.15 117.50 0.08 2.53 2.36 7.74
sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb2 436.09 13.28 31.33 1.50 3739.35 118.30 1869.67 59.15 117.50 0.08 2.53 2.36 7.74
sdc 0.00 0.00 0.00 0.50 0.00 4.01 0.00 2.01 8.00 0.00 0.00 0.00 0.00
sdc1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc2 0.00 0.00 0.00 0.50 0.00 4.01 0.00 2.01 8.00 0.00 0.00 0.00 0.00
sdd 480.45 4.76 37.34 1.00 4142.36 46.12 2071.18 23.06 109.23 0.11 2.81 2.64 10.13
sdd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd2 480.45 4.76 37.34 1.00 4142.36 46.12 2071.18 23.06 109.23 0.11 2.81 2.64 10.13
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 96.74 14.54 11582.96 116.29 5791.48 58.15 105.14 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 96.74 14.54 11582.96 116.29 5791.48 58.15 105.14 0.83 7.43 2.10 23.36
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sde 0.00 316.04 0.00 52.38 0.00 2630.58 0.00 1315.29 50.22 21.92 557.71 5.60 29.35
sde1 0.00 316.04 0.00 52.38 0.00 2630.58 0.00 1315.29 50.22 21.92 557.71 5.60 29.35
avg-cpu: %user %nice %sys %iowait %idle
38.52 0.00 2.51 19.95 39.02
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 393.72 10.80 32.16 1.26 3407.04 96.48 1703.52 48.24 104.84 0.09 2.77 2.60 8.69
sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda2 393.72 10.80 32.16 1.26 3407.04 96.48 1703.52 48.24 104.84 0.09 2.77 2.60 8.69
sdb 449.75 10.80 35.68 1.01 3885.43 94.47 1942.71 47.24 108.49 0.11 3.12 3.12 11.43
sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb2 449.75 10.80 35.68 1.01 3885.43 94.47 1942.71 47.24 108.49 0.11 3.12 3.12 11.43
sdc 0.00 0.00 0.00 0.50 0.00 4.02 0.00 2.01 8.00 0.00 0.50 0.50 0.03
sdc1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc2 0.00 0.00 0.00 0.50 0.00 4.02 0.00 2.01 8.00 0.00 0.50 0.50 0.03
sdd 442.96 0.00 34.67 0.75 3823.12 6.03 1911.56 3.02 108.09 0.10 2.82 2.82 9.97
sdd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd2 442.96 0.00 34.67 0.75 3823.12 6.03 1911.56 3.02 108.09 0.10 2.82 2.82 9.97
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 98.24 11.56 11023.12 92.46 5511.56 46.23 101.24 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 98.24 11.56 11023.12 92.46 5511.56 46.23 101.24 0.64 5.84 2.45 26.88
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sde 0.00 1104.52 0.00 41.96 0.00 9171.86 0.00 4585.93 218.59 18.26 435.22 5.81 24.37
sde1 0.00 1104.52 0.00 41.96 0.00 9171.86 0.00 4585.93 218.59 18.26 435.22 5.81 24.37
avg-cpu: %user %nice %sys %iowait %idle
39.60 0.00 2.63 24.31 33.46
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 419.05 19.30 36.09 7.52 3641.10 214.54 1820.55 107.27 88.41 1.52 34.94 3.37 14.71
sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda2 419.05 19.30 36.09 7.52 3641.10 214.54 1820.55 107.27 88.41 1.52 34.94 3.37 14.71
sdb 411.53 20.05 31.83 9.77 3544.86 238.60 1772.43 119.30 90.94 0.98 23.49 3.50 14.56
sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb2 411.53 20.05 31.83 9.77 3544.86 238.60 1772.43 119.30 90.94 0.98 23.49 3.50 14.56
sdc 0.00 0.00 0.00 1.00 0.00 8.02 0.00 4.01 8.00 0.00 0.25 0.25 0.03
sdc1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc2 0.00 0.00 0.00 1.00 0.00 8.02 0.00 4.01 8.00 0.00 0.25 0.25 0.03
sdd 412.28 13.28 33.58 11.03 3564.91 194.49 1782.46 97.24 84.27 0.26 5.79 2.68 11.95
sdd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd2 412.28 13.28 33.58 11.03 3564.91 194.49 1782.46 97.24 84.27 0.26 5.79 2.68 11.95
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 84.96 39.35 10450.13 314.79 5225.06 157.39 86.60 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 84.96 39.35 10450.13 314.79 5225.06 157.39 86.60 5.05 40.59 2.29 28.47
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sde 0.00 1112.03 0.00 41.10 0.00 9225.06 0.00 4612.53 224.44 24.81 603.71 7.39 30.38
sde1 0.00 1112.03 0.00 41.10 0.00 9225.06 0.00 4612.53 224.44 24.81 603.71 7.39 30.38
avg-cpu: %user %nice %sys %iowait %idle
33.46 0.00 2.52 27.92 36.10
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 382.21 12.53 37.84 2.01 3360.40 116.29 1680.20 58.15 87.25 0.13 3.26 3.07 12.23
sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda2 382.21 12.53 37.84 2.01 3360.40 116.29 1680.20 58.15 87.25 0.13 3.26 3.07 12.23
sdb 423.06 0.75 40.60 1.00 3709.27 14.04 1854.64 7.02 89.49 0.12 2.83 2.82 11.73
sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb2 423.06 0.75 40.60 1.00 3709.27 14.04 1854.64 7.02 89.49 0.12 2.83 2.82 11.73
sdc 0.00 0.00 0.00 0.50 0.00 4.01 0.00 2.01 8.00 0.00 0.00 0.00 0.00
sdc1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc2 0.00 0.00 0.00 0.50 0.00 4.01 0.00 2.01 8.00 0.00 0.00 0.00 0.00
sdd 381.20 11.78 35.34 2.01 3332.33 110.28 1666.17 55.14 92.19 0.11 3.00 3.00 11.20
sdd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd2 381.20 11.78 35.34 2.01 3332.33 110.28 1666.17 55.14 92.19 0.11 3.00 3.00 11.20
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 105.51 14.29 10287.72 114.29 5143.86 57.14 86.83 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 105.51 14.29 10287.72 114.29 5143.86 57.14 86.83 0.73 6.07 2.74 32.86
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sde 0.00 982.21 0.00 36.09 0.00 8146.37 0.00 4073.18 225.72 15.02 416.21 6.72 24.26
sde1 0.00 982.21 0.00 36.09 0.00 8146.37 0.00 4073.18 225.72 15.02 416.21 6.72 24.26
Whilst some of the values vary, I did notice that I do have a big bottleneck on iowait. This is confirmed when I run top: the wa value takes whatever of the 100% is not used by the other states, and it was up in the 80s and 90s constantly.
So I did a little bit of googling and found that high iowait is a known problem for people on some kernel versions. Obviously updating the kernel is not really an option, but could anyone offer some advice on how to reduce the wait?
I am "waiting" in anticipation :P
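(A quick way to watch that iowait figure without top is to read it straight from /proc/stat; sampling it twice a few seconds apart and differencing the values gives the iowait share over the interval. A minimal sketch:)

```shell
# Print cumulative iowait jiffies from /proc/stat. The "cpu" summary
# line is: cpu user nice system idle iowait irq softirq ...
# so the iowait counter is the 6th field. Sample twice and compare the
# deltas of iowait vs total to reproduce top's "wa" percentage.
awk '/^cpu /{ print "cumulative iowait ticks:", $6 }' /proc/stat
```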
-
girkers
iostat was not in any of the standard repositories so I found a link on this website: http://www.sme-server.de/download/sme7/contribs/smecontribs/index.html and you want to find: sysstat-5.0.5-25.el4.i386.rpm
You could have just done
yum install --enablerepo=smecontribs sysstat
-
mary,
I actually tried with the base repository, as I found a forum post saying that was where it was, however it did not find it, and it is also not in any of the standard repositories. I perhaps would have tried your command line if it had been documented anywhere, but alas it was not. Thanks for the assist; however, now that I have it installed and have confirmed what the problem is, where to now?
Thanks
-
girkers
The block size should not be an issue. mkfs will auto-determine the best block size for your drive; 4096 sounds correct and was the default value on my 1TB USB drive.
You still have not told us/me what your sustained transfer rate was when copying a large file from your server HDD to your USB drive, e.g. mount the USB drive, copy a 1GB file and note the sustained steady transfer rate that you see in the mc copy window.
Note whether your system is busy or not using say htop or top -i
Ideally you would do this copy test under quiet conditions when the server is idling and doing very little, otherwise other system tasks will distort the result.
This should give you a real world transfer speed that the components are capable of.
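(The same copy test can be done without mc by timing a straight cp and forcing a flush, then dividing size by time. A sketch; SRC and DEST are placeholders, so substitute a real ~1GB file and a path on the mounted USB drive:)

```shell
# Time an end-to-end copy to the backup drive, flushing before the
# clock stops so the page cache cannot flatter the result.
# SRC and DEST are placeholders -- substitute a real large file and
# a path on your mounted USB drive.
SRC=${SRC:-/tmp/bigfile.bin}
DEST=${DEST:-/tmp/copytest.bin}
[ -f "$SRC" ] || dd if=/dev/zero of="$SRC" bs=1M count=64 2>/dev/null
time sh -c "cp '$SRC' '$DEST' && sync"
rm -f "$DEST"                          # clean up the copy
```

Divide the file size by the elapsed "real" time for the sustained MB/s.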
-
A simple suggestion: compare your internal HDD and your external (USB) HDD speeds using hdparm.
Use: hdparm -tT /dev/hda
See below my results and the other server:
my own server, with a Pentium III 1GHz + 500GB IDE:
[root@lobo ~]# hdparm -tT /dev/hda
/dev/hda:
Timing cached reads: 488 MB in 2.04 seconds = 238.67 MB/sec
Timing buffered disk reads: 96 MB in 3.08 seconds = 31.13 MB/sec
another server, a Dell PE1900 with 2x SCSI HDDs:
[root@pantera ~]# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 10020 MB in 2.00 seconds = 5015.78 MB/sec
Timing buffered disk reads: 202 MB in 3.03 seconds = 66.68 MB/sec
Sorry I have no external USB HDD to try.
Good luck
Jáder
-
I did a test as mary suggested: transferring a 1.4GB file, the speed at the end of the transfer was 55MB/s, so I know the external drive is capable of the transfer speed needed. From my testing with iostat I found that the transfer speed starts out fine but slows down due to iowait issues. At least I know it is not the drive speed.
So, does anyone know how I can diagnose the high iowait that brings my transfer rate down to about 1MB/s to my external HDD?
Thanks
-
Why does your backup take so long?
1.) Because your USB drive is slow. The real-world speed is slower than the theoretical speed.
2.) Because your CPU must compress the files and create one archive file.
3.) Because your server does not have multiple CPUs like a professional server (8-32 or more CPUs).
4.) Because you have too little RAM.
My SME takes four hours for 200GB, only copying, no compression. It's an E8400 Core2Duo with 4GB RAM.
If your server must be faster, you need different hardware.
The speed you are getting is absolutely normal.
Igi
-
Igi2003,
Thanks for your thoughts, but unfortunately you are wrong on most counts. The hardware I am backing up on was doing the backup in about 8 hours with a different USB drive; I bought a bigger USB drive and now the backup is not working properly. So:
1.) The drive is not slow, as I showed in the post just prior to yours: 55MB/s transferring a 1.4GB file
2.) I have changed the compression to 0 and it made no difference
3.) It is a dual-core Celeron (yes, not the best), but the backup has worked previously so I don't believe it to be an issue
4.) I have 4GB RAM and my system is stock-standard SME with the Zarafa contrib, 3 users and only 1 heavy mail user.
So, anyone else with some thoughts?
-
If that is so, then you have a problem between your USB controller and your USB HDD and its controller.
Check this with a different USB 2.0 controller card which has a different chipset from your onboard controller.
After the update, is your server running the SMP kernel using both CPUs, or the UP kernel using one CPU?
Igi
PS. My WD500 USB HDD won't work correctly with a VIA USB chipset. With an Intel chipset, no problems.
-
If you look at my post at the top of page 2 you will see that I am running Linux 2.6.9-89.0.25.ELsmp; here are the details of my CPU:
[root@caine ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Celeron(R) CPU E1400 @ 2.00GHz
stepping : 13
cpu MHz : 2000.000
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl est tm2 xtpr
bogomips : 4002.38
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Celeron(R) CPU E1400 @ 2.00GHz
stepping : 13
cpu MHz : 2000.000
cache size : 512 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl est tm2 xtpr
bogomips : 3999.90
I get what you are saying about chipsets and USB, but if there were a compatibility problem, normal transfers would be affected, which they are not. Also, I was previously using Western Digital drives, and that is what I am using again now.