Koozali.org: home of the SME Server
Legacy Forums => Experienced User Forum => Topic started by: dmajwool on April 07, 2004, 05:00:13 PM
-
Hi
I use SME 6.0 to store files for an audio workstation application running on XPpro. Projects contain about 1,500 wav files totalling about 1 GB.
I reported to the audio software manufacturer that the app (SADiE) was slow when creating a batch of waveform files from the audio files. Each waveform file (3KB) is created, but then there is an 8 or 9 second wait until the next one in the batch is created.
They have come back to me quoting this MS article:
Remote Directory Lists Are Slower Than Local Directory Lists
http://support.microsoft.com/default.aspx?scid=kb;en-us;177266&Product=win2000
It describes a registry hack to increase a buffer size in the Windows OS; apparently this gives much faster directory-reading performance.
He suggests that SME performance may be improved by finding a similar, parallel fix.
The SME is already listing large directories to my workstation four times faster than a shared folder on one of my other XP Pro boxes, but is directory listing something that can be optimised further for my application?
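For what it's worth, the closest Samba-side analogues of that Windows buffer tweak that I know of are the "max xmit" and "socket options" settings in smb.conf. The values below are illustrative guesses, not tested recommendations, and on SME the file is generated from templates, so direct edits may be overwritten:

```ini
# smb.conf fragment - possible analogues of the Windows registry hack.
# Values are illustrative only; benchmark before and after changing them.
[global]
    max xmit = 65535
    socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
```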
Many thanks
David
-
have you turned on hdparm?
just a thought
-
have you turned on hdparm?
Not yet. I don't know much about it.
Do you know where to look for suitable hdparm parameters for the 3ware Escalade 8-port SATA RAID(5) card, which is the only storage on this machine?
-
1. Try googling "hdparm" for a text from O'Reilly (I just don't remember the link) - the basic principles are there.
2. try "hdparm -Tt /dev/hd[yourdrive]" that may give you a lot
(plain hdparm /dev/hd[drive] gives current settings)
>hdparm /dev/hda
/dev/hda:
multcount = 16 (on)
I/O support = 1 (32-bit)
unmaskirq = 1 (on)
using_dma = 1 (on)
keepsettings = 0 (off)
nowerr = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 4865/255/63, sectors = 78165360, start = 0
busstate = 1 (on)
>hdparm -tT /dev/hda
/dev/hda:
Timing buffer-cache reads: 128 MB in 0.85 seconds =150.59 MB/sec
Timing buffered disk reads: 64 MB in 1.36 seconds = 47.06 MB/sec
Anything less than about 16 MB/sec on the buffered disk reads or 40 MB/sec on the buffer-cache reads will benefit from hdparm parameter tuning.
3. Search this forum - there was a write to the global db that enables hdparm if need be.
Take care: you may get system hangups if incorrect -X parameters are used - though that is also the setting that makes the most difference.
Experiment a lot, and re-check with hdparm -tT.
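As a hedged sketch, those thresholds can be checked automatically by parsing the hdparm -tT output - the sample text and threshold values below are just the ones quoted above, not authoritative figures:

```shell
#!/bin/sh
# Hypothetical helper: parse `hdparm -tT` output and flag results that
# fall under the rough thresholds above (16 MB/sec buffered disk reads,
# 40 MB/sec buffer-cache reads). The sample output is hard-coded here;
# in real use you would pipe in `hdparm -tT /dev/hda` instead.
sample='/dev/hda:
 Timing buffer-cache reads:   128 MB in  0.85 seconds = 150.59 MB/sec
 Timing buffered disk reads:  64 MB in  1.36 seconds = 47.06 MB/sec'

# Pull the MB/sec figure (next-to-last field) from each timing line
cache=$(printf '%s\n' "$sample" | awk '/buffer-cache/ {print int($(NF-1))}')
disk=$(printf '%s\n' "$sample" | awk '/buffered disk/ {print int($(NF-1))}')

echo "cache=${cache} disk=${disk}"
if [ "$cache" -lt 40 ] || [ "$disk" -lt 16 ]; then
    echo "tuning recommended"
else
    echo "looks OK"
fi
```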
-
Many thanks for your help. I will search for the O'Reilly text.
2. try "hdparm -Tt /dev/hd[yourdrive]" that may give you a lot
(plain hdparm /dev/hd[drive] gives current settings)
>hdparm /dev/sda
/dev/sda:
readonly = 0 (off)
geometry = 212808/255/63, sectors = -876202240, start = 0
>hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 128 MB in 0.39 seconds =328.21 MB/sec
Timing buffered disk reads: 64 MB in 1.18 seconds = 54.24 MB/sec
Anything less than about 16 MB/sec on the buffered disk reads or 40 MB/sec on the buffer-cache reads will benefit from hdparm parameter tuning.
Try a lot and try hdparm -tT
So I guess I'm fast enough without tweaking hdparm? I think the original concern was more to do with listing response time rather than raw transfer speed.
BTW, the Windows hack quoted in my original post has a remarkable effect on Windows XP acting as a server.
Thanks
David
-
IMHO
-c get/set IDE 32-bit IO setting
-d get/set using_dma flag
-m get/set multiple sector count
-X set IDE xfer mode (DANGEROUS - it may corrupt your discs and usually hangs the machine if set wrongly, but the DMA benefit from trying, say, -X64 up to -X68 or so makes it worth the attempt)
If the settings prove good, then:
-k get/set keep_settings_over_reset flag (0/1)
-I lets you know which settings the drive supports (say, DMA up to UDMA6 or whatever)
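Putting those flags together, a typical command line might look like the sketch below. The device name and values are examples only, so test with care:

```shell
# Ask the drive what it supports before changing anything
hdparm -I /dev/hda

# Enable 32-bit I/O, DMA and 16-sector multi-count transfers,
# and keep the settings over reset once they have proven stable
hdparm -c1 -d1 -m16 -k1 /dev/hda
```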
-
Just a quick idea - it looks like the culprit may be SMB itself, and the Samba version SME is using.
Have you tried a different connection protocol (like nfs or atalk?).
I am quite interested - this is what I may be facing in a month.
-
So the drive appears as SCSI.
Now which card are you using?
Are the drives terminated (properly)?
What kind of network are you using 10bt-100bt-1Gbt?
A 10 or even 100 Mbit network is unable to keep up with the drives you are using - I have no data for gigabit, though it should help much more.
(100 Mbit / 10 ≈ 10 MB/sec)
Try increasing the packet size (e.g. jumbo frames with an MTU of 9000 - see ifconfig).
For example, on the same network (I have switches, not hubs) OSX->WinXP gives a full 9.6 MB/sec throughput; XP->XP somehow gets less - around 7.2; and OSX->SME crawls at around 5.6 using atalk, and less using SMB.
Large file transfers also suck a lot.
(This is on 100bt - I am awaiting the final gigabit switch so the local-net backbone will be 1G; I have installed a 1G net card in the SME :lol: so I'm waiting to see how it goes.)
Try copying a large file and then a large directory full of many, many files to see if there is a difference in throughput - in my opinion, SME as a NAS is a no-go so far.
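That comparison can be scripted; here is a rough local sketch (sizes and paths are made up - point DST at a mounted share to actually measure the network):

```shell
#!/bin/sh
# Compare copying one large file vs. many small files of the same
# total size (10 MB here). DST would normally be a mounted share.
SRC=$(mktemp -d); DST=$(mktemp -d)

# One 10 MB file
dd if=/dev/zero of="$SRC/big.bin" bs=1M count=10 2>/dev/null

# 100 files of 100 KB each (same 10 MB total)
mkdir "$SRC/small"
i=0
while [ "$i" -lt 100 ]; do
    dd if=/dev/zero of="$SRC/small/f$i" bs=100K count=1 2>/dev/null
    i=$((i + 1))
done

echo "one large file:";  time cp "$SRC/big.bin" "$DST/"
echo "many small files:"; time cp -r "$SRC/small" "$DST/"

rm -rf "$SRC" "$DST"
```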
-
Switched to a gigabit switched network (the RTL-8169).
Pure transfer is still quite poor (not so good).
hdparm -tT gives 160/43 MB/sec.
Transfer through the net now goes up to 8 MB/sec compared to 4-6 before (this is using SMB - strangely, atalk now peaks at 20 and stays without dropouts at 8-9 compared to 6-7 before, i.e. at least twice the SMB figure).
Does anyone have good experience with a file server that could work for audio/video (i.e. no live usage from/to the server, though minimal transfer times are a requirement)?