Koozali.org: home of the SME Server

Only 7Mbyte transfer per second

jvdbossc

Only 7Mbyte transfer per second
« on: April 23, 2004, 04:11:00 PM »
Has anyone any idea why I only get 7 MByte per second transfer? (FTP on the local network)

I use a switch, and my Windows XP network card (a recent Intel card) is running in full duplex mode.

How can I check/force the Linux side to use full duplex? (I'm not sure what mode it is in.)

The SME Server has a 3c905c.
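One way to check and force duplex on the Linux side is mii-tool. A minimal sketch, assuming the card shows up as eth0 (the interface name is an assumption):

mii-tool eth0                    # show the current negotiated link mode
mii-tool -F 100baseTx-FD eth0    # force 100 Mbit full duplex (use with care)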

Anybody have any suggestions?

(The cables to the switch have the correct color code.)

jvdbossc

forgot something
« Reply #1 on: April 23, 2004, 04:11:50 PM »
The network cards are 100 Mbit cards...

hardijs
ftp is slow
« Reply #2 on: April 23, 2004, 04:19:26 PM »
I've read tests where FTP download was almost two times slower than HTTP.
As far as I remember, it was something about the protocols' ability to transfer data concurrently, etc.

How did you test (i.e. software and transfer file size)?
What does hdparm -tT /dev/[discname] return?

jvdbossc

Reply Part 1
« Reply #3 on: April 23, 2004, 05:01:45 PM »
The hdparm output will have to wait a bit (until I get home).

It might be a good suggestion, I think. (I noticed that files from my compressed NTFS volume also only reach 6 MByte per second to the SME Server, with the same file.)

The kernel says something about 2 x 4 MB cache on the drives, but it should be 2 x 8 MB. (Software RAID 1 in the kernel.)
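A quick way to double-check what the drives themselves report is hdparm -i, which prints the drive's identify data, including the cache size (BuffSize). A sketch, assuming the first disk is /dev/hda:

hdparm -i /dev/hda | grep -i buff    # BuffSize should read 8192kB for an 8 MB cache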

The test file was a movie of about 670 MB.

I used FileZilla. It's an open-source FTP program (and more).

Explorer and SMB are a big joke when it comes to transferring large amounts of data. :) Not the speed, but the reliability.

I want to be able to go out without worrying that the transfer will stop...

(I got all the data off the disks with robocopy.exe from the Windows Resource Kit; very good.)

I did not try other protocols, because my idea was that FTP would be the quickest way to transfer 80 GB back to my server.

http://www.snapfiles.com/get/filezilla.html

(At first I was thinking of using a second card and splitting transfers over two networks, but nobody had an idea of how to realize this.)
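For what it's worth, stock Linux kernels can team two cards with the bonding driver. A rough generic sketch, not SME-specific (SME Server's template system may overwrite manual network config, and the interface names and address here are assumptions):

modprobe bonding                                      # load the bonding driver
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up  # bring up the bonded interface
ifenslave bond0 eth0 eth1                             # attach both physical cards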
 
Thanks for the reply, will try ASAP.

jvdbossc

hdparm
« Reply #4 on: April 23, 2004, 06:16:33 PM »
This is the output of hdparm...

login as: root
root@wingedraid's password:
Last login: Thu Apr 22 18:52:58 2004 from pc-00243.wingedraid.dyndns.org
Welcome to the Mitel Networks SME Server.
[root@wingedraid root]# hdparm -tT /dev/hda0
/dev/hda0: No such file or directory
[root@wingedraid root]# hdparm -tT /dev/hda1

/dev/hda1:
 Timing buffer-cache reads:   128 MB in  1.56 seconds = 82.05 MB/sec
 Timing buffered disk reads:  64 MB in  1.53 seconds = 41.83 MB/sec
Hmm.. suspicious results: probably not enough free memory for a proper test.
[root@wingedraid root]# hdparm -tT /dev/hda2

/dev/hda2:
 Timing buffer-cache reads:   128 MB in  1.55 seconds = 82.58 MB/sec
 Timing buffered disk reads:  64 MB in  1.53 seconds = 41.83 MB/sec
Hmm.. suspicious results: probably not enough free memory for a proper test.
[root@wingedraid root]# hdparm -tT /dev/md0

/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.53 seconds = 83.66 MB/sec
 Timing buffered disk reads:  64 MB in  1.76 seconds = 36.36 MB/sec
[root@wingedraid root]# hdparm -tT /dev/md1

/dev/md1:
 Timing buffer-cache reads:   128 MB in  1.55 seconds = 82.58 MB/sec
 Timing buffered disk reads:  64 MB in  1.30 seconds = 49.23 MB/sec
Hmm.. suspicious results: probably not enough free memory for a proper test.
[root@wingedraid root]# hdparm -tT /dev/md2

/dev/md2:
 Timing buffer-cache reads:   128 MB in  1.52 seconds = 84.21 MB/sec
 Timing buffered disk reads:  64 MB in  2.14 seconds = 29.91 MB/sec
[root@wingedraid root]#

Boris
Only 7Mbyte transfer per second
« Reply #5 on: April 23, 2004, 09:02:05 PM »
7 MByte x 8 = 56 Mbit/s: not the best, but not outrageous either.
That is about half of the total bandwidth (100 Mbit/s) available. The best I've seen is about 80%.
Server efficiency depends not only on the NIC and drive speed, but also on the amount of RAM available for caching, for example. Also try opening a few simultaneous sessions and calculating the total network speed.
...
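A crude way to test simultaneous sessions: start two downloads of the same file at once and add up the average rates each one reports. A sketch only; the server address and file name are made up:

wget ftp://192.168.1.1/bigfile.avi -O /dev/null &    # first session
wget ftp://192.168.1.1/bigfile.avi -O /dev/null &    # second session, in parallel
wait                                                 # each wget prints its average rate when done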

jvdbossc

Found some solution
« Reply #6 on: April 24, 2004, 12:54:17 PM »
Indeed, I should test it with more than one connection.

I found that results vary... Even when doing the same tests, with the same configuration, with the same file, the speed is never exactly the same!

The machine I was running it on was my second PC for years. ;-)

It contained a Celeron 333 that was a gift, together with a newer motherboard. (I used it for my studies.)

It had been running Red Hat 6, 7, and 8 for years as an SMB server, with the Celeron at a higher clock speed.

Original (when posting): 333 MHz (66 MHz bus)
Overclocked (config before SME Server): 500 MHz (100 MHz bus)

Because of the RAID 1 and the new disks, I didn't want to risk the disks or processor getting hot.

When the processor runs at the higher clock speed, I get at least 8.6 MByte/s, and uploads start at higher speeds...

I have tested this at least five times now :) and every time the processor runs at the higher bus speed, the increase happens...

I am going to look for a faster processor. I never thought that a Celeron would not be fast enough for a single-user file server, and would slow down Ethernet...

hardijs
hdparm & memory
« Reply #7 on: April 25, 2004, 08:01:27 PM »
I think this time hdparm may indicate something that has to be taken care of:
>Hmm.. suspicious results: probably not enough free memory for a proper test.

This usually is the case when the system really is RAM starved.
Just for testing, try plugging an additional 128 MB of RAM in there and see if anything changes.
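Before buying RAM, a quick look at what is actually free versus what is tied up in cache can help. free is standard procps; the "-/+ buffers/cache" row is the one to watch:

free -m    # the -/+ buffers/cache row shows memory really available to programs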

Though I really doubt that you will get much better speeds.

Also, software RAID 1 will take its "time", so the throughput will be less than in a single-drive system.

hardijs
also....
« Reply #8 on: April 25, 2004, 08:08:33 PM »
See http://forums.contribs.org/index.php?topic=22031.0
The test there is not "scientific" (i.e. a stopwatch for a large file), though it gives some clue.
The results max out around where hdparm's unbuffered transfer goes, so a SCSI UW disk may give faster throughput and software RAID 1 will give less.
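The same stopwatch idea can be run on the server itself. A minimal sketch, assuming a test file bigger than RAM so the cache can't hide the disk (the path is made up):

time dd if=/home/e-smith/files/bigfile.avi of=/dev/null bs=1024k
# read test: MB/s = file size in MB divided by the "real" seconds reported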

hardijs
stopwatch
« Reply #9 on: April 25, 2004, 08:30:26 PM »
The same file in all tests: 1.58 GB (where GB = 1024 MB, etc.)
atalk: 2:46 (9.74 MB/s)
samba (aka Windows networking): 6:26 (4.19 MB/s)
ftp: 2:55 (9.26 MB/s)

And that's on a full-duplex Gigabit net.
Also, this was from Mac OS X (a G5, plentyOram™, etc., so this may be why atalk was skewed... though at the same time I have to say that OS X -> WinXP SMB flies...)
And one last thing: these are write tests.

jvdbossc

smb speed
« Reply #10 on: April 28, 2004, 07:54:35 PM »
I did a test today transferring one large file over SMB; it did not complete because I lost patience.

Transferring 19 files with a total of 1.85 MB took me 15 seconds.

I know that is not good.

I am going to see if Windows XP is to blame, and restore an early clean-install image of my workstation.

jvdbossc

better results
« Reply #11 on: April 28, 2004, 09:59:33 PM »
I re-installed a clean image (with all Windows services disabled) and SMB now gives a transfer rate of 7.9 MByte/s, calculated with a 720 MB file...

hardijs
looks like a memory issue to me
« Reply #12 on: April 29, 2004, 09:07:36 AM »
Just try adding a substantial amount of memory; 128 or 256 MB added would show something.