Koozali.org: home of the SME Server

Mini-ITX - Network throughput

laurie_lewis

Mini-ITX - Network throughput
« on: August 04, 2005, 01:00:04 PM »
I am trying to do a comparison with other people who are using a VIA mini-ITX motherboard as their server.  I have recently reinstalled 6.01 and I am sure that my network performance has dropped.

System
VIA M1000 motherboard
512 MB RAM
2nd network card fitted (used for internet)
4 IDE Hard drives fitted
hda    System   80 GB   2 MB cache
hdb1   Data    200 GB   2 MB cache (video)
hdc1   Data     80 GB   2 MB cache (music)
hdd1   Data     40 GB

I have 5 computers, a media player and a print server hanging off a 100 Mbit hub (not a switch).  At the moment the network maxes out at around 2.5 to 3 MB/s, with the occasional peak of around 6 MB/s.  At the same time the CPU sits somewhere between 40-80% (I am using the system monitor to get these figures).  To get this I have 3 users watching DVDs off the hard drive at the same time; any more and it starts to break down.

I have only installed DansGuardian, the dungog anti-virus and mail contribs, and finally the Windows update cache.

Can anyone else using this type of motherboard give me an idea of what throughput you are getting, please?

I would be very interested in hearing from anyone who has a dual-processor mini-ITX board running.

Thanks

Laurie

cc_skavenger

Mini-ITX - Network throughput
« Reply #1 on: August 04, 2005, 03:54:26 PM »
Just out of curiosity, what is your hard drive throughput?

laurie_lewis

Mini-ITX - Network throughput
« Reply #2 on: August 05, 2005, 08:06:07 AM »
Not sure how you would measure this.  I am only taking the figures from the system monitor for the network - there is nothing there for hard drives other than space used, etc.

I had thought that the hard drives might be a bit of a bottleneck, but I did not think they would be that restrictive.  If you can tell me how to measure hard drive throughput, I would like to see it for myself.

Laurie

cc_skavenger

Mini-ITX - Network throughput
« Reply #3 on: August 05, 2005, 08:12:14 AM »
Sure. At the command prompt, type:

hdparm -Tt /dev/hdX
where X is a for the primary master, b for the primary slave, c for the secondary master, and d for the secondary slave.

If you installed 2 drives in RAID 1, then you would use the command:

hdparm -Tt /dev/md0
I am thinking that you might need to tweak the hdparm settings to make the controller perform better.
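If you want to hit all four drives in one go, a quick shell loop like this should do it (just a sketch - adjust the letters to the drives you actually have fitted):

for d in a b c d; do
    # -T = cached (memory) reads, -t = buffered disk reads
    hdparm -Tt /dev/hd$d
done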

laurie_lewis

Mini-ITX - Network throughput
« Reply #4 on: August 05, 2005, 08:27:45 AM »
OK, here is what I got:

(80 gig SAMSUNG SV8004H (Capacity: 74.56 GB))
/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.03 seconds =124.27 MB/sec
 Timing buffered disk reads:  64 MB in  2.06 seconds = 31.07 MB/sec
[root@kennel /]# hdparm -Tt /dev/hdb

(200 gig Western Digital WDC WD2000BB-55GUA0 (Capacity: 186.31 GB))
/dev/hdb:
 Timing buffer-cache reads:   128 MB in  0.98 seconds =130.61 MB/sec
 Timing buffered disk reads:  64 MB in 23.50 seconds =  2.72 MB/sec
[root@kennel /]# hdparm -Tt /dev/hdd

(80 gig Seagate ST380011A (Capacity: 74.53 GB))
/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.02 seconds =125.49 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.24 MB/sec

I have removed the 40 gig to make room for another bigger drive.

/dev/hdb jumps out, but I have no idea why.  Would it run better as a primary drive?

Laurie

cc_skavenger

Mini-ITX - Network throughput
« Reply #5 on: August 05, 2005, 08:33:41 AM »
Is hdb where you have the multimedia?

Try entering this at the command line:
hdparm -m16 -c3 -d1 /dev/hdb
and try the command hdparm -Tt /dev/hdb again.
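For reference, roughly what those switches do (per the hdparm man page):

hdparm -m16 -c3 -d1 /dev/hdb
# -m16  transfer 16 sectors per interrupt (multiple sector mode)
# -c3   enable 32-bit I/O support (mode 3 = 32-bit with sync sequence)
# -d1   turn DMA on for the drive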

laurie_lewis

Mini-ITX - Network throughput
« Reply #6 on: August 05, 2005, 08:56:26 AM »
OK


I have video files only on /dev/hdb.  I have music files only on /dev/hdd.

The following is what I got back.

hdparm -m16 -c3 -d1 /dev/hdb

/dev/hdb:
 setting 32-bit I/O support flag to 3
 setting multcount to 16
 setting using_dma to 1 (on)
 multcount    = 16 (on)
 I/O support  =  3 (32-bit w/sync)
 using_dma    =  1 (on)
[root@kennel /]# hdparm -Tt /dev/hdb

/dev/hdb:
 Timing buffer-cache reads:   128 MB in  1.05 seconds =121.90 MB/sec
 Timing buffered disk reads:  64 MB in  2.35 seconds = 27.23 MB/sec


I can see a huge difference in the last test.

Keep going - I am impressed and happy at this point.  I will now see if the network performance improves; I had thought the hard drives might be the problem, as I was copying and reading a common file to all of them.

Ta

Laurie

cc_skavenger

Mini-ITX - Network throughput
« Reply #7 on: August 05, 2005, 04:27:49 PM »
OK, what type of interface is the controller - ATA 33, 66, 100, or 133?  Same for the hard drives - 33, 66, 100, 133?  Are you using all ATA 100/133 cables?

Sorry,
I am kind of using your setup for experimentation.  I am thinking of setting up one of these boards with SME and I am trying to see what it can do.

psoren

Mini-ITX - Network throughput
« Reply #8 on: August 05, 2005, 05:13:04 PM »
Quote from: "cc_skavenger"
OK, what type of interface is the controller - ATA 33, 66, 100, or 133?  Same for the hard drives - 33, 66, 100, 133?  Are you using all ATA 100/133 cables?

Sorry,
I am kind of using your setup for experimentation.  I am thinking of setting up one of these boards with SME and I am trying to see what it can do.


Hi
I'm running my SME on a mini-ITX with two WD 200 GB drives in software RAID 1 on a Promise controller (not hardware RAID).  I will check my drives as soon as I get access to my server, probably after the weekend.  I am restricted by a firewall at the moment, so I can't use SSH.  It will be interesting to compare.
How does yours look, Marco?

Per

cc_skavenger

Mini-ITX - Network throughput
« Reply #9 on: August 05, 2005, 05:44:54 PM »
psoren,
I don't have the hardware yet.  It is just an idea.  

Your computer, on the other hand, is probably working fine.  The Promise IDE controllers are supported pretty well in SME.  I currently use Ultra 100s and 133s in my caching gateway servers and my mail servers.  They worked great out of the box.

cc_skavenger

Mini-ITX - Network throughput
« Reply #10 on: August 05, 2005, 08:02:40 PM »
Duh,
should have posted a link to what we are doing for reference.

FYI,
All of the settings we are going through are documented here:
http://forums.contribs.org/index.php?topic=28038.msg116562#msg116562

A pdf can be downloaded from here:
http://www.ccskavenger.info/SME/Howtos/
and the direct link is here:
http://www.ccskavenger.info/SME/Howtos/hdparm-optimization.pdf

laurie_lewis

Mini-ITX - Network throughput
« Reply #11 on: August 06, 2005, 06:10:18 AM »
I am only using normal IDE cables.  hda (Samsung) and hdb (WD) both show up as 33 drives, and hdd (Seagate) shows up as 100.

I am now using -m16 -c1 -u1 -d1 -X68 on the Western Digital (hdb) and it is giving 128/35 MB/s.  I have not had to touch the Seagate - it is still giving 128/56.

The Samsung (hda) is giving 128/34.  It will not let me turn DMA on - the system hangs.
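A side note: hdparm settings are lost at reboot, so whatever works will need reapplying at boot.  One quick-and-dirty sketch, assuming the box still has a stock /etc/rc.d/rc.local (a custom template or init script would be the cleaner SME way):

# reapply the tuned settings at every boot
echo '/sbin/hdparm -m16 -c1 -u1 -d1 -X68 /dev/hdb' >> /etc/rc.d/rc.local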

The two drives only giving me 34-ish are both on the one channel, whilst the Seagate is by itself on the second IDE channel.  Could this be impacting performance?

I am now getting 6.5 MB/s over the network to a local hard drive, and 3.2 MB/s from network drive to network drive, with the CPU running at about 40%.

Much better than I was getting, but are there any other ideas for improvement?

I was thinking that I should make the Seagate the system drive (hda) due to its better performance.  Since the other drives are mainly only read for data, they should cope with most needs.  Comments?

Ta
Laurie

cc_skavenger

Mini-ITX - Network throughput
« Reply #12 on: August 06, 2005, 07:00:14 AM »
Is this an internet gateway as well?  If so, then yes, I would make the faster drive the system drive.

Hope this has helped the system performance so that more users can share the media experience.

MSmith

Mini-ITX - Network throughput
« Reply #13 on: August 08, 2005, 06:56:08 AM »
Switch to 80-conductor IDE cables, definitely!  Have you checked your BIOS settings to be sure you're getting the maximum UDMA transfer rate?  Have you set master/slave or allowed cable select?  If the latter, are both drives on a given channel set to CS?  Have you disabled any unneeded onboard items such as sound, extra COM ports, etc?  Are you running the latest BIOS?  Does the VIA have onboard video and if so, have you stepped it down to the least possible resource usage, including AGP buffering?  And last but not least, how about swapping out that HUB for a SWITCH?  Here's an 8-port gigabit switch for a paltry $102 USD ...

http://www.directron.com/gs108na.html

And an Intel gigabit card to slap into the Via:

http://www.directron.com/pwla8391gtblk.html

Finally, Darrell May's excellent build of Intel drivers, including gigabit:

http://mirror.contribs.org/smeserver/contribs/dmay/smeserver/6.x/contrib/intel/

Note:  I make no claims regarding the aforementioned drivers actually working with the Intel NIC specified; it may be a later version but it is a PRO/1000 and May's driver supports those.
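On the UDMA question: the mode actually in use is the one marked with a "*" in hdparm's identify output, so a quick check is:

# the starred entry on the 'DMA modes' line is the active transfer mode
hdparm -i /dev/hda | grep -i 'dma modes'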

Jáder

HDPARM results
« Reply #14 on: August 11, 2005, 12:52:41 AM »
I have a Mini ITX EPIA 933 (Falcon CR51) with 512MB RAM and 200GB HDD.
Here are my hdparm results:

[root@lobo root]# hdparm -T /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  2.01 seconds = 63.68 MB/sec
[root@lobo root]# hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:  64 MB in  1.62 seconds = 39.51 MB/sec


[root@lobo root]# hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 I/O support  =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 24321/255/63, sectors = 390721968, start = 0
 busstate     =  1 (on)
[root@lobo root]# hdparm -i /dev/hda

/dev/hda:

 Model=ST3200822A, FwRev=3.01, SerialNo=5LJ0NWCQ
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=8192kB, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
 IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
 AdvancedPM=no WriteCache=enabled
 Drive Supports : mediumATA-1 ATA-2 ATA-3 ATA-4 ATA-5 ATA-6

JonB

Mini-ITX - Network throughput
« Reply #15 on: August 11, 2005, 03:35:05 AM »
I have used SME on VIA EPIAs in Sereniti cases several times.  In both cases I have since replaced them with different motherboards and cases.

I found that the power supplies in the Sereniti cases were not up to the job when running 2 hard drives.  While it still works, the first thing that gets affected when the PSU starts sagging is the onboard VIA Rhine network card.  It either stops working or slows down.

Hanging a 350-400 W ATX power supply outside the case improves things no end.

The same thing happens if you try to run 2 HDs off a mini-ITX case with a 12 V ATX converter.  These are 70 W at best and eventually fail if you try to drive too much power through them.

JonB

hanscees

Mini-ITX - Network throughput
« Reply #16 on: August 11, 2005, 10:50:53 PM »
Hmm, I have only one disk in a Sereniti case.  The onboard VIA NIC is very slow, though.  I thought that would be because of a driver problem; I posted a topic in the experienced forum about that.

I get no faster throughput than 70 kB/s to the server itself by FTP.  However, to the internet I get speeds up to 200 kB/s.  Weird - I do not understand it at all.

[root@idsnew root]# hdparm -Tt /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  2.14 seconds = 59.81 MB/sec
 Timing buffered disk reads:  64 MB in 17.28 seconds =  3.70 MB/sec


What module do you use for the VIA NIC?
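If you are not sure which module you have loaded yourself, something like this should show it:

# which driver claimed the interface, according to the kernel log
dmesg | grep -i eth0
# check whether the via-rhine module is loaded
lsmod | grep -i rhine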

greetings

Hans-Cees

Jáder

Mini ITX - HDPARM
« Reply #17 on: August 13, 2005, 03:12:15 PM »
About:
Quote

[root@idsnew root]# hdparm -Tt /dev/hda
/dev/hda:
Timing buffer-cache reads: 128 MB in 2.14 seconds =  59.81 MB/sec
Timing buffered disk reads: 64 MB in 17.28 seconds = 3.70 MB/sec


I think you should try: hdparm -d1 /dev/hda
and retest! You'll be impressed.
If you like, I could help you - just post the results of:
hdparm -i /dev/hda
hdparm    /dev/hda

laurie_lewis

Mini-ITX - Network throughput
« Reply #18 on: August 13, 2005, 03:29:48 PM »
hanscees, try the settings that cc_skavenger gave earlier in the thread.  I got a lot better performance when I played with them.  I also found that when I paired up my Seagate and Western Digital drives I got better performance than when one was paired with a Samsung drive.  Don't ask me why.  I am getting around 55/56 MB/s from the Seagate and Western Digital using an ATA 100 cable.

After reading the comments about power and the onboard network adapter, I did a bit of an experiment.  With the VIA Rhine adapter I was getting 6.9 to 7 MB/s throughput.  When I swapped across to a Micronix-based network card, I only got 6.4 to 6.5.

I am running my VIA motherboard off a 250 W ATX power supply and not having any problems.  It has three hard drives in it and is pulling a whopping 47 watts.  It does get up to 75 W during boot but then settles down.  If I have a monitor attached when it boots, it jumps to 230 watts - obviously running the monitor off the same power supply.  You can now get 200 W DC power supplies that fit these motherboards if you are worried about being underpowered.

Laurie

hanscees

Re: Mini ITX - HDPARM
« Reply #19 on: August 14, 2005, 05:06:42 PM »
Quote from: "jader"
About:
Quote

[root@idsnew root]# hdparm -Tt /dev/hda
/dev/hda:
Timing buffer-cache reads: 128 MB in 2.14 seconds =  59.81 MB/sec
Timing buffered disk reads: 64 MB in 17.28 seconds = 3.70 MB/sec


I think you should try: hdparm -d1 /dev/hda
and retest! You'll be impressed.
If you like, I could help you - just post the results of:
hdparm -i /dev/hda
hdparm    /dev/hda


I have no luck here:


/dev/hda:
 Timing buffer-cache reads:   128 MB in  2.24 seconds = 57.14 MB/sec
 Timing buffered disk reads:  64 MB in 16.36 seconds =  3.91 MB/sec
[root@idsnew root]#  hdparm -d1 /dev/hda

/dev/hda:
 setting using_dma to 1 (on)
 using_dma    =  1 (on)
[root@idsnew root]# hdparm -Tt /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  2.10 seconds = 60.95 MB/sec
 Timing buffered disk reads:  64 MB in 16.61 seconds =  3.85 MB/sec
[root@idsnew root]#

So please help:-)
[root@idsnew root]# hdparm -i /dev/hda

/dev/hda:

 Model=WDC WD400LB-00DNA0, FwRev=77.07W77, SerialNo=WD-WMAH81135080
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=57600, SectSize=600, ECCbytes=74
 BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=16
 CurCHS=65535/1/63, CurSects=4128705, LBA=yes, LBAsects=78165360
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
 AdvancedPM=no WriteCache=enabled
 Drive Supports : Reserved : ATA-1 ATA-2 ATA-3 ATA-4 ATA-5 ATA-6

[root@idsnew root]# hdparm    /dev/hda

/dev/hda:
 multcount    = 16 (on)
 I/O support  =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 4865/255/63, sectors = 78165360, start = 0
 busstate     =  1 (on)

Jáder

Mini-ITX - Network throughput
« Reply #20 on: August 14, 2005, 09:29:40 PM »
Hum... this is VERY strange:
Quote

 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5

Note the "*" on udma5... so your HDD is capable of that!  And with UDMA 5 you should get at least 20 MB/s.

Are you sure you're using an 80-way (80-conductor) HDD data cable on your Western Digital 40 GB HDD?
Is this the only device on the cable?
In the BIOS:
Is UltraDMA (or something like that) enabled?
Is IDE Prefetch mode enabled?

hanscees

Mini-ITX - Network throughput
« Reply #21 on: August 15, 2005, 09:51:12 PM »
Quote from: "jader"
Hum... this is VERY strange:
Quote

 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5

Note the "*" on udma5... so your HDD is capable of that!  And with UDMA 5 you should get at least 20 MB/s.

Are you sure you're using an 80-way (80-conductor) HDD data cable on your Western Digital 40 GB HDD?
Is this the only device on the cable?
In the BIOS:
Is UltraDMA (or something like that) enabled?
Is IDE Prefetch mode enabled?


I am afraid I checked everything.  I am using another IDE cable now, the stiff kind for ATA 133.

I changed the disk to be on the end of the IDE cable (it was halfway along); no difference, though.

I checked the BIOS, and DMA is on, and prefetching also (or rather write-behind caching).

So I am out of options here.

Is it possible that a heavier power supply might help?  I can try that as a test.  Will a normal PSU fit the mini-ITX, or is that not likely?

greetings

Hans-Cees

Jáder

HDD SLOW
« Reply #22 on: August 15, 2005, 11:55:37 PM »
Hi

I'm running out of options here too! :)
Let's start from the basics (it's boring... but just in case)!

1) Is the blue end of the connector on the motherboard?
2) Have you double-checked that DMA is ON in the BIOS and in Linux?
3) Is there any other device on this cable?

I don't think the PSU has anything to do with your problem.  I have a mini-ITX 933 MHz running in a 677R case with a 55 W max PSU!!!

I only ever saw problems, when all of these were OK, on a VERY OLD K6 motherboard... it needed a BIOS upgrade.

Good luck!

Jáder
...

arne

Mini-ITX - Network throughput
« Reply #23 on: August 16, 2005, 03:53:14 AM »
I have to admit that I have never tried an ITX motherboard (but I would like to one day), and I have not set up an SME server as a gateway for years.  On the other hand, I use the SME server as a server, and I have set up some Linux gateways using distributions other than SME.

In general, I think there are two completely different ways of doing the firewall and gateway function when it comes to speed and processor load.

If you do it the "kernel firewalling way", the load on the hardware will be very small.  There will generally be no load or traffic to the hard disk at all.  As a general principle, "all" functions are done via internal data transport in the Linux kernel and via data stored in RAM.  Using this principle it will not matter what hard disk you are using, because the hard disks take no part in the traffic load.

Then there is the other way of doing "partly the same job", the "web caching way" via Squid.  If you choose this solution, the load on the hardware will increase, I think, quite dramatically.  You will be dependent on relatively high processor speed and fast hard disks.

Even though I have not tested any ITX motherboards, I think they should have quite enough resources to do the job "the kernel firewalling way".  On the other hand, I would not expect the ITX boards to be strong enough to carry the load of a web cache with some traffic; I would expect any hardware with "not so high performance" to slow down the traffic if used as a web cache.

If you set up a gateway "manually" using CentOS or something like that, it is easy to know how it works, because it is just manual configuration.  I don't know how the SME server handles this by default; if you can, for instance, turn a web caching function (Squid) on or off, or connect it in or out, you can also choose the amount of processor/hardware load and the effective speed.

I can see that the question mentions the use of some content filtering functions, which can only be done using a web proxy.  I would not expect the ITX board to be strong enough to handle that.

On the other hand, if it were used as a netfilter gateway (Linux kernel) with some server functions like those contained in the SME server, and no "active use" of the web proxy function (Squid), I would expect a PC with the performance of an ITX to do the job quite well.

I have tried various "low performance" PCs as gateways.  I think that even a Pentium 166 can do the kernel firewalling thing, but to make a speedy web cache that really does the job, you will need some hardware power.

Don't know if you can use any of this - just some ideas...

Best reg Arne

arne

Mini-ITX - Network throughput
« Reply #24 on: August 16, 2005, 04:00:22 AM »
.. Or to say it in fewer words ..

If the web proxy function (and with it the content filtering) could be turned off, so that the traffic passes through the SME gateway as an ordinary NAT router, the load should go down and the speed should increase.
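I have not tried this on an SME box myself, but assuming Squid is registered as a normal service in SME's configuration database, the usual pattern would be something like this (treat it as a sketch, not a verified recipe):

# mark the squid service disabled in the configuration database, then apply
config setprop squid status disabled
signal-event post-upgrade
signal-event reboot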

Arne

hanscees

Mini-ITX - Network throughput
« Reply #25 on: August 16, 2005, 09:57:53 AM »
Quote from: "arne"
.. Or to say it in fewer words ..

If the web proxy function (and with it the content filtering) could be turned off, so that the traffic passes through the SME gateway as an ordinary NAT router, the load should go down and the speed should increase.

Arne


I do not use Squid, and I do not have problems with internet speed.

I have two problems:
- the internal VIA NIC interface is slow and drops packets when copying large files to disk.
- the internal hard disk is slow.

I have double-checked the DMA settings in the BIOS and the OS.  The cable is OK, with the blue connector in the motherboard.

I will try to do some more tests:
- FTP to the machine with "put file /dev/null" to see if the NIC itself is the problem.
- do some dd tests on the hard disk (see the sketch below).
- check the IRQs in dmesg.
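For the dd test, a minimal sketch (a raw sequential read straight off the disk, bypassing the filesystem - read-only, but run it as root, and adjust hda to the disk under test):

# read 256 MB from the start of the disk and time it; MB/s = 256 / elapsed seconds
time dd if=/dev/hda of=/dev/null bs=1024k count=256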

If nothing is found I will try another harddisk and another psu.

I will be back:-(

Hans-Cees

laurie_lewis

Mini-ITX - Network throughput
« Reply #26 on: August 16, 2005, 11:22:32 AM »
Hans-Cees,

I am using DansGuardian on my server, with ClamAV supposedly scanning everything that moves around.

The figures I quoted for transfers are while DansGuardian is running.

The hdparm figures should not change no matter what you are running, and I would suggest they may be a key issue in your network performance.  As I said earlier, why not drop another hard drive in and see how it performs?  This will not require a new install - just put it on the other channel and do the tests for /dev/hdc.  I watched the system monitor on mine to check the CPU usage during the network traffic, just to make sure the CPU was not running at 100% and causing the problem.
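If you would rather watch it from the console than a GUI, vmstat paints much the same picture (one sample per second; watch the cpu and io columns while a transfer is running, Ctrl-C to stop):

vmstat 1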

You have not indicated in any of the previous posts (from memory) which ITX motherboard/processor you are using.  That might help others.

Laurie

hanscees

Mini-ITX - Network throughput
« Reply #27 on: August 21, 2005, 12:31:45 AM »
Hi, it took a while to do all kinds of tests.

First of all, I have a VIA EPIA-V mainboard:
http://www.viavpsd.com/product/Download.jsp?motherboardId=141

The problems I have are:
- the hard disk is very slow in hdparm -Tt
- the via-rhine NIC is also very slow

I did some more testing and this is the result:

The via-rhine NIC is only slow from the local network to the server (up to 30 kB/s).  From the server (and thus the internet) to the local network it runs at 6 MB/s.  Still slow for a 100 Mbit full-duplex card.

This problem "tastes" very much like a half-duplex/full-duplex problem, of the kind I have seen many times on networks.  But all the settings here are fine.  I did quite some testing and am convinced the via-rhine network driver is faulty with my NIC.
I installed an SME 7 alpha 26 and the same problem was present there.
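For anyone wanting to rule duplex in or out, assuming ethtool is on the box (mii-tool does a similar job on older setups):

# check what speed/duplex the link actually negotiated
ethtool eth0
# as a test, force 100 Mbit full duplex with autonegotiation off
ethtool -s eth0 speed 100 duplex full autoneg off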

The hard disk speed remains a mystery.  In hdparm -Tt it is very slow.
I installed another hard disk with SME 7 alpha 26 (kernel 2.6) and it was identical.  I got readings like:

Timing buffer-cache reads: 128 MB in 2.14 seconds = 90.xx MB/sec
Timing buffered disk reads: 64 MB in 17.28 seconds = 3.70 MB/sec

So the timing buffered disk reads were still extremely slow.

So I think I should consider buying another motherboard?

Hans-Cees

laurie_lewis

Mini-ITX - Network throughput
« Reply #28 on: August 22, 2005, 07:46:58 AM »
Hi Hans-Cees,

Out of curiosity I put an old hard drive (IBM 4 GB) onto a mini-ITX V-series motherboard to see how it performed.

I got 7.3 MB/s - the system running SME 7 beta 1.  Not startling, but still better than yours.

The other stats I gave were off an EPIA M-series board.

How are you finding 7 beta?  I am about to change my server over to it tonight.

Laurie

hanscees

Mini-ITX - Network throughput
« Reply #29 on: August 27, 2005, 12:16:55 AM »
Quote from: "laurie_lewis"
Hi Hans-Cees,

Out of curiosity I put an old hard drive (IBM 4 GB) onto a mini-ITX V-series motherboard to see how it performed.

I got 7.3 MB/s - the system running SME 7 beta 1.  Not startling, but still better than yours.

The other stats I gave were off an EPIA M-series board.

How are you finding 7 beta?  I am about to change my server over to it tonight.

Laurie


Hi Laurie,

I am thinking of upgrading to beta seven, but will probably wait a little bit longer - although I am downloading beta 7/2 now.

As for the hardware, I can send the board in for repair, but apparently that will take six weeks.

I hope to get some more information from people here about whether they really have no problems with the VIA NICs, even when they look closely (is it performing well both ways?).  I am a bit afraid of buying a new board only to find the same problems.

Hans-Cees

gmphoto

Mini-ITX - Network throughput
« Reply #30 on: August 27, 2005, 12:29:44 PM »
I have a VIA V8000 with SME 6.0.1-01 installed as server-only.  It has 2 x 20 GB laptop hard drives in software RAID 1, connected to a Promise IDE controller (from memory) in the only available PCI slot.

This is the result of hdparm -Tt /dev/md0:

/dev/md0:
 Timing buffer-cache reads:   128 MB in  2.02 seconds = 63.37 MB/sec
 Timing buffered disk reads:  64 MB in  2.57 seconds = 24.90 MB/sec

Hope this is of use.

laurie_lewis

Mini-ITX - Network throughput
« Reply #31 on: August 27, 2005, 01:41:01 PM »
Hi Hans-Cees,

I installed 7 beta and it seemed to work fine.  I did not see any change in network performance, etc.  I played with it for a couple of hours, and after not being able to get some of the contribs I wanted working, I decided to go back to 6.01 for a while yet.

I wouldn't be put off the ITX motherboards.  As a home server they work great and pull so little power that you hardly know they are working.  I have read a little about the dual-processor board that they are about to, or have recently, put out - some stats on how one of those works with SME would be great to see.

Laurie

CharlieBrady

Mini-ITX - Network throughput
« Reply #32 on: August 28, 2005, 07:57:38 PM »
Quote from: "laurie_lewis"

I installed 7 beta and it seemed to work fine.  I did not see any change in network performance, etc.  I played with it for a couple of hours, and after not being able to get some of the contribs I wanted working, I decided to go back to 6.01 for a while yet.


Please add your comments to the wiki about which contribs didn't work (and how/why they didn't work).

Thanks.

hanscees

Mini-ITX - Network throughput
« Reply #33 on: August 29, 2005, 08:35:06 PM »
Quote from: "CharlieBrady"
Quote from: "laurie_lewis"

I installed 7 beta and it seemed to work fine.  I did not see any change in network performance, etc.  I played with it for a couple of hours, and after not being able to get some of the contribs I wanted working, I decided to go back to 6.01 for a while yet.


Please add your comments to the wiki about which contribs didn't work (and how/why they didn't work).

Thanks.


Where?  I find the wiki a bit hard to work with.

Found it - I made a topic about it in the experienced forum.


Hc

gmphoto

Slow network and hard drive speed - EPIA V8000
« Reply #34 on: November 01, 2005, 07:11:57 AM »
Quote from: "gmphoto"
I have a VIA V8000 with SME 6.0.1-01 installed as server-only.  It has 2 x 20 GB laptop hard drives in software RAID 1, connected to a Promise IDE controller (from memory) in the only available PCI slot.

This is the result of hdparm -Tt /dev/md0:

/dev/md0:
 Timing buffer-cache reads:   128 MB in  2.02 seconds = 63.37 MB/sec
 Timing buffered disk reads:  64 MB in  2.57 seconds = 24.90 MB/sec

Hope this is of use.


I know it's a bit dumb quoting your own post, but I thought I would follow this topic up, as I had a problem exactly the same as hanscees's and similar to laurie_lewis's.

I think I have found a solution to the slow network and hard drive problem when an extra network card is installed. The problem and solution are covered here http://forums.viaarena.com/messageview.aspx?catid=32&threadid=52574&highlight_key=y&keyword1=v8000 and http://forums.viaarena.com/messageview.aspx?catid=32&threadid=44698&STARTPAGE=2&FTVAR_FORUMVIEWTMP=Linear

This came about because I was running my VIA V8000 as server-only, reinstalled SME 6.01 in server/gateway mode, and had to install another network card.  The performance of the network and of the hard drive went south, so I did a little digging and found that others had had the same issue.

I installed the BIOS update as per the quoted post and everything seems a lot snappier.  My hard drive figures went from woeful to acceptable:

hdparm -Tt /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.99 seconds = 64.32 MB/sec
 Timing buffered disk reads:  64 MB in  2.68 seconds = 23.88 MB/sec

and the network seems to be running at around 25-30 Mbit/s (which is a lot better than the 15 to 30 kB/s it was running at).

ethtool reveals

ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Current message level: 0x00000001 (1)
        Link detected: yes

which is as expected.

I have just transferred around 10 GB of data to this box without drama at the above-mentioned speeds.

Gary