Koozali.org: home of the SME Server
Obsolete Releases => SME Server 9.x => Topic started by: Bud on March 15, 2018, 11:14:07 AM
-
Guys, please can you help.
I have an SME 9.2 server with 1 x Gigabit NIC.
My questions are:
1. If I add an additional 1 x Gigabit NIC to the server, will this help with faster mail/file transfer/access?
2. How do I bridge the two NICs to support this?
3. Any other suggestions?
The main reason is that I have to add more users for remote mail access and VPN access.
My local users are reporting that the server is slow at times.
The server currently supports over 100 users.
Thank you for your support :-)
-
It would probably be better to describe the current issue you are experiencing :)
-
You need to identify the bottleneck first.
-
NIC bonding (this is what you want to do) allows you to bond two identical Ethernet cards on the LAN side.
This could improve local transfer speed, but only if the network card is the bottleneck.
Other bottlenecks to identify outside the server:
- poor cabling (including defective cables, poor-quality cables, use of unshielded UTP cable...)
- a 10/100 Mbit switch
- poor configuration of a managed switch
- a defective switch
- a badly designed network architecture, with multiple switches and the load not optimized
Bottlenecks on the server:
- slow disks (usually we look for 7200 or 10000 RPM or better; 5400 RPM will slow your access)
- saturated or insufficient memory
- saturated CPU utilization
As you want to add users with VPN (and assuming you run in server-gateway mode), this will not affect the transfer on your LAN card, so whether you bond a second card or not, adding those users will not change the current situation for your local users.
If your issue is load on memory, hard drives or CPU, then yes, the new users/load will make the situation worse, and the extra NIC will not improve it.
In the same way, if your issue is related to the cabling/switch side, adding a new NIC will do nothing to improve it.
There might be some other element I have forgotten, but to repeat the two previous comments: you will have to investigate first!
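To make "investigate first" concrete, a quick first pass from the command line could look something like this (a rough sketch, assuming a stock SME 9 / CentOS 6 box with the sysstat package installed, and substituting your own LAN interface name for eth0):
uptime            # load average compared to the number of CPU cores
free -m           # memory and swap usage; heavy swapping points to a RAM bottleneck
top               # which processes use the CPU, and watch the %wa (I/O wait) figure
iostat -x 2 5     # per-disk utilisation and wait times (from the sysstat package)
sar -n DEV 2 5    # per-interface throughput; sustained traffic near gigabit wire speed (~118 MB/s) means the LAN link itself is saturated
ethtool eth0      # check the NIC really negotiated 1000 Mb/s full duplex
If none of these show pressure on CPU, RAM, disks or the NIC, look at the switches and cabling next.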
-
Even if your system is network-bound, link bonding is pretty hack-ish at best. Fortunately, 10G gear has been on the market long enough that it's getting fairly reasonably priced. You might want to consider upgrading your SME box to a 10G NIC (a Chelsio T420 used on eBay isn't too expensive, or an older S310 is really cheap) and putting a 10G switch into your infrastructure instead.
-
putting a 10G switch into your infrastructure instead.
That is the expensive part, about $1-4k.
Not to mention you will need Category 6A (Augmented Class E) cabling.
-
That is the expensive part, about $1-4k.
Not at all--I just bought a Dell X1052 switch (48xGbE, 4xSFP+) for under $500. The last 10G switch I bought before that was a Dell PowerConnect 5524; that one was about $200 (admittedly used). My assumption is that no single client machine needs more than 1 Gb/sec, but that a number of clients, cumulatively, could easily exceed that.
And there's no need to change the entire cable infrastructure for a single host--one cable from the host to the switch is all you need.
-
Not at all--I just bought a Dell X1052 switch (48xGbE, 4xSFP+) for under $500. The last 10G switch I bought before that was a Dell PowerConnect 5524; that one was about $200 (admittedly used). My assumption is that no single client machine needs more than 1 Gb/sec, but that a number of clients, cumulatively, could easily exceed that.
And there's no need to change the entire cable infrastructure for a single host--one cable from the host to the switch is all you need.
Well, indeed, I was thinking of a full 10 GbE setup. 48 x 10/100/1000 + 4 x 10 Gigabit SFP+ is more than enough,
even if you have a distributed infrastructure with, let's say, one switch per floor. Only in that case do you still have to plan for 10Gb-compatible cabling between the switches.
-
Still need to identify the bottleneck.
What sort of server hardware are we actually talking about?
-
Bud
2. How do I bridge the two NICs to support this?
No one answered this exactly.
NIC bonding is supported by SME server ONLY in "server only" mode.
If running in gateway server mode then this functionality is not supported.
Both NICs must be exactly identical, & they must be a brand/model that supports NIC bonding.
Also AFAIR your switch must support NIC bonding.
To enable NIC bonding run the admin console & select Configure this server, then select NIC bonding in the appropriate screen, then Save & Restart.
Remember this is applicable to "server only" mode.
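Once the server has restarted, you can confirm the bond came up from a root shell, something like the following (the interface names bond0/eth0/eth1 are only examples; yours may differ):
cat /proc/net/bonding/bond0    # shows the bonding mode, link status and the enslaved NICs
ip addr show bond0             # the LAN IP address should now sit on the bond interface
ethtool eth0 && ethtool eth1   # both physical links should report 1000 Mb/s full duplex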
-
NIC bonding (this is what you want to do) allows you to bond two identical Ethernet cards on the LAN side.
This could improve local transfer speed, but only if the network card is the bottleneck.
Bud
No one answered this exactly.
NIC bonding is supported by SME server ONLY in "server only" mode.
If running in gateway server mode then this functionality is not supported.
I started to answer it, but your answer is much more complete.
As far as I have searched, this feature is not well documented on the wiki. Am I right?
I have opened this bug: https://bugs.contribs.org/show_bug.cgi?id=10540
Your explanation and a screenshot in the administration section would be a great improvement.