Koozali.org: home of the SME Server
Contribs.org Forums => General Discussion => Topic started by: Mike on March 02, 2014, 07:21:33 AM
-
I set up an SME Server for someone and, counting back, I think she has been using it for more than 10 years now.
At some point I set up a RAID1 with 2 Seagate Barracuda ES.2 1 TB drives to make the data more secure.
But through the years I have had to replace drives quite a number of times.
Now I have advised her that it would be better to go to SSD drives, as they are much more reliable. But investigating this option showed that there are some issues, and I am not sure how they apply to an SME Server, especially if I would use 2 of those SSD drives in a RAID1 setup.
For one, ATA TRIM is only fully supported from kernel 2.6.33, and SME Server 9 will use 2.6.32.
If you have been using SME Server for such a long time, you know that it can take a long time before SME Server 9 steps up from 2.6.32 to 2.6.33.
And here: http://en.wikipedia.org/wiki/TRIM#ATA I read that there is also a problem with using 2 SSD drives in a RAID1 setup.
In the subsection "RAID issues", Red Hat recommends against using software RAID levels 1, 4, 5, and 6 on SSDs.
They also write about dmraid and mdraid, and all of that got me seeing double.
Now it would be nice if one of the technical guys at Contribs.org would give their point of view on such a setup on an SME Server.
Right now there is not much to be found on SME Servers using SSD's, especially not in a RAID1 configuration. It would be nice if this thread could give more insight for other users thinking of stepping up to SSD's on an SME Server, and whether there could be drawbacks to that.
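For anyone wanting to check their own hardware: whether a drive itself advertises TRIM can be seen with hdparm (run as root on the real box). Since I obviously cannot query your drive from a forum post, the grep below is shown against a canned line of hdparm-style output; `/dev/sda` is just an example device name:

```shell
# Real check, as root on the server itself:
#   hdparm -I /dev/sda | grep -i trim
# Simulated here against a sample line a TRIM-capable drive reports:
sample="           *    Data Set Management TRIM supported (limit 8 blocks)"
if echo "$sample" | grep -qi "TRIM supported"; then
  echo "drive reports TRIM"
fi
```

Note that the drive advertising TRIM is separate from the kernel and RAID layer actually issuing it, which is the question for SME 9.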
-
Just my first-hand experience: I do run a development box (ESXi) with a single Sandisk Extreme 120GB SSD and two WD HDs as datastore. Also tried Proxmox in the same config. Performance is fine, the OS and the VMs, well, to me anyway.
Have also run the same setup with SME8 on the SSD and storage on actual HDs, trying to limit the writes to the SSD. Again, performance was more than fine.
The only real answer is going to come from someone who loads up a system with SSDs only and runs it till it stops.. might take a while tho..
-
I know it is likely to work, that is not the question here.
The point is that testing the functionality in a real-world situation is one thing.
Understanding the theory, and being able to translate how it would impact an SME Server with a kernel that does not fully support SSD's, or what problems can arise over time when using SSD's in a RAID1 configuration, is something entirely different.
There are enough people that buy a computer and an SSD without knowing if this configuration will work with a specified OS.
An IT specialist will try to check first if the hardware and software combination will most likely work before he orders the hardware.
That is a completely different approach.
What I want to know is: if I buy, for instance, 2 Samsung 840 EVO SSD's and install them in her old server, will she get performance issues over time because the kernel does not fully support SSD's, mainly the TRIM function?
From this (http://en.wikipedia.org/wiki/TRIM#ATA) site I got this:
Red Hat has also recommended against using software RAID levels 1, 4, 5, and 6 on SSDs, because during initialization, most RAID management utilities (e.g. Linux's mdadm) write to all blocks on the devices to ensure that checksums (or drive-to-drive verifies, in the case of RAID 1) operate properly, causing the SSD to believe that all blocks other than in the spare area are in use, significantly degrading performance.
Does this mean that RAID1 functions differently on SSD's than on normal hard drives?
Does this mean you get degraded performance that is unacceptable right away, or will it only appear over time?
I cannot spend other people's money if I do not have a certain level of certainty that this configuration will not give problems in time.
-
I know it is likely to work, that is not the question here.
-----
I cannot spend other people's money if I do not have a certain level of certainty that this configuration will not give problems in time.
Excuse me for the poor answer. I'm sure that as an IT specialist you will find the answer.
-
Mike
Did you search these forums & Bugzilla for SSD?
That will be the likely extent of currently published knowledge on contribs.org re using SSD's with SME server.
If you think there is a problem or kernel issue, then raise a bug.
Maybe the developers will look at the kernel issues once you bring them to people's attention.
Remember not to be too demanding, everyone works for free here, & I assume you are getting paid for what you do.
Wait for answers, people get out of bed at different times around the world.
-
All the production servers except 2 are using Raid 1 on SSD drives.
Because I have not had any problems with the Intel SSD drives, that is what I use.
I understand the garbage collection works well on those drives.
I try not to format the whole SSD drive. To do that, I install SME on a smaller drive than the final SSD that will be used. So I installed on a 180 gig drive, then mirrored that onto a 240 gig drive, then removed the 180 drive from the RAID1 set and put in another 240 gig drive and built the mirror again, leaving me with a RAID1 set with 2 Intel 240 gig drives.
This approach leaves space on the drive unformatted for the purpose of garbage collection. You do not want to fill an SSD with data, on purpose or accidentally.
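A quick back-of-envelope check of that sizing (the drive sizes are from above; the percentage is just arithmetic):

```shell
# Install on a 180 GB drive, then mirror onto 240 GB drives:
installed=180
drive=240
spare=$(( (drive - installed) * 100 / drive ))
echo "${spare}% of each ${drive} GB drive stays unpartitioned for garbage collection"
```

So roughly a quarter of each drive is never written by the OS, which the controller can use as extra spare area.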
I have been using SSD now for maybe two years, with no failures, and I am very happy.
I turned off updating the file's last-accessed time.
I removed the weekly raid check.
I keep good UPS equipment on the servers. This costs about 100 dollars a year for each bare-metal server.
TRIM is not supposed to be run on any RAID setup as far as I know, on any operating system.
I use SSD to keep the server power use low and to have fast local processing of files.
Because some software on Windows clients is written poorly, I had been hoping that the server would run faster on some file-access-intensive programs, but there has not been much gain in performance if the client uses the SMB protocol.
The Linux server does a nice job of serving up SMB protocol clients and if the client is able to use oplocking to their benefit, the SSD upgrade will not show much improvement.
If the server is kept in an area with poor ventilation, SSD drives have been a godsend, along with their quiet operation. If you also have a physically smaller server, I would try the SSD.
Intel SSD 530 240 gig drives are now in the 120 dollar range.
I did pull a drive and checked its lifetime wear. I could not see any issues.
We do not use our servers for large backups either.
I do not use AHCI but IDE mode.
-
@purvis
What an exciting post - thank you for sharing your experience after 2 years using SSD's on SMEserver.
Every line of your post is fascinating to me.
I have some questions (please excuse my numbering then - for clarity):
(1) What proportion of your servers using SSD's are running SME8 / SME8.1 / SME9?
(2) Your third line mentions garbage collection working well on those drives ... which leads on to my next question
(3) You mention "garbage collection" elsewhere in the post - what exactly is this 'garbage' and how much gets 'collected'?
(4) "I turned off updating the file's last accessed." How did you do this?
(5) "I removed the weekly raid check" How did you do this?
(6) Which Intel SSD's have you successfully used? The 530 series?
Sorry for all the questions.
Here in the UK we have the following Intel SSD's on the market - 40GB, 120GB, 180GB, 240GB, 480GB, 600GB.
The two sizes that neatly lend themselves to your set-up method are 180GB/240GB (75%) and 480GB/600GB (80%)
Intel SSD's are more expensive here in the UK - 480GB (GBP274 = $430) and 600GB (GBP418 = $650). Hopefully the prices will continue to drop.
-
Have you seen this manual on SME 9.0 with 2 SSD's in a RAID1 setup:
http://wiki.contribs.org/Raid:2_SSD%27s_in_a_RAID1_setup_with_Over_Provisioning
-
@Mike
Thank you for that link - somehow I had missed that.
That is a very detailed HowTo.
-
Charles2008
I am running SME 8.1
for garbage collection google "garbage collection" and "SSD"
As for the 99-raid-check do a forum search for text "99-raid-check" and user "purvis"
I am using the Intel 530 series 180 gig and 240 gig and the Samsung SSD Pro 128 and 256
On last-accessed file stamping: edit the fstab file "/etc/fstab".
I added "noatime,nodiratime" to lines in the file.
For good or bad, here is my fstab file:
#------------------------------------------------------------
# BE CAREFUL WHEN MODIFYING THIS FILE! It is updated automatically
# by the SME server software. A few entries are updated during
# the template processing of the file and white space is removed,
# but otherwise changes to the file are preserved.
# For more information, see http://www.e-smith.org/custom/ and
# the template fragments in /etc/e-smith/templates/etc/fstab/.
#
# copyright (C) 2002 Mitel Networks Corporation
#------------------------------------------------------------
/dev/main/root / ext3 usrquota,grpquota,noatime,nodiratime 1 1
/dev/md1 /boot ext3 defaults,noatime,nodiratime 1 2
tmpfs /dev/shm tmpfs defaults,noatime,nodiratime 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults,noatime,nodiratime 0 0
proc /proc proc defaults,noatime,nodiratime 0 0
/dev/main/swap swap swap defaults,noatime,nodiratime 0 0
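After a remount you can confirm the options took effect with mount. Since the real output differs per box, the grep below is demonstrated against a canned mount line matching the fstab above:

```shell
# Real check after 'mount -o remount /':
#   mount | grep ' / '
# Simulated against a line like the one a remounted root would show:
line="/dev/main/root on / type ext3 (rw,noatime,nodiratime,usrquota,grpquota)"
echo "$line" | grep -q "noatime" && echo "noatime is active on /"
```

Also note the warning in the file header: on SME, /etc/fstab is template-managed, so a hand edit can be lost when the templates are re-expanded.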
-
Hi Purvis
Interesting, I didn't know about the noatime nodiratime settings yet.
I googled for "noatime nodiratime ssd" and now found this link: https://wiki.freeswitch.org/wiki/SSD_Tuning_for_Linux
They have a good explanation of things.
Look on this page for the term "Swappiness"; that looks like a good setting to make as well.
I keep finding and hearing new things about SSD's.
I love SSD’s but you never seem to know everything there is to know about those things.
Since I embraced the SSD fully I haven't had one bad disk where I used them, and I have 5 of them plus 2 in a remote server that I support.
Some of my SSD's are over 3 years old now, and I have them in Windows, Linux and even in a FreeBSD system.
I only use hard disks where I need a lot of storage space and SSD's are still too damn expensive, but I can't wait for larger SSD's to become cheaper, because hard disks get faulty way too fast compared to SSD's if you ask me.
I also found this interesting link: http://www.webhostingtalk.com/showthread.php?t=1330071
SME Server 9.0 is built on CentOS, which is in turn compiled from the Red Hat (RHEL) sources.
Unfortunately even SME 9.0 uses kernel 2.6.32-504.1.3.el6.i686, although Red Hat compiles the kernel with many adjustments/patches.
Without a Red Hat account it is almost impossible to find out what is in that kernel and what is not.
Charlie Brady knows more about that than I do.
Charlie is all about stability, which is good of course, but unfortunately the downside is that kernel improvements might take longer to reach the SME Server.
Mike
-
For what it is worth.
I have the same fstab edits on non SSD setups as well.
There was not much to go on as far as others experience on SSD with SME.
It is likely my drives are not aligned to 4096-byte boundaries either.
I had run Windows XP without aligned boundaries for a while as well.
I decided to keep more history files off my main work servers.
After all, they were HDDs from previous use, and newer drives got cheaper.