Koozali.org: home of the SME Server
Contribs.org Forums => General Discussion => Topic started by: Stefano on December 21, 2009, 11:12:03 PM
-
Hi all..
I'm studying a storage solution for a customer of mine and I'm planning to use an HP ML310 G5 server with four 2TB drives in RAID5 (no spare).
So, basically, my question is: will SME's setup be able to create and format such a big array? What's the best practice in this case?
TIA
Stefano
-
I'm studying a storage solution for a customer of mine and I'm planning to use an HP ML310 G5 server with four 2TB drives in RAID5 (no spare)
So, basically, my question is: will SME's setup be able to create and format such a big array?
Plan it... just don't implement it;~)
Been there. Format takes too long (days);~/
Recovery from a single drive failure in RAID5 leaves the system
vulnerable to a second drive failure for the duration...
it happens, and always in the worst possible scenario.
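For what it's worth, the length of that vulnerable window is easy to watch on a software RAID box; a rough sketch, where /dev/md2 (typically SME's data array) and /dev/sdd1 are examples only:

  # progress and estimated finish time of a running rebuild
  cat /proc/mdstat
  mdadm --detail /dev/md2
  # adding a hot spare at least shortens the window on the next failure
  mdadm --add /dev/md2 /dev/sdd1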
Best practice in this case?
Ask Shad?
Personally I would (I have already done so)
install a hardware RAID card for the bulk data, and put
whatever small reliable drives in conventional SME software RAID.
IMHO the SME community will have to readjust its
immediate mantra (software RAID over hardware RAID)
in light of the ever bigger drives around nowadays.
-
first of all, thank you for your answer..
ok, let's say I use a hw raid controller (3ware) to create my raid5 array (*): will SME's installer work?
(*) about
Recovery from a single drive failure in RAID5 leaves the system
vulnerable to a second drive failure for the duration...
I would be in the same situation even with raid1.. :wink:
-
ok, let's say I use a hw raid controller (3ware) to create my raid5 array (*): will SME's installer work?
I didn't try that. Ask the devs. It's all about your own
view on risk management. Losing my server is bad enough,
losing my data library would be much much worse. I have
always backed up the server and my library data in
different but appropriate ways. YMMV.
I would be in the same situation even with raid1.. :wink:
Granted;~) Recovery/format/sync takes FAR FAR FAR
too long, believe me. Last time I tried (a very early SME8 beta),
off the top of my head just the installer format took
something like five days... with four 1TB drives in RAID5,
no spare. I forget just what CPU flavour it was at the
time, but it would've been reasonably fast kit.
The sheer logistics of data management and the
necessary transfer of same become unsustainable.
Big TB stuff needs a re-think. Go with good h/w RAID5
or better (RAID6 etc.) and spare a thought for whether your
UPS can cover things appropriately over the timescale
of power episodes (h/w RAID card + internal battery).
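With a 3ware card like the one mentioned above, the array and its battery can at least be checked from the running OS; a hedged sketch using 3ware's tw_cli tool, where controller 0 is an assumption:

  tw_cli /c0 show          # units, drives and their status on controller 0
  tw_cli /c0/bbu show all  # state of the battery backup unit, if one is fitted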
-
Maybe you can go with a big NAS and mount its network shares inside SME ibays on a smaller server.
A NAS like this looks nice:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822107019
6 hot-swap disks, RAID6, low power, UPS support, and so on...
If I remember correctly QNAP has some kind of replication between devices, so you can back it up to another NAS in a different location.
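Mounting such a share into an ibay is then a one-liner; a minimal sketch assuming a CIFS share on the NAS, where the host, share and "storage" ibay names are all made up:

  # overlay a NAS share onto an ibay's files directory
  mount -t cifs //nas01/archive /home/e-smith/files/ibays/storage/files \
      -o credentials=/root/.nas-credentials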
-
Stefano
...let's say I use a hw raid controller (3ware) to create my raid5 array (*): will SME's installer work?
As long as the hardware is configured correctly in the "BIOS", and supported by the SME OS, the controller card presents itself to the SME installer as a single hard drive, and you choose that option (i.e. single hard drive) rather than a software drive management option.
Maintenance & repair of the array is then done at the controller card BIOS level, usually selected during the boot-up stage.
I'm unfamiliar with such large RAID arrays, but the comments made about format times of days are suggestive of the drives not being configured correctly in the system BIOS. Sounds like they are running slowly, but my guess may be incorrect.
-
I'm unfamiliar with such large RAID arrays, but the comments made about format times of days are suggestive of the drives not being configured correctly in the system BIOS.
Days ~ BIOS was configured correctly.
Sounds like they are running slowly, but my guess may be incorrect.
Incorrect and correct in that order.
-
If I'm going to do anything that requires over 2TB of storage I always go with a hardware RAID card (or an external iSCSI box). My largest two sites right now are running iSCSI with 16 x 1TB drives. I've got them set up as RAID6 + 2 hot spares, effectively giving me between 11 and 12 TB of usable space. With a good controller a rebuild of a single failed drive takes about 8 hours, and fail-back of the revertible hot spare takes about 3 hours.
My next largest site is running an Areca RAID controller with 6 x 1TB and 6 x 320GB drives. Fail/rebuild/revert times on this controller are about the same as on the high-end iSCSI boxes. If your data is critical it is imperative you have a battery backup as well as at least one hot spare drive. I won't run anything but RAID6, as I've been the victim of multiple drive failures within a small window too many times.
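For anyone curious about the initiator side, on a CentOS-based box like SME it goes roughly like the sketch below; this is generic open-iscsi usage rather than a description of the setups above, and the target address and IQN are placeholders:

  yum install iscsi-initiator-utils
  service iscsi start
  # discover the targets the storage box offers, then log in to one
  iscsiadm -m discovery -t sendtargets -p 192.168.10.50
  iscsiadm -m node -T iqn.2009-12.example:storage.lun0 -p 192.168.10.50 --login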
-
Thank you all for your answers..
Just a curiosity: is SME's fdisk GPT-compliant or not? Is there any limitation in partition size?
And, slords, how did you implement iSCSI on SME?
TIA
-
Just a curiosity: is SME's fdisk GPT-compliant or not? Is there any limitation in partition size?
Well, I have used fdisk today to partition four 2TB drives which are assembled into a single 8TB LVM volume (it is my backup array). Each drive is within fdisk's 2TB MBR limit; anything bigger than that is handled at the LVM layer rather than in the partition table.
I am formatting ext3 on top of the LVM at the moment... this will take a while. :-)
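For reference, gluing the four partitioned drives into one volume goes roughly like this; device and volume names are illustrative only:

  # one 2TB partition per drive (type 8e, Linux LVM), then LVM on top
  pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  vgcreate backupvg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  lvcreate -l 100%FREE -n backuplv backupvg
  mkfs.ext3 /dev/backupvg/backuplv   # this is the slow part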
-
Well, I have used fdisk today to partition four 2TB drives which are assembled into a single 8TB LVM volume (it is my backup array).
I am formatting ext3 on top of the LVM at the moment... this will take a while. :-)
you are talking about fdisk in SME, aren't you?
anyway, thank you :-)
-
you are talking about fdisk in SME, aren't you?
yes indeed.
-
Just as information for others, as the above thread seems to suggest large RAID5 arrays are slow: I have just completed a few exercises for my own needs and am now sharing the results (a short sketch for watching and tuning resync speed follows the figures).
RAID5: 1.0TB x 5 disks (ie. 4.0TB of disk space)
-- mkfs was about 25 minutes
-- resync was about 10 hours
-- On a DAS connected with an eSATA (Muxed) cable to an si3132 based card
RAID1: 2.0TB x 2 disks (ie. 2.0TB of disk space)
-- mkfs was about 15 minutes
-- resync was about 5 hours
-- uses two SATA ports on the motherboard
Large LVM: 2.0TB x 4 disks (ie. 8.0TB of disk space, which is my backup storage)
-- mkfs was about 36 hours
-- On a DAS connected with an eSATA (Muxed) cable to an si3132 based card
** weird part is that while drives 1, 3 and 4 formatted slowly, drive 2 formatted quickly.
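As mentioned above, resync progress (and the throttle md applies to it) can be watched and nudged from the shell; the speed values below are examples only:

  # progress and ETA of any running resync/rebuild
  cat /proc/mdstat
  # md throttles resync; raising the floor can shorten it (values in KB/s)
  echo 50000  > /proc/sys/dev/raid/speed_limit_min
  echo 200000 > /proc/sys/dev/raid/speed_limit_max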
Christian