Koozali.org: home of the SME Server

RAID possibilities - Linux Noob

Offline raem

RAID possibilities - Linux Noob
« Reply #15 on: July 20, 2006, 03:37:39 AM »
p-jones

> ...I was able to achieve via Software RAID mode 5 and this really honks
> along. If I lose any one of these three drives do I still have full
> redundancy?

There are many tutorials out there, I just picked the first one.
http://www.acnc.com/04_01_05.html
also see
http://www.acnc.com/04_01_06.html

I'm not sure how interactive/useful the drive config option in the admin console is for RAID 5 & 6, but that tool may simplify drive replacement.
I haven't used it yet.

> ...add a fourth drive and create two striped raids via hardware, it would
> seem feasible then to create a mirrored software raid from each pair -

I think you are referring to RAID10, and yes, that would give both very good redundancy & speed (due to striping), at the price of the extra hard disks required (which are cheap compared to any tech support costs from failures etc.).
see
http://www.acnc.com/04_01_10.html


> Any pitfalls here (other than perhaps performance)? This is not really a
> true RAID 5 as I understand it, but it would seem to offer the best of both
> worlds, with both striping performance and mirrored redundancy.

RAID10, as I understand it, has good performance: reads can be spread across both halves of each mirror, giving fast read times for the array.
You are back to a hardware-based implementation (or perhaps a combination of hardware & software), as sme does not support software RAID10.

Please would an expert contribute to this thread, I'm at the edge of my knowledge zone.
...

Offline jonroberts

RAID possibilities - Linux Noob
« Reply #16 on: July 20, 2006, 10:50:14 AM »
p-jones,

It's my understanding that a RAID5 array will tolerate a failure of 1 drive in the array, but not more.
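That understanding is right, and the reason falls out of how RAID5 parity works: each stripe stores the XOR of the data blocks, so any one missing block can be rebuilt from the rest, but two missing blocks cannot. A minimal toy sketch (not SME's actual md implementation, just the parity arithmetic):

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together (RAID5-style parity)."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

# One stripe across three data drives (toy-sized blocks) plus a parity block.
data = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
parity = xor_blocks(*data)

# Lose any single drive: XOR of the survivors and the parity rebuilds it.
lost = 1
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = xor_blocks(*survivors, parity)
assert rebuilt == data[lost]

# Lose two drives and there is one XOR equation for two unknowns,
# so the data is unrecoverable -- hence "one failure, but not more".
```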

I can't see any reason why you wouldn't be able to use your on-board RAID0 to create two arrays of 2 striped disks, and then use the SME software RAID1 to mirror these, giving a mirror of stripes (RAID 0+1).  The only pitfall I can see is that if the motherboard fails (as you're using on-board RAID), you'd need to replace it with one with an identical (or, if you're lucky, compatible) RAID controller chipset.  For a failed drive, you'd need to rebuild the hardware array first (which probably means downtime, and can take a while) before restarting the server and letting the software RAID rebuild the mirror.

I'm not sure how much more resilience this gives you over RAID5 in practice.  Both tolerate the failure of a single drive.  I would expect RAID 0+1 to survive the failure of 2 drives, but only if they were both in the same stripe set.  If 2 drives fail (1 in each stripe set), both stripe sets fail, and so the mirrored array fails with them.
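That two-drive reasoning can be checked by simply enumerating the cases. The sketch below assumes an illustrative four-drive layout with groups {0,1} and {2,3}, and compares a mirror of stripes (RAID 0+1, as described above) against a stripe of mirrors (RAID 1+0), where the groups are mirror pairs instead:

```python
from itertools import combinations

def stripe_of_mirrors_ok(failed, pairs):
    # RAID 1+0 survives if every mirror pair keeps at least one member.
    return all(not pair <= failed for pair in pairs)

def mirror_of_stripes_ok(failed, stripes):
    # RAID 0+1 survives only if at least one whole stripe set is untouched.
    return any(not (s & failed) for s in stripes)

groups = [frozenset({0, 1}), frozenset({2, 3})]  # illustrative 4-drive layout

raid10 = sum(stripe_of_mirrors_ok(set(f), groups)
             for f in combinations(range(4), 2))
raid01 = sum(mirror_of_stripes_ok(set(f), groups)
             for f in combinations(range(4), 2))
print(raid10, raid01)
```

Of the six possible two-drive failures, RAID 1+0 survives four (any pair except a whole mirror) while RAID 0+1 survives only two (both failures in the same stripe set), which matches the reasoning above.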

However, I too am at the limit (& beyond) of my knowledge of this area & would be happy to hear from anyone with more expertise.
......

Offline p-jones

RAID possibilities - Linux Noob
« Reply #17 on: July 20, 2006, 12:59:04 PM »
Thanks for the comments guys. I look forward to more comments from the experts and will add some "real-life" findings as soon as I get a bit more time to continue playing.

Rgds
Peter
...

Offline raem

RAID possibilities - Linux Noob
« Reply #18 on: July 20, 2006, 02:12:30 PM »
To all

What compelling reasons do people have to use RAID 5 or 6 instead of software RAID1?
RAID1 storage is limited to approx 500GB with today's available drive capacities, but if you don't need more storage than that, why complicate things with multiple additional drives, & additional things to fail?
I realise RAID 5 & 6 have their virtues, but in practice software RAID1 works fine for many situations.
I'm aware of the old adage to split OS & data onto different drives, but I cannot see a logical technical reason for needing to do that nowadays. Drives are fast, reasonably reliable & cheap, so I think the case for putting data on faster "data only" drives has long passed.
There is no technical reason I'm aware of that stops the OS & data happily residing on the same drive.
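One rough way to frame the trade-off is usable capacity per drive count. These are the standard formulas for each level; the 500GB drive size below is just illustrative (the largest commonly available at the time of the thread):

```python
def usable_gb(level, n_drives, drive_gb):
    """Usable capacity for common RAID levels (standard formulas)."""
    if level == "RAID1":
        return drive_gb                   # mirror: one drive's worth
    if level == "RAID5":
        return (n_drives - 1) * drive_gb  # one drive's worth of parity
    if level == "RAID6":
        return (n_drives - 2) * drive_gb  # two drives' worth of parity
    if level == "RAID10":
        return n_drives // 2 * drive_gb   # half the drives mirror the rest
    raise ValueError(level)

# With 500GB drives:
print(usable_gb("RAID1", 2, 500))   # 2-drive mirror
print(usable_gb("RAID5", 3, 500))   # 3 drives, 1 parity
print(usable_gb("RAID6", 4, 500))   # 4 drives, 2 parity
print(usable_gb("RAID10", 4, 500))  # 4 drives, mirrored stripes
```

So the case for RAID 5/6 is essentially capacity beyond a single drive's worth; if ~500GB is enough, RAID1's simplicity holds up.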
...

icpix

Simplicity of real-life logistics
« Reply #19 on: July 20, 2006, 02:53:13 PM »
Ray----

> I'm aware of the old adage to split OS & data onto different drives, but
> I cannot see a logical technical reason for needing to do that nowadays.

Simpler updates/upgrades. Particularly as SME6/7 will wipe every drive it
can reach on a new/clean install.

My SME7 box uses a pair of (s/w) RAID1 Raptors for the 'OS', with some
2TB of (h/w) RAID5 data drives additionally mounted for 'data' storage.
With that amount of storage hanging about and few convenient places to
put it temporarily, I have long appreciated the diverse reasons for
separating OS & data. Typically I just pull out the hardware RAID card's
SATA connectors before attempting tricky-looking OS milestones.

I haven't yet had the courage or time to swap out my venerable SME6
box for my snazzy SME7-running monster but I have hopes soon enough.
Nevertheless, despite running with a pair of 3.6GHz Xeons, the daily ClamAV
run... well, it isn't 'daily', as it doesn't finish in less than 24hrs ;~/ That's with
it in serveronly mode (offline) too ;~| So that's another point for separating 'OS'
and 'data'. I don't need to virus-check PSDs and JPGs on a daily basis... yet.

----best wishes, Robert