Also, for anyone's interest...
RAID output after a clean install with two 30 GB drives on 7pre1.
Be aware it may take some time to construct / reconstruct the RAID array
after a clean install.
Don't try to establish the RAID right after an install from the console admin menu with
5. Manage disk redundancy
The RAID array will construct itself in less than an hour (depending on system speed), and you
should then see
"All RAID devices are in clean state"
Current RAID status:
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
29904896 blocks [2/2] [UU]
md1 : active raid1 hdc1[1] hda1[0]
102208 blocks [2/2] [UU]
unused devices: <none>
All RAID devices are in clean state
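If the arrays are still syncing, a rough way to keep an eye on the rebuild
(standard mdadm / proc interfaces, nothing specific to this install) is:
watch -n 5 cat /proc/mdstat     # resync shows as a progress bar with speed and ETA
mdadm --detail /dev/md2         # reports "State : clean" once the rebuild has finished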
cat /proc/partitions; cat /proc/mdstat
major minor  #blocks  name
   3     0   30015216 hda
   3     1     102280 hda1
   3     2   29912904 hda2
  22     0   30015216 hdc
  22     1     104391 hdc1
  22     2   29904997 hdc2
   9     1     102208 md1
   9     2   29904896 md2
 253     0   29327360 dm-0
 253     1     524288 dm-1
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
29904896 blocks [2/2] [UU]
md1 : active raid1 hdc1[1] hda1[0]
102208 blocks [2/2] [UU]
unused devices: <none>
A look inside mdadm.conf shows
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md2 super-minor=2
ARRAY /dev/md1 super-minor=1
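If that file is ever lost or gets out of sync with the arrays, the ARRAY lines
can be regenerated from the on-disk superblocks (standard mdadm usage; adjust
the path if your distro keeps the file somewhere else):
mdadm --examine --scan                       # prints an ARRAY line for each array it finds
mdadm --examine --scan >> /etc/mdadm.conf    # append them below the DEVICE/MAILADDR lines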
On the command line, a good look at
man mdadm is in order (a few common recovery commands are sketched after the links below).
Also have a look here:
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html#toc5
http://www.bytepile.com/raid_class.php#02
For CentOS:
http://www.openfiler.com/about/
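For reference, a rough sketch of the mdadm commands for replacing a failed drive,
using the device names from the arrays above (hdc) purely as an example; check
man mdadm for the exact options on your version:
mdadm /dev/md1 --fail /dev/hdc1 --remove /dev/hdc1   # mark the member failed and pull it from md1
mdadm /dev/md2 --fail /dev/hdc2 --remove /dev/hdc2   # same for md2
# swap in the new drive and recreate the partition layout, e.g.
# sfdisk -d /dev/hda | sfdisk /dev/hdc   (double-check the device names first)
mdadm /dev/md1 --add /dev/hdc1   # re-add; /proc/mdstat will show the resync
mdadm /dev/md2 --add /dev/hdc2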