Hi,
Are you willing to add an existing RAID 5 array (or create one, see the bottom of this message) to a freshly installed SME Server 7?
(Note: I guess this can easily be adapted to other RAID levels.) Here is how I did it. I still get some errors at startup, but the array is functional.
I believe the root of those errors is my motherboard, which seems to have some issues with IRQ handling.
So if you test this howto without the errors described on page 2 of this post, let me know.
My purpose:
Separate the system from the data, because:
1/ SME Server is really fast to reinstall; its configuration can be written on the back of an envelope.
2/ Reinstalling SME erases all drives, and I don't want to lose the 688GB potentially written on the RAID array.
3/ I often try different systems, because I'm curious, so I care about not having to delete all the data on the RAID (backing it up is cumbersome).
4/ I did have an existing RAID 5 array with a lot of data on it, but a first SME installation attempt erased it. Fortunately, I had made a backup. Cumbersome: more than 350GB to save onto many Windows disks; that is precisely what I want to avoid.
Starting point: I have SME Server 7 installed and working... with an unassembled RAID 5 array with data on it, its drives unplugged.
I did the installation without the drives involved in the RAID 5 I want to mount into SME.
WARNING: If the drives are plugged in while installing SME, their content is erased and a small partition is created on each for the RAID 1 mirror of /boot.
Here is my configuration :
/dev/hda : system, 120GB
/dev/sda |
/dev/sdb |
/dev/sdc |=> 4x 250GB SATA, existing RAID 5 with data
/dev/sdd |
After the installation, I activated SSH remote access (more practical).
Here is how /dev/hda is partitioned:
/dev/md1 : /boot (hda1)
/dev/md2 : / (hda2)
(Later I can add another disk to mirror the system disk, but currently I don't own another 120GB drive.)
Now let's start the RAID 5 configuration:
Shut down your machine: shutdown -h now
Plug in your drives. Restart the system. Having a screen attached at boot time is practical to see whether any error occurs at bootstrap; you can still see what happened with the
dmesg command.
Edit /etc/mdadm.conf (the toughest job of this howto).
Modify mdadm.conf so that md0 is assembled automatically at bootstrap:
#declare each partition used in raid systems
DEVICE /dev/hda1 /dev/hda2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
MAILADDR root
#explicitly define each array
ARRAY /dev/md2 level=raid1 num-devices=2 devices=/dev/hda2 UUID=d7e61b0e:cf9a9687:86080da8:d2a37641
ARRAY /dev/md1 level=raid1 num-devices=2 devices=/dev/hda1 UUID=ac607e7c:a8eb1a88:239a9b3d:d53b617f
ARRAY /dev/md0 level=raid5 num-devices=4 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1 UUID=6d750b1d:592a13a5:4bf373ba:02c6ed57
Here I totally changed the way it was configured, based on an article I found here:
http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html?page=1
As the author of that article says, the advantage of explicitly defining the RAID arrays in mdadm.conf is that it's easy to know which partition is used in which array, whereas with an entry like this:
ARRAY /dev/md1 super-minor=1
you don't know which partitions are involved in /dev/md1, and if the array is not assembled you have to examine each drive (e.g. mdadm -E /dev/sda1) to find out how the arrays are configured.
(Of course, if it is assembled, you can find out the configuration with mdadm --detail /dev/md1. Also, on SME, md1 and md2 are built from, respectively, the first and second partition of each drive found at installation time, but additional arrays won't follow this naming convention.)
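If you'd rather not type the ARRAY lines by hand, mdadm can generate them for you once the arrays are running (a sketch; the exact fields printed may differ slightly between mdadm versions, so review the output before trusting it):

```shell
# Print ARRAY lines in mdadm.conf format for every running array
# (level, num-devices and UUID included).
mdadm --detail --scan

# Append them to the config file; the DEVICE and MAILADDR lines
# still have to be added by hand as shown above.
mdadm --detail --scan >> /etc/mdadm.conf
```

This only works for arrays that are currently assembled, so for md0 you would run it after a successful manual assembly.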
Explanation:
DEVICE : lists all partitions involved in any RAID array you want to use, so you list the system partitions /dev/hda1 & 2 as well as the partitions of your RAID array.
MAILADDR : the mail address where alerts are sent on RAID failure.
ARRAY : defines a RAID array.
The first two arrays are the SME default RAID 1 arrays.
Use mdadm --detail /dev/md1 & mdadm --detail /dev/md2 to fill in the first two ARRAY lines.
The first argument names the array device.
The second is the RAID level: raid1 for mirroring, raid0 for striping, raid5 for distributed parity.
The third defines the number of devices involved in the array. For the RAID 1 arrays, 2 is set so that if you add a similar drive, SME Server theoretically mirrors the system disk (I've not tried it).
The fourth defines the partitions involved in the array.
The last one defines the universal unique identifier (UUID) of the array.
So for my RAID 5 with 4 disks, I've set the raid5 level, 4 devices (usable capacity of 3 drives, since one drive's worth of space goes to the distributed parity), and its identifier.
To get the identifier of your existing RAID 5 array, try this command:
mdadm -E /dev/sda1
The third line of the command output gives you the UUID of your RAID 5 array.
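If you don't want to count lines in the output, grep can pull the UUID line out directly (a sketch):

```shell
# Show only the UUID line of the superblock examine output,
# ready to paste into the ARRAY entry in mdadm.conf.
mdadm -E /dev/sda1 | grep UUID
```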
Once you've configured mdadm.conf, save it and try this:
mdadm -A /dev/md0
It should automatically assemble your existing RAID 5.
Then you can mount it to check that your data is still there:
mkdir /mnt/data
mount /dev/md0 /mnt/data
ll /mnt/data
Then we need to configure the fstab file so that md0 is mounted automatically at boot time.
Add an entry to fstab with vi (there may be a cleaner way to edit this file, but I don't know it yet...):
/dev/md0 /home/e-smith/files ext3 usrquota,grpquota 1 1
You can actually do that, because the template that generates this file uses the existing /etc/fstab as a source (check /etc/e-smith/templates/fstab/template-begins).
So I chose to mount my RAID 5 array, named /dev/md0, on /home/e-smith/files, with the ext3 filesystem and quota activated for users and groups.
The last two fields are set to 1: the first tells the dump utility to save this filesystem, the second tells fsck that this partition has to be checked regularly.
As I chose to mount automatically on /home/e-smith/files, I have to move the existing files in /home/e-smith/files onto the RAID 5 array, so that the SME server finds its Primary ibay at the next boot.
Mount the RAID array in a temporary directory (if not already mounted) and do the move (i.e. move the Primary ibay):
mkdir /mnt/data
mount /dev/md0 /mnt/data
mv /home/e-smith/files/* /mnt/data
You should also move all your existing data into one directory; after the RAID 5 setup, move your files into the ibays you'll create.
Now restart your system. If there is an error in fstab, you will be prompted at startup to boot in single-user mode (i.e. only root can do anything),
and you'll have to correct fstab, or comment out the line you've added and then find out why it's wrong.
Double-check your configuration file before rebooting.
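Before rebooting, you can dry-run the new fstab entry: mount -a tries to mount everything listed in fstab that isn't mounted yet, so a typo shows up immediately instead of at boot time (a sketch; /mnt/data is the temporary mount point used above):

```shell
# Free the array from its temporary mount point first.
umount /mnt/data

# Attempt every fstab entry not currently mounted; errors appear here
# instead of during the next boot.
mount -a

# Confirm that md0 ended up on /home/e-smith/files.
mount | grep md0
```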
And it should work:
you should see the SME files in /home/e-smith/files/, plus your data.
It worked for me.
Additional information: if your existing array had quota information, you should delete it and recreate it, as it may be inaccurate (wrong user/group references on the new SME system).
To delete it, I think deleting the files aquota.group & aquota.user should be enough.
To create the quota data: quotacheck -cugm /dev/md0
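The whole quota reset might look like this (a sketch, assuming the array is mounted on /home/e-smith/files as above):

```shell
# Quota must be off while its data files are rebuilt.
quotaoff /home/e-smith/files

# Remove the stale quota files carried over from the old system.
rm -f /home/e-smith/files/aquota.user /home/e-smith/files/aquota.group

# -c create the quota files, -u user quota, -g group quota,
# -m don't remount the filesystem read-only while scanning.
quotacheck -cugm /dev/md0

# Switch quota back on.
quotaon /home/e-smith/files
```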
If you want to create a new RAID 5 array (instead of assembling an existing one), here is what you should do:
Create the partitions on your disks (delete any previous partition with the d command, provided you have saved the data on it):
fdisk /dev/sda
n (creation of a partition)
p (primary one)
1 (Partition 1)
[Return] (partition start, use default : start of the disk)
[Return] (partition end , use default : end of the disk)
w (write the change, nothing is changed on the drive until you type the w command)
Do the same for sdb, sdc and sdd.
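Rather than repeating the fdisk dialogue three times, you can clone the partition table of the first disk onto the others (a sketch; sfdisk comes with util-linux, and you should double-check the target device names, since this overwrites their partition tables):

```shell
# Dump sda's partition table and replay it onto the other three disks,
# giving four identical single-partition layouts.
sfdisk -d /dev/sda | sfdisk /dev/sdb
sfdisk -d /dev/sda | sfdisk /dev/sdc
sfdisk -d /dev/sda | sfdisk /dev/sdd
```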
RAID 5 creation: mdadm -Cv /dev/md0 -l5 -n4 -c128 /dev/sd{a,b,c,d}1
n : number of drives
l : level (5 for RAID5)
c : chunk size in KB; 128 is a good value
/dev/sd{a,b,c,d}1 : the drives involved in the array; shell shorthand for /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
monitor the progress of the creation with : cat /proc/mdstat
or
mdadm --detail /dev/md0
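Initial synchronisation of a 4x 250GB array can take hours; watch saves you from retyping the status command (a sketch):

```shell
# Refresh the RAID status every 2 seconds; Ctrl-C to quit.
watch cat /proc/mdstat
```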
Format the filesystem : mke2fs -j /dev/md0
Activate quota on /dev/md0: quotacheck -cugm /dev/md0
And now you can follow the guideline above.
Hope this helps you. Feedback appreciated.

Paquerette.