Hallelujah!!
I finally came to the conclusion I was looking for. Anyway, this is what I have done, why I did it, and what I plan on doing with this information. The only question that remains, for future reference, is how I would rebuild my RAID array if one drive fails.
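From what I have gathered so far (not tested yet), a rebuild would go something like this, assuming the surviving disk is /dev/hda, the replacement is /dev/hdc, and the arrays are the default SME layout of /dev/md1 (/boot) and /dev/md2 (the LVM physical volume):

sfdisk -d /dev/hda | sfdisk /dev/hdc     # copy the partition table onto the replacement disk
mdadm --manage /dev/md1 --add /dev/hdc1  # add the /boot partition back into the first array
mdadm --manage /dev/md2 --add /dev/hdc2  # add the LVM partition; the resync starts on its own
cat /proc/mdstat                         # watch the rebuild progress

I believe grub still has to be reinstalled on the new drive by hand, so corrections on this part are very welcome.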
I needed to spread /dev/main/root across all available devices. I say devices because I will eventually be splitting it across two different /dev/mdX devices, i.e. two RAID1 arrays.
For my test, I have three single 40 GB hard drives in an SME 7.5.1 setup. I installed SME 7.5.1 on a single 40 GB disk, and from there I installed some hardware management contribs and AFFA 2. (Just a side note: the Compress-Bzip2 link in the AFFA wiki is returning an error for wget, so I had success with http://apt.sw.be/redhat/el4/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm instead.) My next feat will be to test AFFA on SME 7.5.1 backing up SME 8 Beta 7 and doing a restore, not a rise. However, the single 40 GB hard drive is not enough, and mounting the other two at specific mount points would not let AFFA use the total capacity of ~120 GB. So my journey brings me here.
fdisk /dev/hdc
type "n" for new
type "p" for primary
type "1" for the third partition
accept the default starting cylinder of 1
accept the default ending cylinder
now type "t" for type
then "fd" for Linux Raid
now type "w" to write out the partition table
Be sure to use Linux raid autodetect (type fd), because I first tried plain Linux (type 83) and the last step of this process failed.
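If you would rather script the partitioning instead of stepping through fdisk by hand, I believe the sfdisk one-liner below does the same thing (one primary partition spanning the whole disk, type fd). I have not used it for this exact setup, so verify the result before going any further:

echo ',,fd' | sfdisk /dev/hdc    # one partition, whole disk, type fd (Linux raid autodetect)
fdisk -l /dev/hdc                # double-check the new partition table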
Next you must initialize the device so it is available for use by LVM, and this is the part that I had been missing through this whole topic. It is not in the RAID wiki, where most would expect to find it.
pvcreate /dev/hdc1
Once it is available for use with LVM, you can extend the volume group onto this device. I believe this just adds the new physical volume to the existing volume group named main.
vgextend main /dev/hdc1
Just a side note: you can run this command at any point during the process to double-check your work and make sure you are on the right path; at least, that is what I did.
lvm pvscan
After extending the volume group, you now have to find out how much space is available for use.
vgdisplay
From experience, DO NOT try to use the free space completely, as it will round up (e.g. 38.16 GB rounds to 39 GB and causes an error in the last command; I tried, and it threw that error).
Once you see the amount of free space listed next to Free PE / Size, run
lvresize -L +xxGB /dev/main/root
where xx is the number of whole gigabytes of free space available, rounded down.
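As a worked example with the figure I mentioned above: if vgdisplay shows something like 38.16 GB free, round down and ask for 38, not 39 (rounding up to 39 is exactly what threw the error for me):

lvresize -L +38GB /dev/main/root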
Lastly, resize your filesystem
ext2online -d -v /dev/main/root
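A side note for when I repeat this on SME 8: as far as I know, ext2online is gone in CentOS 5 and the online grow is done with resize2fs instead, so on the 8b7 box I expect the last step to be:

resize2fs /dev/main/root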
That is it! You can run
df -T
to see the amount of free space you have. I am not 100% sure about everything I did, but I have to give credit to ldkeen. Also, if you mess up and pvscan reports that it is missing a UUID, run
vgreduce --removemissing main
That cleared the error I got from my initial, incorrect partition type.
Please let me know if any of this is wrong, but for me, on a fresh, clean, non-production SME 7.5.1 server, it worked. I will now test on my SME 8b7 box with two RAID1 devices (four hard drives in total); a rough sketch of what I expect that to look like is at the very bottom of this post. For reference, here is the output from my test server:
[root@mtrosebackup ~]# lvm pvscan
PV /dev/md2 VG main lvm2 [37.16 GB / 0 free]
PV /dev/hdc1 VG main lvm2 [38.16 GB / 0 free]
PV /dev/hdd1 VG main lvm2 [37.25 GB / 1.41 GB free]
Total: 3 [112.56 GB] / in use: 3 [112.56 GB] / in no VG: 0 [0 ]
[root@mtrosebackup ~]# vgdisplay
--- Volume group ---
VG Name main
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 3
Act PV 3
VG Size 112.56 GB
PE Size 32.00 MB
Total PE 3602
Alloc PE / Size 3557 / 111.16 GB
Free PE / Size 45 / 1.41 GB
VG UUID 1Kyluo-Caox-Z8zo-QOtX-pQoK-1T9x-GRBLpB
[root@mtrosebackup ~]# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/main-root
ext3 114274928 19951400 88522904 19% /
/dev/md1 ext3 101018 10745 85057 12% /boot
none tmpfs 241672 0 241672 0% /dev/shm
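And here is roughly what I expect the same procedure to look like on the SME 8b7 box with a second RAID1 pair. This is untested, and the device names (/dev/sdc, /dev/sdd, /dev/md3) are just my guesses for that machine, so treat it as a sketch only:

# partition both new disks with a single type fd partition, as above, then build the second RAID1 array
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# hand the new array to LVM and grow root onto it
pvcreate /dev/md3
vgextend main /dev/md3
vgdisplay                          # check Free PE / Size and round down, same as before
lvresize -L +xxGB /dev/main/root
resize2fs /dev/main/root           # SME 8 / CentOS 5 uses resize2fs instead of ext2online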