Add Extra Disk to LVM on RAID5 HOWTO
Author(s): Lloyd Keen, David Biczo

SCOPE
The purpose of this document is to describe the steps involved in adding an extra disk to a standard SME Server v7.0 installation with three disks already installed (/dev/md1 in RAID1 and /dev/md2 in RAID5). What we are going to do is merge the new disk into the existing setup and resize the filesystems to make use of it. You can grow a RAID5 in size using the SME Server disk, but the ability to “grow” a RAID5 array by adding extra disks requires some very recent versions of tools which aren't included as part of the base installation: mdadm v2.5.1, Linux kernel 2.6.17 and LVM2. In order to complete this HOWTO you will need to download a LiveCD called FINNIX 88.0, which includes the required tools, from here:
http://www.finnix.org/Download (Approximately 88MB)
This HOWTO has been tested many times over on my machine without any problems; however, YMMV. As usual, all care taken but no responsibility accepted.
****MAKE SURE YOU HAVE BACKED UP YOUR DATA PRIOR TO ATTEMPTING THIS PROCEDURE****
Note: This procedure can easily be adapted to add an extra disk to a system with more than three disks installed.
REQUIREMENTS
A stock standard SME Server v7.0 installation with 3 x SCSI disks located at /dev/sda, /dev/sdb and /dev/sdc, all of which are members of the RAID devices /dev/md1 (RAID1) and /dev/md2 (RAID5)
FINNIX 88.0 LIVECD
A fourth SCSI hard disk of the same size as the three already installed.
FOR THE IMPATIENT
Shut down, add the extra disk and reboot from FINNIX 88.0
Partition extra disk the same as the others
#modprobe raid1
#modprobe raid5
#MAKEDEV md
#mdadm -A /dev/md1 /dev/sd[abc]1
#mdadm -A /dev/md2 /dev/sd[abc]2
#mdadm -a /dev/md1 /dev/sdd1
#mdadm -a /dev/md2 /dev/sdd2
#mdadm -G /dev/md1 -n4
#mdadm -G /dev/md2 -n4
WAIT FOR ARRAYS TO RESHAPE
#pvresize -v /dev/md2
#lvresize -L +68GB /dev/main/root (where 68GB is the free space on the PV)
Reboot from the hard disk and resize the filesystem.
#ext2online -d -v /dev/main/root
PROCEDURE
Shut down the server and install the fourth disk. Start the server and boot directly from the FINNIX LiveCD. Let's just confirm a few things before going ahead:
Current status of the RAID arrays:
#cat /proc/mdstat
Personalities : [raid1] [raid5]
md2 : active raid5 sdc2[2] sdb2[1] sda2[0]
143154688 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc1[2] sdb1[1] sda1[0]
104320 blocks [3/3] [UUU]
unused devices: <none>
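Before touching anything, it is worth knowing what size to expect afterwards. A RAID5 array's usable capacity is (n-1) times the per-disk size, so the new md2 size can be estimated from the numbers above (a back-of-the-envelope sketch; the per-disk figure is slightly below the raw partition size because of chunk rounding):

```shell
# Estimate md2's size after growing from 3 to 4 disks.
md2_kb=143154688                 # current md2 size from /proc/mdstat (3-disk RAID5)
per_device_kb=$((md2_kb / 2))    # with 3 disks, 2 of them hold data
new_md2_kb=$(( (4 - 1) * per_device_kb ))   # with 4 disks, 3 hold data
echo "expected md2 size after grow: ${new_md2_kb} KB"   # 214732032 KB, ~204.8 GB
```

That ~204.8 GB figure should match the VG Size reported by vgdisplay once the array has been reshaped and the PV resized.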
Current Disk Space:
#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
133G 993M 125G 1% /
/dev/md1 99M 13M 82M 14% /boot
none 506M 0 506M 0% /dev/shm
OK, let's start by partitioning the fourth disk exactly the same as the other three. First, we'll inspect the partitioning of one of the current drives:
[root@raid ~]# fdisk /dev/sda
The number of cylinders for this disk is set to 8924.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 8924 71577607+ fd Linux raid autodetect
Command (m for help): q
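Incidentally, the “Blocks” figures fdisk prints follow directly from the CHS geometry it reports, so you can sanity-check them with a little arithmetic (pure shell, safe to run anywhere; one block here is 1 KB, i.e. two 512-byte sectors):

```shell
# One cylinder = 255 heads x 63 sectors = 16065 sectors of 512 bytes.
sectors_per_cyl=$((255 * 63))
# Partition 1 spans cylinders 1-13, minus the 63-sector track reserved for the MBR:
p1_blocks=$(( (13 * sectors_per_cyl - 63) / 2 ))
# Partition 2 spans cylinders 14-8924; the odd sector count is why fdisk prints a "+":
p2_blocks=$(( (8924 - 14 + 1) * sectors_per_cyl / 2 ))
echo "$p1_blocks $p2_blocks"     # 104391 71577607
```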
We can see that the drive has two partitions: partition 1 occupies cylinders 1-13, and partition 2 occupies the rest of the space, from cylinder 14 to the end (8924). Both partitions are of type fd (Linux raid autodetect) and partition 1 is set active. So let's go ahead and partition our fourth disk exactly the same:
[root@raid ~]# fdisk /dev/sdd
The number of cylinders for this disk is set to 8924.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-8924, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-8924, default 8924): 13
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (14-8924, default 14):
Using default value 14
Last cylinder or +size or +sizeM or +sizeK (14-8924, default 8924):
Using default value 8924
Command (m for help): a
Partition number (1-4): 1
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
After writing the partition table confirm the settings are correct:
[root@raid ~]# fdisk /dev/sdd
The number of cylinders for this disk is set to 8924.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 * 1 13 104391 fd Linux raid autodetect
/dev/sdd2 14 8924 71577607+ fd Linux raid autodetect
Command (m for help): q
OK, now let's add the new disk to the existing arrays. From the command prompt enter the following:
#modprobe raid1
#modprobe raid5
#MAKEDEV md
Assemble the existing arrays:
#mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
#mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2 /dev/sdc2
Now add the new disk to the arrays:
#mdadm --add /dev/md1 /dev/sdd1
#mdadm --add /dev/md2 /dev/sdd2
OK, now let's “grow” both arrays onto the new disk:
#mdadm --grow /dev/md1 --raid-devices=4
#mdadm --grow /dev/md2 --raid-devices=4
Reshaping the arrays will take considerable time - DO NOT reboot or shut down the server during this stage. You can check the status at any time with “cat /proc/mdstat”. Once mdadm has finished reshaping the arrays we need to resize both the Physical Volume (PV) and the Logical Volume (LV).
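While the reshape runs, /proc/mdstat shows a progress line you can keep an eye on (for example with “watch -n 30 cat /proc/mdstat”). A small sketch of pulling the percentage out of that line; the sample line below is made up for illustration and real numbers will differ:

```shell
# Hypothetical progress line as printed by md during a reshape:
line='      [=>...................]  reshape =  5.4% (3893248/71577344) finish=212.6min speed=5312K/sec'
# Extract the percentage figure:
pct=$(printf '%s\n' "$line" | sed -n 's/.*reshape = *\([0-9.]*\)%.*/\1/p')
echo "reshape ${pct}% complete"
# On the live server you would feed it the real file instead:
#   sed -n 's/.*reshape = *\([0-9.]*\)%.*/\1/p' /proc/mdstat
```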
#pvresize -v /dev/md2
Now we need to resize the LV, but before we do that let's have a look at how much free space we have to use:
#vgdisplay
--- Volume group ---
VG Name main
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 204.78 GB
PE Size 32.00 MB
Total PE 6553
Alloc PE / Size 4367 / 136.47 GB
Free PE / Size 2186 / 68.31 GB
VG UUID Btsy6N-bwJ8-koAc-UKVh-M5wm-9mBD-JIvPU2
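The free-space figure vgdisplay reports can be cross-checked from the extent counts, since LVM allocates space in whole Physical Extents (a quick arithmetic sketch using the values above):

```shell
pe_size_mb=32     # "PE Size" from vgdisplay
free_pe=2186      # "Free PE" from vgdisplay
free_mb=$((free_pe * pe_size_mb))
echo "free space: ${free_mb} MB"    # 69952 MB, i.e. about 68.31 GB
```

Because allocation is extent-granular, asking lvresize for +68GB comfortably fits inside the 68.31GB actually free.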
You can see from the above that we now have 68.31GB free to use, so let's go ahead and resize the LV, making use of the extra 68GB:
#lvresize -L +68GB /dev/main/root
It is safe to reboot the server now, but we still have to resize the ext3 filesystem. Shut down, remove the FINNIX CD and boot into SME Server, then perform the following from the command prompt:
#ext2online -d -v /dev/main/root
Finally, let's check to make sure that we can see the extra space:
#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
200G 996M 189G 1% /
/dev/md1 99M 13M 82M 14% /boot
none 506M 0 506M 0% /dev/shm
Best Regards
Lloyd