Koozali.org: home of the SME Server
Obsolete Releases => SME 7.x Contribs => Topic started by: ldkeen on September 17, 2006, 12:20:53 PM
-
Add Extra Disk to LVM on RAID5 HOWTO
Author(s) Lloyd Keen, David Biczo
SCOPE
The purpose of this document is to describe the steps involved in adding an extra disk to a standard SME Server 7.0 installation with three disks already installed (/dev/md1 in RAID1 and /dev/md2 in RAID5). We are going to merge the new disk into the existing arrays and resize the filesystems to make use of the new space. The ability to "grow" a RAID5 array by adding extra disks requires some very recent versions of tools which aren't included as part of the base installation: mdadm v2.5.1, Linux kernel 2.6.17 and LVM2. To complete the HOWTO you will need to download the FINNIX 88.0 live CD, which includes the required tools, from http://www.finnix.org/Download (approximately 88 MB)
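Once you've booted a live CD you can quickly confirm the kernel is new enough. A minimal sketch, assuming a POSIX shell with awk; the `ver_num` helper name is hypothetical:

```shell
# Hypothetical helper: turn "2.6.17" (or "2.6.17-foo") into a sortable number.
ver_num() {
    printf '%s\n' "$1" | awk -F'[.-]' '{ printf "%d%03d%03d\n", $1, $2, $3 }'
}

# RAID5 reshape support needs kernel >= 2.6.17 (and mdadm >= 2.5.1,
# which you can check separately with: mdadm --version).
if [ "$(ver_num "$(uname -r)")" -ge "$(ver_num 2.6.17)" ]; then
    echo "kernel new enough for RAID5 reshape"
else
    echo "kernel too old - boot the FINNIX CD"
fi
```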
This HOWTO has been tested many times over on my machine without any problems; however, YMMV. As usual, all care taken but no responsibility accepted.
****MAKE SURE YOU HAVE BACKED UP YOUR DATA PRIOR TO ATTEMPTING THIS PROCEDURE****
Note: This procedure can be easily adapted to add an extra disk to a system with more than three disks installed.
REQUIREMENTS
A stock standard SME Server 7.0 installation with 3 x SCSI disks located at /dev/sda, /dev/sdb and /dev/sdc, which are all members of the RAID devices /dev/md1 (RAID1) and /dev/md2 (RAID5)
FINNIX 88.0 LIVECD
A fourth SCSI hard disk of the same size as the three already installed.
FOR THE IMPATIENT
Shutdown, add extra disk and reboot from FINNIX 88.0
Partition extra disk the same as the others
#modprobe raid1
#modprobe raid5
#MAKEDEV md
#mdadm -A /dev/md1 /dev/sd[abc]1
#mdadm -A /dev/md2 /dev/sd[abc]2
#mdadm -a /dev/md1 /dev/sdd1
#mdadm -a /dev/md2 /dev/sdd2
#mdadm -G /dev/md1 -n4
#mdadm -G /dev/md2 -n4
WAIT FOR ARRAYS TO RESHAPE
#pvresize -v /dev/md2
#lvresize -L +68GB /dev/main/root (where 68GB is the free space on the PV)
Reboot from the hard disk and resize the filesystem.
#ext2online -d -v /dev/main/root
PROCEDURE
Shut down the server and install the fourth disk. Start the server and boot directly from the FINNIX live CD. Let's confirm a few things before going ahead:
Current status of the RAID arrays:
#cat /proc/mdstat
Personalities : [raid1] [raid5]
md2 : active raid5 sdc2[2] sdb2[1] sda2[0]
143154688 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc1[2] sdb1[1] sda1[0]
104320 blocks [3/3] [UUU]
unused devices: <none>
Current Disk Space:
#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
133G 993M 125G 1% /
/dev/md1 99M 13M 82M 14% /boot
none 506M 0 506M 0% /dev/shm
OK, let's start by partitioning the fourth disk exactly the same as the other three. First we'll inspect the partitioning of one of the current drives:
[root@raid ~]# fdisk /dev/sda
The number of cylinders for this disk is set to 8924.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 8924 71577607+ fd Linux raid autodetect
Command (m for help): q
We can see that the drive has two partitions: partition 1 occupies cylinders 1 to 13, and partition 2 occupies the rest of the space from cylinder 14 to the end (8924). Both partitions are of type fd (Linux raid autodetect) and partition 1 is flagged active. So let's go ahead and partition our fourth disk exactly the same:
[root@raid ~]# fdisk /dev/sdd
The number of cylinders for this disk is set to 8924.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-8924, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-8924, default 8924): 13
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (14-8924, default 14):
Using default value 14
Last cylinder or +size or +sizeM or +sizeK (14-8924, default 8924):
Using default value 8924
Command (m for help): a
Partition number (1-4): 1
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
After writing the partition table confirm the settings are correct:
[root@raid ~]# fdisk /dev/sdd
The number of cylinders for this disk is set to 8924.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 * 1 13 104391 fd Linux raid autodetect
/dev/sdd2 14 8924 71577607+ fd Linux raid autodetect
Command (m for help): Q
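As a cross-check, the Blocks column in the listing above can be reproduced from the drive geometry shown by fdisk (assumed figures: 16065 sectors of 512 bytes per cylinder, with fdisk counting blocks of 1 KB):

```shell
# Partition 2 spans cylinders 14..8924; each cylinder is 16065 sectors,
# and fdisk's Blocks column counts 1 KB units (two sectors per block).
start=14; end=8924
sectors=$(( (end - start + 1) * 16065 ))
echo "$(( sectors / 2 )) blocks"   # the odd leftover sector is fdisk's trailing '+'
```

This works out to 71577607 blocks, matching the 71577607+ shown for /dev/sdd2.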
OK, now let's add the new disk to the existing arrays. From the command prompt enter the following:
#modprobe raid1
#modprobe raid5
#MAKEDEV md
Assemble the existing arrays:
#mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
#mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2 /dev/sdc2
Now add the new disk to the arrays:
#mdadm --add /dev/md1 /dev/sdd1
#mdadm --add /dev/md2 /dev/sdd2
OK, now let's "grow" the arrays onto the new disk:
#mdadm --grow /dev/md1 --raid-devices=4
#mdadm --grow /dev/md2 --raid-devices=4
Reshaping the RAID will take considerable time. DO NOT reboot or shut down the server during this stage. You can check the status at any time with "cat /proc/mdstat". Once mdadm has finished reshaping the array we need to resize both the Physical Volume (PV) and the Logical Volume (LV).
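Before moving on, a rough sanity check on what the grow should produce. RAID5 spends one member's worth of space on parity, so usable capacity is (members - 1) x member size; the per-member figure below is assumed from the mdstat output near the top, rounded down to the 256 KB chunk:

```shell
member_kb=71577344                        # assumed per-member capacity in 1 KB blocks
echo "3 disks: $(( 2 * member_kb )) KB"   # matches md2's 143154688 blocks above
echo "4 disks: $(( 3 * member_kb )) KB"   # expected array size after the reshape
```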
#pvresize -v /dev/md2
Now we need to resize the LV, but before we do that let's have a look at how much free space we have to use:
#vgdisplay
--- Volume group ---
VG Name main
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 204.78 GB
PE Size 32.00 MB
Total PE 6553
Alloc PE / Size 4367 / 136.47 GB
Free PE / Size 2186 / 68.31 GB
VG UUID Btsy6N-bwJ8-koAc-UKVh-M5wm-9mBD-JIvPU2
You can see from the above that we now have 68.31 GB free to use, so let's go ahead and resize the LV to make use of the extra 68 GB:
#lvresize -L +68GB /dev/main/root
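Where the +68GB figure comes from: free space is simply Free PE times PE size, both read from the vgdisplay output above.

```shell
free_pe=2186   # Free PE from vgdisplay
pe_mb=32       # PE Size from vgdisplay
echo "$(( free_pe * pe_mb )) MB free"   # 69952 MB, i.e. about 68.31 GB
```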
It is safe to reboot the server now, but we still have to resize the ext3 filesystem. Shut down, remove the FINNIX CD and boot into SME Server, then perform the following from the command prompt:
#ext2online -d -v /dev/main/root
Finally, let's check to make sure that we can see the extra space:
#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
200G 996M 189G 1% /
/dev/md1 99M 13M 82M 14% /boot
none 506M 0 506M 0% /dev/shm
Best Regards
Lloyd
-
You wouldn't happen to know how to convert the Linux Raid autodetect partition back to something like an ext2 partition would you? I'm trying to expand the partition, and am having trouble finding a tool that will resize the Linux Raid partition. I'm running under VMWare on a Win2K server machine that is already RAID 5, so I don't need the raid within the VM, but I can't find a tool to resize partition as it is. I've tried Acronis Disk Director Suite and the GNU Parted bootable CD, both of which don't like the x'FD' partition type.
mudtoe
-
@ldkeen
Thanks for this. I'm sure many will benefit from your work. Keep up the great documentation.
@mudtoe
"You wouldn't happen to know how to convert the Linux Raid autodetect partition back to something like an ext2 partition would you?"
#CHANGE ALL PARTITION TYPES TO LINUX
-boot from KNOPPIX
-open terminal
-su
-fdisk /dev/hda
-t <to change type>
-L <to see hex codes for different FS types>
-83 (i think this is linux, but check in the list to be certain)
-w
-q
-exit
-exit
-reboot
-
Thanks for the response. However, I'm having some trouble. Here is what I got when I tried what you suggested:
(http://home.fuse.net/e2d9e6/temp/screen.jpg)
Not sure what to do next.
mudtoe
-
Try "fdisk /dev/sda"
NOT "fdisk /dev/sda1"
Also, this may cause you problems, as the mirror array has already been created and is expecting linux raid partitions. You would probably be better off re-installing SME and telling it not to use RAID/LVM during the install.
How do I do that?
Boot from the SME CD while holding down the shift key. At the LILO boot prompt, type "linux text partition". During the install you will be presented with a screen to tweak your disk layout. Modify the layout to remove the RAID/LVM and just use standard disk volumes.
You can also change the sizes of your swap, boot and root partitions while you're at it!
No need to thank me ;-P
-
I just got around to trying this again, and as you thought, changing the partition types kept it from booting correctly (it couldn't find volume group "main").
I guess I'm going to have to reinstall, but how do I backup everything I've done to date from within the VM? The VM disk is larger than 4gb now, and from what I read, 4gb is the largest that can be backed up through the browser. Is there a linux tool that's akin to windows backup, that I can use to backup the whole SME Server installation to an SMB share? That way I can restore it all after I do a reinstall.
P.S. Thanks for your assistance to date.
mudtoe
-
Have a look at Darrel May's Backup2 contrib. It should be able to backup to a share.
-
I'm looking at that now. Just one other question. If I backup the VM, then do a reinstall, followed by a restore, what directories and/or files do I skip on the restore, so that I don't restore the linux O/S drivers and settings, which on the backup would be configured to expect the raid partitions? I already know that the existing configuration won't boot if the partitions are simply changed to be non-raid (that's what I just tried).
This would be difficult in the windows world because they foolishly put both application and system settings in the registry. One would have to do a "repair install" over the top of the existing windows installation in order to leave the applications alone, but force a redetection of the hardware configuration. If the linux system drivers and settings are in completely separate locations from the application stuff, it should be possible to do it with just a backup, fresh install, and then selective restore; or if there is some way to do the equivalent of the "repair install" in the linux world.
mudtoe
-
Mudtoe,
Let me get this straight. You are running a win2k server with a raid5 setup as the host machine which is running an smeserver virtual machine as a guest of the win2k box and you would like to resize the smeserver virtual machine. Is that correct??
If so, then the following should do it:
Increase the size of the virtual machine:
#vmware-vdiskmanager -x 10GB myDisk.vmdk
Resize the Physical Volume as per the howto:
pvresize -v /dev/md2
Resize the Logical Volume:
lvresize -L +(free space) /dev/main/root
Resize the ext3 filesystem:
ext2online -d -v /dev/main/root
I haven't tested this, so it'd be good if you could try it out and report back.
Regards Lloyd
-
You are correct in your assumptions about my environment. I'm running win2k server with its own raid 5. All I really want to do is expand the size of SME server's disk files. I was going the route of changing the partition type to regular linux so that I could use the regular tools from within the VM (partition magic or acronis disk director) to expand the size of the SME server partition. However, if it's easier to expand the size without removing the raid designation, even though it's unnecessary within the VM, then that's acceptable.
I tried what you suggested, but had some problems. It couldn't find /dev/md2. I saw from the first post in the series that you were doing something that looked like it was associating the /dev/mdx names with the "real" drives (/dev/sda). I tried doing some of what you had at the start in order to establish the association, and although I did get the md's to exist, I wasn't able to expand the thing, and was left with something that wasn't bootable (I had backed up the files outside the VM first, so no harm done). I also wasn't able to divine the purpose of some of the commands you had up front, such as the modprobes, so that's probably how I got goofed up (although I'm a techie, I'm not a linux person, so I didn't know the purpose of everything you had there). If you can walk me through the right sequence of commands, including all the prerequisite mappings, I think this could work. From my earlier post you can see how my device mappings look.
Also, is there a way to get it to show me how much free space exists on the physical drive (i.e. not assigned to a partition)? I had already done the vmware disk management thing and saw via partition magic and acronis disk director that the space was added to the disk; it's just that those utilities don't like the linux raid partition type, so they won't resize it to use the extra space. I got stuck when I tried to do the pvresize, as it wouldn't accept any reasonable value I put in for growth. I was trying to go from 20GB to 100GB, so I tried putting in +80GB which it said wasn't available. I tried a couple of values less than that, including +60GB which I knew had to be available, but it wouldn't take that either, so at that point I gave up, figuring that I'd done something wrong. However it would be nice to get a figure for the free space from a linux tool, just to verify that it's seeing the extra space the same way as the stand-alone partition utilities do.
Oh, and I'm assuming that this was all done from the stand alone Finnix CD, which I downloaded and booted from. If not, then that's the problem.
Assistance appreciated.
mudtoe
-
Mudtoe
Can you post the output of:
#cat /proc/mdstat
#df -h
#pvdisplay
#lvdisplay
Lloyd
-
Here is what I got:
login as: root
root@172.26.199.252's password:
Last login: Tue Oct 17 10:55:13 2006
[root@srw-smeserver ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[0]
20860288 blocks [2/1] [U_]
md1 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
unused devices: <none>
[root@srw-smeserver ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
20G 3.0G 16G 17% /
/dev/md1 99M 19M 75M 21% /boot
none 157M 0 157M 0% /dev/shm
[root@srw-smeserver ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name main
PV Size 19.88 GB / not usable 0
Allocatable yes
PE Size (KByte) 32768
Total PE 636
Free PE 1
Allocated PE 635
PV UUID 1qQ6Tk-pC59-L661-FkeI-B8Pi-rH6D-GeLzTz
[root@srw-smeserver ~]# lvdisplay
--- Logical volume ---
LV Name /dev/main/root
VG Name main
LV UUID kFiXCV-23jG-6RJ6-PzTv-Q1rv-GutR-eJZosL
LV Write Access read/write
LV Status available
# open 1
LV Size 19.34 GB
Current LE 619
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/main/swap
VG Name main
LV UUID DbEivz-MPkK-RIJO-7OMg-4k6b-St64-Ke6yp3
LV Write Access read/write
LV Status available
# open 1
LV Size 512.00 MB
Current LE 16
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
[root@srw-smeserver ~]#
Also, here's a screen from my partitioning tool showing the configuration of the VM drive:
(http://home.fuse.net/e2d9e6/temp/Screen3.jpg)
mudtoe
-
Mudtoe,
I'm assuming that this was all done from the stand alone Finnix CD
No. You don't need the Finnix CD at all. The only reason that I required it was that I needed to grow a raid 5 onto a new disk. You've already increased the size of your VM.
Boot directly from the smeserver VM and try the procedure again, but prior to resizing the Logical Volume you will need to do a vgdisplay to find the free space. So it should be:
Resize the vm (already done)
#pvresize -v /dev/md2
#vgdisplay (to find free space)
#lvresize -L +80GB /dev/main/root (where 80GB is the free space listed above)
#ext2online -d -v /dev/main/root
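If the vgdisplay table is hard to eyeball, the free-space figure can be pulled out with a one-liner. The sample line below is assumed for illustration; on a live system pipe vgdisplay itself into awk instead:

```shell
# Hypothetical sample line; on a real system use: vgdisplay | awk '/Free/ { print $(NF-1) }'
line='  Free  PE / Size       2186 / 68.31 GB'
free_gb=$(printf '%s\n' "$line" | awk '/Free/ { print $(NF-1) }')
echo "free: ${free_gb} GB"
```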
Lloyd
-
For some reason it's not finding the free space. Here is what I got:
login as: root
root@172.26.199.252's password:
Last login: Tue Oct 17 12:49:31 2006
[root@srw-smeserver ~]# pvresize -v /dev/md2
Using physical volume(s) on command line
Archiving volume group "main" metadata (seqno 3).
No change to size of physical volume /dev/md2.
Resizing volume "/dev/md2" to 41720192 sectors.
Updating physical volume "/dev/md2"
Creating volume group backup "/etc/lvm/backup/main" (seqno 4).
Physical volume "/dev/md2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
[root@srw-smeserver ~]# vgdisplay
--- Volume group ---
VG Name main
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 19.88 GB
PE Size 32.00 MB
Total PE 636
Alloc PE / Size 635 / 19.84 GB
Free PE / Size 1 / 32.00 MB
VG UUID lJsly1-18dt-ut4B-PkpZ-hCmo-sp08-F2FVOi
[root@srw-smeserver ~]# lvresize -L +80GB /dev/main/root
Extending logical volume root to 99.34 GB
Insufficient suitable allocatable extents for logical volume root: 2559 more required
[root@srw-smeserver ~]#
I tried putting in the resize command anyway, but as I expected I received an error. I'm assuming that the free space was the "Free PE / Size" field, which showed only 1 unit (not sure if it's blocks, tracks, cyls, clusters, or what). I'm going to delete this VM and put everything back the way it was before the next try, just in case having the first command work, but not the second, causes something odd to happen to the raid configuration.
mudtoe
-
Mudtoe,
Can you post a printout of /dev/sda using fdisk?
-
Here it is:
[root@srw-smeserver ~]# fdisk -l
Disk /dev/sda: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 2610 20860402+ fd Linux raid autodetect
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 21.3 GB, 21360934912 bytes
2 heads, 4 sectors/track, 5215072 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
[root@srw-smeserver ~]#
-
OK, follow the steps below:
Partition the free space:
#fdisk /dev/sda
type "n" for new
type "p" for primary
type "3" for the third partition
accept the default starting cylinder of 2611
accept the default ending cylinder of 13054
now type "t" for type
enter "3" for the third partition
then "fd" for Linux Raid
now type "w" to write out the partition table
Initialise the new Physical Volume for use with LVM
#pvcreate /dev/sda3
Extend the existing Volume Group onto the new PV
#vgextend main /dev/sda3
Find how much free space you have
#vgdisplay
Resize the Logical Volume using the free space
#lvresize -L +80GB /dev/main/root
Resize the filesystem
#ext2online -d -v /dev/main/root
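As a quick check of the extent math (values assumed from the posts above: an 80 GB partition and 32 MB physical extents):

```shell
# 80 GB of new space at 32 MB per physical extent:
echo "$(( 80 * 1024 / 32 )) extents"   # 2560, which pvdisplay should report as Total PE
```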
This is all untested - so please make sure you have a backup. Let me know how you go.
Lloyd
-
:D
Your procedure worked. The only thing that had to be changed was that a reboot needed to be performed after the "fdisk" and before the "pvcreate". The fdisk put out a message saying that the kernel wouldn't use the new partition table until after a reboot, and even though "fdisk -l" showed the new partition at that point, the pvcreate said that the partition didn't exist. After a reboot the pvcreate and the rest of the sequence worked just fine.
I do have a couple of remaining questions concerning the resulting configuration. First, when I do a "fdisk -l" I get some messages about partition tables being invalid. I'm not sure if it means anything or not, as I'm not sure if what the fdisk is talking about is a real partition or some artifact of the raid setup. Also the "pvdisplay" shows something a little odd in that one volume is /dev/md2 and the other is /dev/sda3 (I understand what sd type partitions are (SCSI), but not the md ones). I'm including a list of what things look like now:
login as: root
root@172.26.199.252's password:
Last login: Thu Oct 19 00:28:25 2006
[root@srw-smeserver ~]# fdisk -l
Disk /dev/sda: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 2610 20860402+ fd Linux raid autodetect
/dev/sda3 2611 13054 83891430 fd Linux raid autodetect
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 21.3 GB, 21360934912 bytes
2 heads, 4 sectors/track, 5215072 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
[root@srw-smeserver ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
98G 3.1G 90G 4% /
/dev/md1 99M 19M 75M 21% /boot
none 157M 0 157M 0% /dev/shm
[root@srw-smeserver ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name main
PV Size 19.88 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 636
Free PE 0
Allocated PE 636
PV UUID 1qQ6Tk-pC59-L661-FkeI-B8Pi-rH6D-GeLzTz
--- Physical volume ---
PV Name /dev/sda3
VG Name main
PV Size 80.00 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 2560
Free PE 0
Allocated PE 2560
PV UUID ZeWoKc-Ld32-t2Nf-QNEq-NbRi-5BTU-o2Bma0
[root@srw-smeserver ~]# lvdisplay
--- Logical volume ---
LV Name /dev/main/root
VG Name main
LV UUID kFiXCV-23jG-6RJ6-PzTv-Q1rv-GutR-eJZosL
LV Write Access read/write
LV Status available
# open 1
LV Size 99.38 GB
Current LE 3180
Segments 3
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/main/swap
VG Name main
LV UUID DbEivz-MPkK-RIJO-7OMg-4k6b-St64-Ke6yp3
LV Write Access read/write
LV Status available
# open 1
LV Size 512.00 MB
Current LE 16
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
[root@srw-smeserver ~]#
It might be nice to package up the final procedure from this effort and publish it along with your other procedure for adding volumes to the raid configuration. I'd say this thread just became the definitive source for doing disk reorganization on an SME Server installation.
Once again, thanks much for your assistance.
mudtoe
-
Mudtoe
First, when I do a "fdisk -l" I get some messages about partition tables being invalid
Are you talking about this message:
Disk /dev/md1 doesn't contain a valid partition table
If so, that is a normal message with software RAID. I'm not exactly sure, but I think only physical disks contain a partition table; the partition table for the metadevice is stored in software. Either way it's nothing to worry about.
"pvdisplay" shows something a little odd in that one volume is /dev/md2 and the other is /dev/sda3
The best way to go would have been to simply resize /dev/md2; as it stands, we have created a new partition and extended the logical volume onto it. I'm in the process of setting up a VM to have a bit more of a play with it, and over the next week or so I'll post some more info. That's the beauty of LVM: there are a few ways to achieve the end result.
Lloyd
-
Yes, those were the messages I was referring to. What exactly is the /dev/mdx device type? I know that /dev/hdx refers to ide hard drives, and /dev/sdx are SCSI drives, but the mdx type is new to me, although I'm not a linux expert by any means. It also seems a bit odd to see the types mixed like they show on the pvdisplay.
I'm assuming that there really isn't any problem (e.g. performance, stability, etc.) with the way I have it configured (binding two partitions together in the configuration), versus having expanded the md2 partition, other than just general neatness of the configuration. As I'm the only person fooling with this, it won't be an issue down the road with regard to someone else trying to figure out why the partition configuration is the way it is.
mudtoe
-
What exactly is the /dev/mdx device type?
md is short for metadevice and is the terminology used when combining one or more physical drives to create a virtual drive (or metadevice). If you combine 2 x 20GB disks (which could be /dev/sda1 and /dev/sdc1) to form a raid 1 mirror, that virtual disk would be referred to as /dev/md1. The next raid device would be md2 and so on.
I'm assuming that there really isn't any problem...other than just general neatness
That's about it in a nutshell.
Lloyd
-
Mudtoe,
the best way to go would have been to simply resize /dev/md2
After some further testing I've found out that the metadevice can be resized using the following:
#mdadm --grow --size=max /dev/md2
So the more elegant way of doing it would have been:
Resize the vm (already done)
Boot from the Finnix CD or sme rescue but don't mount the filesystem, delete partition sda2 and recreate using free space (this doesn't destroy the data, it just updates the partition table with info about the new free space)
Reboot into smeserver
Resize the metadevice
#mdadm -G -z max /dev/md2
#pvresize -v /dev/md2
#vgdisplay (to find free space)
#lvresize -L +80GB /dev/main/root (where 80GB is the free space listed above)
#ext2online -d -v /dev/main/root
Note, I've only partially tested this. I'll fully test this soon and report back.
Regards, Lloyd
-
What exactly is the /dev/mdx device type?
md is short for metadevice and is the terminology used when combining one or more physical drives to create a virtual drive (or metadevice). If you combine 2 x 20GB disks (which could be /dev/sda1 and /dev/sdc1) to form a raid 1 mirror, that virtual disk would be referred to as /dev/md1. The next raid device would be md2 and so on.
I'm assuming that there really isn't any problem...other than just general neatness
That's about it in a nutshell.
Lloyd
I could be wrong but, as a learning exercise....
1. In general /dev/mdx is a meta device. Specifically in the stock SME7, the /dev/mdx are raid drives.
2. SME7 builds LVM drives on top of the raid drives (Even if there is only one HD)
So to look at the SME7 setup - single HD
1. The installer builds two RAID 1 partitions, /dev/md1 and /dev/md2, in degraded mode since there is only 1 HD.
2. /dev/md1 is the boot partition and is mounted on /boot
3. /dev/md2 is used as a pv for the LVM
4. Two logical volumes are built on the pv (/dev/main/swap and /dev/main/root)
Therefore, showing /dev/md2 and /dev/sda3 as the physical volumes in the pvdisplay says that the original is a raid device (/dev/md2) and the new drive (/dev/sda3) is not a raid device.
Please feel free to correct anything in this post...
ed
-
Mudtoe,
the best way to go would have been to simply resize /dev/md2
After some further testing I've found out that the metadevice can be resized using the following:
#mdadm --grow --size=max /dev/md2
So the more elegant way of doing it would have been:
Resize the vm (already done)
Boot from the Finnix CD or sme rescue but don't mount the filesystem, delete partition sda2 and recreate using free space (this doesn't destroy the data, it just updates the partition table with info about the new free space)
Reboot into smeserver
Resize the metadevice
#mdadm -G -z max /dev/md2
#pvresize -v /dev/md2
#vgdisplay (to find free space)
#lvresize -L +80GB /dev/main/root (where 80GB is the free space listed above)
#ext2online -d -v /dev/main/root
Note, I've only partially tested this. I'll fully test this soon and report back.
Regards, Lloyd
I've tested this, since I wanted to place a larger harddisk instead of the old one.
I use a single disk.
I did:
Boot from SME server CD, skip media test, leave it there, do nothing else here.
Go to another console using CTRL ALT F2
#fdisk /dev/hda
d to delete a partition, then 2 for /dev/hda2
n for a new partition, p for primary, partition number 2, accepting the default start and end cylinders.
t to change the type of partition 2 to fd for linux raid.
Then reboot into SME server.
#pvresize -v /dev/hda2
#vgdisplay (to find free space)
#lvresize -L +80GB /dev/main/root (where 80GB is the free space listed above)
#ext2online -d -v /dev/main/root
This worked fine on my machine. Thanks everyone for the help this topic provided; it solved the problem I'd been trying to crack for a couple of days. :D
-
Thanks for the Finnix link. That fixed me.
For those that have a raid 5 array hanging off ibays, here is a method to grow that array with a new disk.
1. Boot from the Finnix disk you have already made.
2. cat /proc/mdstat and check that the target array is stable. If it's rebuilding etc., wait until it's done.
3. mdadm /dev/mdx -a /dev/sdx1 (Add the new disk as a spare raid member)
4. mdadm --grow /dev/mdx --raid-disks=x (This grows the number of disks to x )
5. Wait several hours ... cat /proc/mdstat for an update
6. e2fsck -f /dev/mdx
7. resize2fs -p /dev/mdx
Woo hoo /dev/mdx is now bigger.
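Step 5's wait can be automated with a small watch loop. This is just a sketch, assuming a Linux shell where /proc/mdstat is readable; Ctrl-C aborts it:

```shell
# Poll until no reshape line is left in /proc/mdstat.
while grep -q 'reshape' /proc/mdstat 2>/dev/null; do
    sleep 300   # check every five minutes
done
echo "no reshape in progress"
```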