Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: Michail Pappas on July 13, 2011, 11:37:17 AM
-
Hello all,
this is a 7.5.1 system that originally had a single 240GB hard disk. Two 500GB hard disks were bought to replace it. This is what I did:
1) I connected only one of the two new disks and followed the instructions in
http://wiki.contribs.org/Raid#Adding_another_Hard_Drive_Later (a rough manual equivalent of the resync step is sketched right after this list).
2) After the array was in sync, I shut down the system, removed the old 240GB disk and added the second new 500GB disk.
3) Again I followed the instructions in http://wiki.contribs.org/Raid#Adding_another_Hard_Drive_Later
4) After the disks were in sync, I tried to follow the instructions in http://wiki.contribs.org/Raid#Upgrading_the_Hard_Drive_Size
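(Regarding steps 1 and 3: if I remember right, the "Adding another Hard Drive Later" procedure goes through the server-manager's disk redundancy screen. A rough manual equivalent of that resync step would be the following, with device names as assumed for my layout:
# sfdisk -d /dev/sda | sfdisk /dev/sdb      (copy the existing partition layout to the new disk)
# mdadm --add /dev/md1 /dev/sdb1            (add the new partitions to both arrays)
# mdadm --add /dev/md2 /dev/sdb2
# watch -n 30 cat /proc/mdstat              (wait for the resync to complete)
)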
The "mdadm --grow /dev/md2 --size=max" command runs without reporting anything. Running the pvresize commands reports that "1 physical volume(s) resized / 0 physical volume(s) not resized". However, the lvresize command says that "New size (7387 extents) matches existing size (7387 extents)". Additionally looking at the disks nothing seems to have changed regarding their sizes.
Perhaps the issue is that the partitions created on the first 500GB disk installed (sdb here) were the same size as those on the original 250GB disk:
# fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104384+  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              13       60801   488279647   fd  Linux raid autodetect
Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14       30401   244091610   fd  Linux raid autodetect
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 249.9 GB, 249949716480 bytes
2 heads, 4 sectors/track, 61022880 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/dm-0: 247.8 GB, 247866589184 bytes
2 heads, 4 sectors/track, 60514304 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 2080 MB, 2080374784 bytes
2 heads, 4 sectors/track, 507904 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
If this is the case, how can I correct it? FYI, the output of cat /proc/mdstat, vgdisplay, pvdisplay, lvdisplay and mount follows:
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[0] sda2[1]
244091520 blocks [2/2] [UU]
md1 : active raid1 sdb1[0] sda1[1]
104320 blocks [2/2] [UU]
unused devices: <none>
# vgdisplay
--- Volume group ---
VG Name main
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 232.78 GB
PE Size 32.00 MB
Total PE 7449
Alloc PE / Size 7449 / 232.78 GB
Free PE / Size 0 / 0
# pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name main
PV Size 232.78 GB / not usable 2.44 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 7449
Free PE 0
Allocated PE 7449
# lvdisplay
--- Logical volume ---
LV Name /dev/main/root
VG Name main
LV UUID xxxxxxxxxxxxxxxxxxx
LV Write Access read/write
LV Status available
# open 1
LV Size 230.84 GB
Current LE 7387
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/main/swap
VG Name main
LV UUID xxxxxxxxxxxxxxxx
LV Write Access read/write
LV Status available
# open 1
LV Size 1.94 GB
Current LE 62
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
# mount
/dev/mapper/main-root on / type ext3 (rw,usrquota,grpquota)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/md1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
-
Seems likely that in step (4) of the OP, immediately after the second new disk gets synced, the first large disk added has to be removed from the md arrays, have its partitions erased and then be re-added to md, before the relevant commands in http://wiki.contribs.org/Raid#Upgrading_the_Hard_Drive_Size are run. Will confirm this as soon as the procedure here finishes.
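Roughly what I have in mind (untested yet; assuming the first new disk is sdb, adjust device names to taste):
# mdadm /dev/md1 --fail /dev/sdb1 --remove /dev/sdb1   (drop the first disk from both arrays)
# mdadm /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2
# dd if=/dev/zero of=/dev/sdb bs=512 count=1           (erase its partition table)
  ... recreate full-size partitions of type fd with fdisk ...
# mdadm /dev/md1 --add /dev/sdb1                       (re-add to the arrays and let them resync)
# mdadm /dev/md2 --add /dev/sdb2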
-
Well, that didn't work. What worked was modifying steps 2 onwards into the following:
2) After the array was in sync, I shut down the system and removed the old 240GB disk, but did not add the second new hard disk immediately. Instead, I followed the instructions in http://wiki.contribs.org/Raid#Upgrading_the_Hard_Drive_Size, running the 4 commands listed. Now the partitions of the (single) disk reflected the real size of the disk!
3) I shut down again, added the second new disk and followed the instructions in http://wiki.contribs.org/Raid#Adding_another_Hard_Drive_Later
4) That's all
As you can see, the main difference is correcting the too-small partition size inherited from the smaller disk, by running the lvm/md extension commands while there is only a single new/large disk in the array.
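For anyone repeating this, a quick way to check that the grow actually took effect before re-adding the second disk (the output will obviously differ per system):
# fdisk -l                  (partitions should now span the whole 500GB disk)
# cat /proc/mdstat          (md2 should report the larger block count)
# pvdisplay
# vgdisplay                 (PV/VG size should now be around 465 GB)
# df -h /                   (the root filesystem should show the extra space)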
Not sure if this is the correct approach or not, hope it is :)
-
Ok, this is turning into a monologue :)
This did work, that is, removing the disk from the array. However, for some reason the md1/md2 arrays are listed as degraded, and there seems to be a ghost of the disk I removed:
# mdadm --detail /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Wed May 4 15:20:53 2011
Raid Level : raid1
Array Size : 488279552 (465.66 GiB 500.00 GB)
Device Size : 488279552 (465.66 GiB 500.00 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 13 14:20:40 2011
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 50cb3364:08bd93ef:7a106c13:4908045e
Events : 0.2359579
    Number   Major   Minor   RaidDevice   State
       0        0        0       -        removed
       1        8        2       1        active sync   /dev/sda2
As a result, I cannot add the second disk to the array. What should I do?
-
Problem solved; the last post above was an error of mine.