Koozali.org: home of the SME Server
Obsolete Releases => SME Server 8.x => Topic started by: jameswilson on November 25, 2013, 05:03:35 PM
-
A few weeks ago I had a dead RAID 6 array; in the end I swapped out one of the faulty drives and the machine came back up and spent a couple of days resyncing.
Unfortunately I now have this:
Current RAID status:
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdb[1]
104320 blocks [7/1] [_U_____]
md127 : active raid1 sda1[0] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6]
104320 blocks [7/6] [U_UUUUU]
md2 : active raid6 sdg2[1] sdf2[5] sde2[4] sdd2[3] sdc2[2] sda2[0]
7813629952 blocks level 6, 256k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
devices: $VAR1 = {
'/dev/md2' => {
'PreferredMinor' => '2',
'RaidLevel' => 'raid6',
'ChunkSize' => '256K',
'2' => ' 2 8 34 2 active sync /dev/sdc2
',
'State' => 'clean',
'DeviceSize' => '1953407488',
'1' => ' 1 8 98 1 active sync /dev/sdg2
',
'SpareDevices' => '0',
'0' => ' 0 8 2 0 active sync /dev/sda2
',
'RaidDevices' => '6',
'FailedDevices' => '0',
'UpdateTime' => 'Mon Nov 25 16:01:18 2013',
'ArraySize' => '7813629952',
'UUID' => 'a09e8b64:b3493c94:41c75907:11b8d940',
'CreationTime' => 'Tue Jun 19 18:37:05 2012',
'WorkingDevices' => '6',
'3' => ' 3 8 50 3 active sync /dev/sdd2
',
'Persistence' => 'Superblock is persistent',
'4' => ' 4 8 66 4 active sync /dev/sde2
',
'UsedDisks' => [
'sda',
'sdg',
'sdc',
'sdd',
'sde',
'sdf'
],
'Version' => '0.90',
'TotalDevices' => '6',
'Events' => '0.756186',
'5' => ' 5 8 82 5 active sync /dev/sdf2
',
'ActiveDevices' => '6'
},
'/dev/md127' => {
'PreferredMinor' => '127',
'RaidLevel' => 'raid1',
'2' => ' 2 8 33 2 active sync /dev/sdc1
',
'State' => 'clean, degraded',
'DeviceSize' => '104320',
'SpareDevices' => '0',
'0' => ' 0 8 1 0 active sync /dev/sda1
',
'RaidDevices' => '7',
'FailedDevices' => '0',
'UpdateTime' => 'Sun Nov 24 04:22:02 2013',
'ArraySize' => '104320',
'UUID' => '3de57a77:29068a90:59d0e91f:66e7f243',
'6' => ' 6 8 97 6 active sync /dev/sdg1
',
'CreationTime' => 'Tue Jun 19 18:37:04 2012',
'WorkingDevices' => '6',
'3' => ' 3 8 49 3 active sync /dev/sdd1
',
'Persistence' => 'Superblock is persistent',
'4' => ' 4 8 65 4 active sync /dev/sde1
',
'UsedDisks' => [
'sda',
'sdc',
'sdd',
'sde',
'sdf',
'sdg'
],
'Version' => '0.90',
'TotalDevices' => '6',
'Events' => '0.474',
'5' => ' 5 8 81 5 active sync /dev/sdf1
',
'ActiveDevices' => '6'
},
'/dev/md1' => {
'PreferredMinor' => '1',
'RaidLevel' => 'raid1',
'State' => 'clean, degraded',
'DeviceSize' => '104320',
'1' => ' 1 8 16 1 active sync /dev/sdb
',
'SpareDevices' => '0',
'RaidDevices' => '7',
'FailedDevices' => '0',
'UpdateTime' => 'Sun Nov 24 04:22:02 2013',
'ArraySize' => '104320',
'UUID' => '3de57a77:29068a90:59d0e91f:66e7f243',
'CreationTime' => 'Tue Jun 19 18:37:04 2012',
'WorkingDevices' => '1',
'Persistence' => 'Superblock is persistent',
'UsedDisks' => [
'sdb'
],
'Version' => '0.90',
'TotalDevices' => '1',
'Events' => '0.544',
'ActiveDevices' => '1'
}
};
used_disks: $VAR1 = {
'sde' => 2,
'sdc' => 2,
'sda' => 2,
'sdb' => 1,
'sdg' => 2,
'sdd' => 2,
'sdf' => 2
};
unclean: /dev/md127 => clean, degraded /dev/md1 => clean, degraded
recovering:
Disk redundancy status as of Monday November 25, 2013 16:01:21

Current RAID status:

Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdb[1]
104320 blocks [7/1] [_U_____]
md127 : active raid1 sda1[0] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6]
104320 blocks [7/6] [U_UUUUU]
md2 : active raid6 sdg2[1] sdf2[5] sde2[4] sdd2[3] sdc2[2] sda2[0]
7813629952 blocks level 6, 256k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>

Only some of the RAID devices are unclean.

Manual intervention may be required.

< Next >
Should I be concerned, and how do I get it back to normal?
Thanks
James
-
IMHO you should set as faulty all the /dev/sdX1 partitions used in /dev/md127, one at a time, and add them to /dev/md1.
At the end of that routine you should be able to safely delete /dev/md127.
In any case, before doing anything, I would simulate it on a virtual machine.
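Roughly, that translates into the sketch below. The device names are taken from the status output above, but treat it as an untested outline: check which of the two small arrays /boot is actually mounted from (mount | grep boot) before stopping anything, and rehearse it on the VM first.

mdadm /dev/md127 --fail /dev/sda1 --remove /dev/sda1
mdadm --zero-superblock /dev/sda1      # wipe the old md127 metadata so the fragment is not re-assembled
mdadm /dev/md1 --add /dev/sda1
cat /proc/mdstat                       # let the resync finish before moving the next disk
# repeat for sdc1, sdd1 and sde1, sdf1; the kernel will not let you fail the
# last working member (sdg1), so stop the emptied array instead:
mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sdg1
mdadm /dev/md1 --add /dev/sdg1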
man mdadm
-
I'm way out of my depth here, Stefano, but I got it working last time, so maybe I won't bugger it up.
-
Thanks though.
-
That's why I suggested you run some tests on virtual machines first :-)
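A rehearsal doesn't even need a full SME install: on any scratch VM you can mimic the split-mirror situation with a few loop devices and practise the fail/remove/add/stop sequence there. A minimal sketch, with made-up image paths and loop/md numbers (pick ones that are free on your test box):

dd if=/dev/zero of=/tmp/mdtest0.img bs=1M count=128
dd if=/dev/zero of=/tmp/mdtest1.img bs=1M count=128
dd if=/dev/zero of=/tmp/mdtest2.img bs=1M count=128
losetup /dev/loop0 /tmp/mdtest0.img
losetup /dev/loop1 /tmp/mdtest1.img
losetup /dev/loop2 /tmp/mdtest2.img
# one mirror missing most of its members, plus a stray mirror holding the rest
mdadm --create /dev/md50 --metadata=0.90 --level=1 --raid-devices=3 /dev/loop0 missing missing
mdadm --create /dev/md51 --metadata=0.90 --level=1 --raid-devices=2 /dev/loop1 /dev/loop2
# practise moving the members across, then dismantling the stray array
mdadm /dev/md51 --fail /dev/loop2 --remove /dev/loop2
mdadm --zero-superblock /dev/loop2
mdadm /dev/md50 --add /dev/loop2
mdadm --stop /dev/md51
mdadm --zero-superblock /dev/loop1
mdadm /dev/md50 --add /dev/loop1
cat /proc/mdstat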