Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: steve288 on January 08, 2013, 08:59:13 PM
-
I thought I had set up a mirrored drive pair on this server when I built it some time ago.
However, now that I look at it, things are a little unclear, or at least I'm confused.
When I go to the GUI I see this ...
┌───── Disk redundancy status as of Tuesday January 8, 2013 12:32:18 ─────┐
│ Current disk status:                                                    │
│                                                                         │
│ Installed disks: sda sdb                                                │
│ Used disks: sdb                                                         │
│ Free disks: sda                                                         │
│                                                                         │
│ There is an unused disk drive in your system. Do you want to add it to  │
│ the existing RAID array(s)?                                             │
│ WARNING: ALL DATA ON THE NEW DISK WILL BE DESTROYED!                    │
│                    < Yes >                   < No >                     │
└─────────────────────────────────────────────────────────────────────────┘
-------------------------------------------------------------
# It does not say the array is broken, but rather that it was never set up, I think? This seems odd to me.
# So next I run "mdadm --query --detail /dev/md1"
-------------------------------------------------------------
/dev/md1:
Version : 00.90.01
Creation Time : Mon Mar 19 07:01:49 2012
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Device Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Jan 8 12:21:26 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 2f77238d:b7c25406:a66815f9:eb7a4e3d
Events : 0.29478
Number Major Minor RaidDevice State
0 0 0 - removed
1 8 17 1 active sync /dev/sdb1
--------------------------------------------------------------
And then mdadm --query --detail /dev/md2
--------------------------------------------------------------
/dev/md2:
Version : 00.90.01
Creation Time : Mon Mar 19 07:01:49 2012
Raid Level : raid1
Array Size : 244091520 (232.78 GiB 249.95 GB)
Device Size : 244091520 (232.78 GiB 249.95 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 8 12:38:51 2013
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 5bea4887:2e3a60be:27565ca9:f2de8ec0
Events : 0.9182277
Number Major Minor RaidDevice State
0 0 0 - removed
1 8 18 1 active sync /dev/sdb2
-----------------------------------------------------------------
# I can see a drive has been removed, which I think means it has failed or been taken offline.
# Then I check /proc/mdstat:
------------------------------------------------------------
[root@camp ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1]
244091520 blocks [2/1] [_U]
md1 : active raid1 sdb1[1]
104320 blocks [2/1] [_U]
-----------------------
I'm not sure, but I think this is telling me that sda is completely offline.
If this data is true:
Should I use the GUI to set up the second drive as a mirror, as if it had never been done before? I assume I will lose no data.
Or should I use a rebuild command? In that case I don't know whether I should use
mdadm --add /dev/md1 /dev/sda1
or
mdadm --add /dev/md2 /dev/sda2
or perhaps neither. I think it has to do with the difference between "clean, degraded" and "active, degraded" in the output of "mdadm --query --detail" for md1 and md2.
Please advise.
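As a way to double-check the mdstat reading above: a minimal sketch (not from the thread) of decoding the [2/1] [_U] fields. [2/1] means 2 devices expected, 1 active; in [_U] each position is one mirror half, 'U' = up, '_' = missing. The sample text is modelled on the output above; on a live system you would read /proc/mdstat directly.

```shell
# Sample modelled on the OP's /proc/mdstat output.
mdstat='md2 : active raid1 sdb2[1]
      244091520 blocks [2/1] [_U]
md1 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]'

# Count arrays with at least one missing mirror member: pull out the
# [U_]-style status strings and count those containing an underscore.
degraded=$(printf '%s\n' "$mdstat" | grep -o '\[[U_][U_]*\]' | grep -c '_')
echo "degraded arrays: $degraded"   # -> degraded arrays: 2
```

Both arrays here show [_U], i.e. the first member (the sda partition) is missing from each mirror, which matches the GUI's "Free disks: sda".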
-
steve288
As your server history is vague even to yourself, it's hard for us to comment on what has or has not happened.
When I go to the GUI I see this ...
│ Free disks: sda
│ There is an unused disk drive in your system. Do you want to add it to
│ the existing RAID array(s)?
│ WARNING: ALL DATA ON THE NEW DISK WILL BE DESTROYED!
│ < Yes >  < No >
Just select Yes to allow the system to add & resync the drive.
If you value your data integrity, you should really check both/all drives IMMEDIATELY:
http://wiki.contribs.org/Monitor_Disk_Health
or download UBCD (google it) and run the drive manufacturer's diagnostic tests.
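For the drive checks mentioned above, a possible starting point is smartmontools. This is only a sketch: the device names /dev/sda and /dev/sdb are taken from the OP's output, and the commands are echoed rather than executed so it is safe to run anywhere; remove the echo wrappers on the real server.

```shell
# smartctl is part of the smartmontools package.
# Echoed for safety -- drop the 'echo's to actually run the checks.
for disk in /dev/sda /dev/sdb; do
    echo "smartctl -H $disk"           # overall SMART health verdict
    echo "smartctl -t short $disk"     # start a short self-test (~2 minutes)
    echo "smartctl -l selftest $disk"  # read the self-test log afterwards
done
```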
-
Hi,
As Mary said, it's hard to know what you did or didn't do.
I presume (but you need to check) that they are identical disks?
Were both disks in the machine when you installed ?
There are various docs in the wiki if you look (use the mediawiki search box on the home page) :
http://wiki.contribs.org/Raid:Manual_Rebuild
http://wiki.contribs.org/AddExtraHardDisk
You could also try mounting the drives with a rescue CD and see whether any partitions were ever created on sdb, or whether it is empty.
B. Rgds
John
-
Were both disks in the machine when you installed ?
My guess is no. This looks like a system installed on one disk, with a second added later but not yet added to the mirror. The second disk has become /dev/sda, and the first has moved from sda to sdb.
-
My guess is no. This looks like a system installed on one disk, with a second added later but not yet added to the mirror. The second disk has become /dev/sda, and the first has moved from sda to sdb.
Won't argue with that ;-) It will be interesting to hear the OP's comments.
B. Rgds
John
-
Won't argue with that
But I will. I didn't read carefully enough and missed this:
0 0 0 - removed
1 8 18 1 active sync /dev/sdb2
-----------------------------------------------------------------
# I can see a drive has been removed, which I think means it has failed or been taken offline.
I don't know exactly when you'll see 'removed', but it's not what you'll see with a failed drive.
I'd suggest the OP runs 'history' to remind him/herself what commands have been run from the root login. It's likely there's been manual intervention here.
-
Steve288, you need to tell us whether the disks are identical.
Run fdisk -l /dev/sda to list the first disk, then fdisk -l /dev/sdb to list the second, and post the results.
You may need to manually create the partition tables and add the disk back into your array.
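For reference, the usual manual rebuild (along the lines of the Raid:Manual_Rebuild wiki page linked earlier) is to clone the good disk's partition table onto the blank disk and re-add its partitions to the arrays. A sketch, assuming sdb is the good disk and sda the empty one, which must be verified with fdisk -l first; the commands are echoed here because the real ones destroy everything on the target disk.

```shell
# GOOD must be the disk currently holding the data, NEW the empty one.
# Echoed for safety -- drop the 'echo's only after double-checking with fdisk -l.
GOOD=/dev/sdb
NEW=/dev/sda
echo "sfdisk -d $GOOD | sfdisk $NEW"  # clone the partition table onto the new disk
echo "mdadm --add /dev/md1 ${NEW}1"   # re-add the boot partition to md1
echo "mdadm --add /dev/md2 ${NEW}2"   # re-add the main partition to md2
echo "watch cat /proc/mdstat"         # then watch the resync complete
```

This is effectively what the GUI's "Yes" answer does for you, so on SME Server the admin console route Mary suggested is the simpler option.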