I thought I had set up a mirrored drive on this server when I built it some time ago.
However, now that I look at it, it is a little unclear, or at least I'm confused.
When I go to the GUI I see this ...
┌───── Disk redundancy status as of Tuesday January 8, 2013 12:32:18 ───────┐
│ Current disk status:                                                      │
│                                                                           │
│ Installed disks: sda sdb                                                  │
│ Used disks: sdb                                                           │
│ Free disks: sda                                                           │
│ There is an unused disk drive in your system. Do you want to add it to    │
│ the existing RAID array(s)?                                               │
│ WARNING: ALL DATA ON THE NEW DISK WILL BE DESTROYED!                      │
│                    < Yes >                    < No >                      │
└───────────────────────────────────────────────────────────────────────────┘
-------------------------------------------------------------
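Before trusting the GUI's claim that sda is "free", I was going to check whether sda's partitions ever carried md superblocks at all. Here is a dry-run sketch of what I had in mind; the run() wrapper only prints each command rather than executing it, and /dev/sda1 and /dev/sda2 are my assumption about how the disk would be partitioned:

```shell
# Dry-run sketch (nothing here touches the disks): collect the commands
# I'd use to check whether sda ever carried md superblocks.
# /dev/sda1 and /dev/sda2 are assumptions about the partition layout.
run() { echo "+ $*"; }      # just print the command instead of running it

plan=$(
    run mdadm --examine /dev/sda1   # would show an md superblock, if any
    run mdadm --examine /dev/sda2
    run fdisk -l /dev/sda           # would show whether sda is partitioned at all
)
echo "$plan"
```

If `mdadm --examine` reports superblocks on sda, the mirror presumably existed and fell apart; if not, maybe it really never was set up.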
# It does not say the array is broken, but rather that mirroring was never set up, I think? That seems odd to me.
# So next I run "mdadm --query --detail /dev/md1"
-------------------------------------------------------------
/dev/md1:
Version : 00.90.01
Creation Time : Mon Mar 19 07:01:49 2012
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Device Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Jan 8 12:21:26 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 2f77238d:b7c25406:a66815f9:eb7a4e3d
Events : 0.29478
Number Major Minor RaidDevice State
0 0 0 - removed
1 8 17 1 active sync /dev/sdb1
--------------------------------------------------------------
And then mdadm --query --detail /dev/md2
--------------------------------------------------------------
/dev/md2:
Version : 00.90.01
Creation Time : Mon Mar 19 07:01:49 2012
Raid Level : raid1
Array Size : 244091520 (232.78 GiB 249.95 GB)
Device Size : 244091520 (232.78 GiB 249.95 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 8 12:38:51 2013
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 5bea4887:2e3a60be:27565ca9:f2de8ec0
Events : 0.9182277
Number Major Minor RaidDevice State
0 0 0 - removed
1 8 18 1 active sync /dev/sdb2
-----------------------------------------------------------------
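From reading the mdadm man page, I *think* the recovery, assuming sda itself is healthy, would be to copy sdb's partition table onto sda and then add each partition back to its array. Again this is only a dry-run sketch that prints the steps instead of running them, and the sda1/sda2 names (mirroring sdb1/sdb2) are my assumption; I have not run any of it:

```shell
# Dry-run sketch of the recovery I think the man page describes:
# replicate sdb's partition table onto sda, then re-add each half.
# run() only prints; sda1/sda2 mirroring sdb1/sdb2 is an assumption.
run() { echo "+ $*"; }

steps=$(
    run 'sfdisk -d /dev/sdb | sfdisk /dev/sda'   # clone the partition table
    run mdadm --add /dev/md1 /dev/sda1           # rebuild the small array
    run mdadm --add /dev/md2 /dev/sda2           # rebuild the big array
)
echo "$steps"
```

Is that roughly the right procedure, or am I off base?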
# I can see a drive has been removed, which I think means it has failed or been taken offline.
# Then I check /proc/mdstat
------------------------------------------------------------
[root@camp ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1]
244091520 blocks [2/1] [_U]
md1 : active raid1 sdb1[1]
104320 blocks [2/1] [_U]
-----------------------
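As a sanity check on my reading of this, I counted the degraded arrays with a little shell snippet, using the pasted output above as sample data (my understanding is that [2/1] [_U] means only 1 of 2 mirror members is present; on the live box I would pipe from cat /proc/mdstat instead):

```shell
# Count degraded arrays in mdstat-style output. The sample data is the
# /proc/mdstat output pasted above; [_U] marks a missing first member.
mdstat='md2 : active raid1 sdb2[1]
      244091520 blocks [2/1] [_U]
md1 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]'

degraded=$(printf '%s\n' "$mdstat" | grep -c '\[_U\]')
echo "$degraded"   # prints 2: both md1 and md2 are missing a mirror
```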
I'm not sure, but I think this is telling me that sda is completely offline.
If this data is true:
Should I use the GUI to set up the second drive as a mirror, as if it had never been done before? I assume I will lose no data.
Or should I use a rebuild command? On this I don't know if I should use
mdadm --add /dev/md1 /dev/sdb2
or
mdadm --add /dev/md2 /dev/hdb2
or perhaps neither. I think it has to do with the difference between "clean, degraded" and "active, degraded" in the output of "mdadm --query --detail" for md1 and md2.
Please advise.