Koozali.org: home of the SME Server
		Obsolete Releases => SME Server 7.x => Topic started by: Mjohnson on September 23, 2006, 06:03:40 AM
		
			
			- 
				Received the following email message:
 This is an automatically generated mail message from mdadm
 running on server
 
 A DegradedArray event had been detected on md device /dev/md2.
 
 
 Output for the command mdadm --query --detail /dev/md[12] was:
 /dev/md1:
 Version : 00.90.01
 Creation Time : Tue Aug  1 15:08:30 2006
 Raid Level : raid1
 Array Size : 104320 (101.88 MiB 106.82 MB)
 Device Size : 104320 (101.88 MiB 106.82 MB)
 Raid Devices : 2
 Total Devices : 2
 Preferred Minor : 1
 Persistence : Superblock is persistent
 
 Update Time : Fri Sep 22 22:13:48 2006
 State : clean
 Active Devices : 2
 Working Devices : 2
 Failed Devices : 0
 Spare Devices : 0
 
 
 Number   Major   Minor   RaidDevice State
 0       3        1        0      active sync   /dev/hda1
 1       3       65        1      active sync   /dev/hdb1
 UUID : 3bfddc38:78c98168:51016b85:bdeaa0a7
 Events : 0.555
 /dev/md2:
 Version : 00.90.01
 Creation Time : Tue Aug  1 15:07:11 2006
 Raid Level : raid1
 Array Size : 117113728 (111.69 GiB 119.92 GB)
 Device Size : 117113728 (111.69 GiB 119.92 GB)
 Raid Devices : 2
 Total Devices : 1
 Preferred Minor : 2
 Persistence : Superblock is persistent
 
 Update Time : Fri Sep 22 22:57:43 2006
 State : clean, degraded
 Active Devices : 1
 Working Devices : 1
 Failed Devices : 0
 Spare Devices : 0
 
 
 Number   Major   Minor   RaidDevice State
 0       0        0       -1      removed
 1       3       66        1      active sync   /dev/hdb2
 UUID : 324652ca:19f16aaf:6d3b5367:b6ad6dda
 Events : 0.816152
 
 
 What I am confused about is that md1 appears to be OK, but md2 appears to have failed.  They are different partitions on the same drives, correct?
 Do I replace the drive?
 
 Many thanks.
- 
				First of all, from what I can see you have both your drives hanging off the same IDE channel, which is not a very good thing. It is recommended that your disk drives be set as the primary device on separate channels (/dev/hda and /dev/hdc). Looking at your problem, I don't think the drive has failed, because /dev/hda1 is still a member of /dev/md1. For some reason /dev/hda2 has gone MIA from /dev/md2. The output actually says that it has been removed.
 Number Major Minor RaidDevice State
 0 0 0 -1 removed
 1 3 66 1 active sync /dev/hdb2
 UUID : 324652ca:19f16aaf:6d3b5367:b6ad6dda
 Events : 0.816152
 
 Maybe the array is rebuilding itself after a dirty shutdown? Have a look at:
 # cat /proc/mdstat
 to see if it is reconstructing. If not, try hot-adding the partition back into the array:
 # mdadm /dev/md2 -a /dev/hda2
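 The check above can also be scripted. A minimal sketch, using made-up /proc/mdstat text that matches the state reported above (the device names and block counts are illustrative): each member slot shows as U (up) or _ (missing), so an underscore inside the status brackets marks a degraded array.

```shell
#!/bin/sh
# Illustrative /proc/mdstat excerpt matching the situation above:
# md1 is healthy ([UU]), md2 is running on one member ([_U]).
mdstat='md1 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
md2 : active raid1 hdb2[1]
      117113728 blocks [2/1] [_U]'

# An "_" inside the bracketed member status means a slot is missing,
# so print the name of each array whose status line contains one.
degraded=$(printf '%s\n' "$mdstat" | grep -B1 '\[.*_.*\]' | awk '/^md/ {print $1}')
echo "degraded: $degraded"
```

 On a live system you would pipe the real `cat /proc/mdstat` into the same filter instead of the sample text.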
 Lloyd
- 
				
 # cat /proc/mdstat
 to see if it is reconstructing.
 Lloyd
 
 
 I like this one...
 watch -n .1 cat /proc/mdstat
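 While the array is rebuilding, /proc/mdstat also prints a recovery line with a progress percentage. A minimal sketch pulling that figure out of sample text (the numbers below are made up, not taken from this thread):

```shell
#!/bin/sh
# Illustrative recovery line as /proc/mdstat prints it during a rebuild.
line='      [==>..................]  recovery = 12.6% (14790400/117113728) finish=93.1min speed=18312K/sec'

# Extract just the percentage from the "recovery = N%" field.
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "rebuild progress: ${pct}%"
```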
- 
				Gentlemen,
 
 Thank you for your very helpful replies.
 
 The mirror is recovering now.  I am curious as to what went askew here.  I will take another look under the hood and see if anything jumps out at me.
 
 MJ