Koozali.org: home of the SME Server
		Obsolete Releases => SME Server 7.x => Topic started by: tviles on August 25, 2009, 03:43:10 AM
		
			
			- 
After doing updates last Sunday, the server has gone into a degraded array. No, I am not saying the updates did that. I applied the bind updates and the latest batch that just came out. Now I am getting this email:
From: mdadm monitoring [mailto:root@XXXX.local] 
Sent: Monday, August 24, 2009 3:21 PM
To: admin_raidreport@XXXX.local
Subject: DegradedArray event on /dev/md2:XXXX.XXXX.local
This is an automatically generated mail message from mdadm running on .local.
A DegradedArray event has been detected on md device /dev/md2.
login as: root
root@192.168 password:
Last login: Mon Aug 24 15:22:35 2009
[root@ ~]# fdisk -l|more
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/sda: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14        4427    35455455   fd  Linux raid autodetect
Disk /dev/sdb: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14        4427    35455455   fd  Linux raid autodetect
Disk /dev/sdc: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdc2              14        4427    35455455   fd  Linux raid autodetect
Disk /dev/sdd: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdd2              14        4427    35455455   fd  Linux raid autodetect
Disk /dev/sde: 73.5 GB, 73543163904 bytes
255 heads, 63 sectors/track, 8941 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1          13      104391   fd  Linux raid autodetect
/dev/sde2              14        8941    71714160   fd  Linux raid autodetect
Disk /dev/sdf: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        8924    71681998+  83  Linux
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2: 145.2 GB, 145224630272 bytes
2 heads, 4 sectors/track, 35455232 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-0: 143.0 GB, 143076098048 bytes
2 heads, 4 sectors/track, 34930688 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-1: 2080 MB, 2080374784 bytes
2 heads, 4 sectors/track, 507904 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      121601   976760001   83  Linux
Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1      121601   976760001    c  W95 FAT32 (LBA)
cat /proc/mdstat
Personalities : [raid1] [raid5]
md2 : active raid5 sdb2[0] sdd2[3] sdc2[2] sda2[1]
      141820928 blocks level 5, 256k chunk, algorithm 2 [5/4] [UUUU_]
md1 : active raid1 sdb1[0] sde1[4] sdd1[3] sdc1[2] sda1[1]
      104320 blocks [5/5] [UUUUU]
unused devices: <none>
mdadm --detail --scan --verbose
ARRAY /dev/md2 level=raid5 num-devices=5 UUID=b2e80e6f:02249760:d23cf111:0de63ca7
   devices=/dev/sdb2,/dev/sda2,/dev/sdc2,/dev/sdd2
ARRAY /dev/md1 level=raid1 num-devices=5 UUID=e0674b64:8e84c545:d4f50a42:e61623f1
   devices=/dev/sdb1,/dev/sda1,/dev/sdc1,/dev/sdd1,/dev/sde1
df -T
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/mapper/main-root
              ext3   137530456  33893516  96650804  26% /
/dev/md1      ext3      101018     45823     49979  48% /boot
none         tmpfs     1037412         0   1037412   0% /dev/shm
/dev/sdf1     ext3    70557052     86088  66886868   1% /mnt/jeremy
/dev/sdh1     ext3   961432072  96794200 815799872  11% /media/usbdisk1
/dev/sdg1     ext3   961432072  92081644 820512428  11% /mnt/tracy
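One more thing I can run if it is useful. The mdstat line [5/4] [UUUU_] and the scan output both say md2 is missing one of its five members, and this should name the empty slot (stock mdadm, nothing SME-specific):

# per-member state for md2; the dropped slot should show as "removed"
mdadm --detail /dev/md2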
See anything?
Let me know if you need anything else. I appreciate the help.
			 
			
			- 
				Does this help: http://wiki.contribs.org/Raid#Resynchronising_a_Failed_RAID
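The short version of that page, assuming /dev/sde2 is the member that dropped out of md2 (it is the only "fd" raid partition missing from your mdadm scan output, so double-check the device name before running anything):

# re-add the dropped partition to the degraded array
mdadm --add /dev/md2 /dev/sde2

# watch the rebuild until md2 shows [5/5] [UUUUU] again
watch -n 5 cat /proc/mdstat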
			
 
			
			- 
Thank you very much, Cactus.