Hi, I upgraded my server hardware (new box) and migrated the data using the standard SME backup feature (SME 7.5). I set up the new box with a fresh SME 7.5 OS install with all the latest updates.
The main difference between the two setups is that the old box had IDE RAID (2 × 80 GB hard disks) and the new one has 2 × 80 GB SATA disks.
I upgraded the hardware because I was starting to see performance problems and some hardware failures.
The migration went well, with no data issues, and the server still works flawlessly with no apparent performance problems. After a day, however, I began to get RAID error notifications showing the following:
A DegradedArray event has been detected on md device /dev/md1.
A DegradedArray event has been detected on md device /dev/md2.
I checked the admin console and got the following information from the RAID Management Menu:
┌─────Disk redundancy status as of Friday December 24, 2010 09:54:36───────┐
│ Current RAID status: │
│ │
│ Personalities : [raid1] │
│ md2 : active raid1 sda2[0] │
│ 78043648 blocks [2/1] [U_] │
│ md1 : active raid1 sda1[0] │
│ 104320 blocks [2/1] [U_] │
│ unused devices: <none> │
│ │
│ │
│ The free disk count must equal one. │
│ │
│ Manual intervention may be required. │
│ │
│ Current disk status: │
│ │
│ Installed disks: sdc sdb sda │
│ Used disks: sda │
├─────────────────────────────────────────────────────────────────── 94% ─┤
│ < Next > │
└──────────────────────────────────────────────────────────────────────────┘
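If I'm reading the console output right, the "[2/1] [U_]" part of each md line is the key: the array wants two mirror devices but only one is active, and the "_" marks the missing half. A small shell sketch (using the md2 line pasted above) to decode those fields:

```shell
# Decode the "[wanted/active] [U_]" fields from the md2 line above.
status="78043648 blocks [2/1] [U_]"
wanted=$(echo "$status" | sed -n 's/.*\[\([0-9]*\)\/[0-9]*\].*/\1/p')
active=$(echo "$status" | sed -n 's/.*\[[0-9]*\/\([0-9]*\)\].*/\1/p')
echo "wanted=$wanted active=$active"
if [ "$active" -lt "$wanted" ]; then
  echo "array is degraded: only $active of $wanted mirror devices active"
fi
```

Both md1 and md2 show [2/1] [U_], so (if this reading is correct) both arrays are running on sda alone.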
Other information:
[root@server ~]# fdisk -l | more
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 9729 78043770 fd Linux raid autodetect
Disk /dev/sdb: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 13 104391 fd Linux raid autodetect
/dev/sdb2 14 9726 78019672+ fd Linux raid autodetect
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2: 79.9 GB, 79916695552 bytes
2 heads, 4 sectors/track, 19510912 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-0: 78.0 GB, 78014054400 bytes
2 heads, 4 sectors/track, 19046400 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-1: 1879 MB, 1879048192 bytes
2 heads, 4 sectors/track, 458752 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/sdc: 250.0 GB, 250058268160 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 24283 195053166 83 Linux
/dev/sdc2 24284 30401 49142835 83 Linux
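One thing I noticed comparing the numbers in the fdisk output: sdb reports 80000000000 bytes against sda's 80026361856, and sdb2 (78019672 blocks) is actually smaller than md2's current size (78043648 blocks). If I understand RAID1 correctly, a member partition must be at least as large as the array, so sdb2 as currently laid out could not simply be added to md2. A quick check of the arithmetic:

```shell
# Block counts copied from the mdstat and fdisk output above (1K blocks).
MD2_BLOCKS=78043648    # current size of /dev/md2
SDB2_BLOCKS=78019672   # size of /dev/sdb2
if [ "$SDB2_BLOCKS" -lt "$MD2_BLOCKS" ]; then
  echo "sdb2 is short by $((MD2_BLOCKS - SDB2_BLOCKS)) blocks"
fi
```

About 24 MB short, if my reading is right, which might be why sdb was never joined to the mirrors.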
From this information, is it correct to assume:
1. There is no redundant RAID setup over two disks?
2. Only one disk is actually being used?
3. Is the fault related to the change from IDE to SATA drives?
I've had a look through the RAID documentation but don't want to go any further until I have a clear understanding of the situation. Can anyone enlighten me as to what is happening on my server and what the solution is?
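For reference, the generic mdadm recovery sequence described in the RAID documentation I read (not SME-specific, and I have not run any of it) seems to be: copy sda's partition table onto sdb, then add the new partitions back into the degraded arrays. I'm only printing the commands here for review, since sfdisk would wipe sdb's existing layout:

```shell
# Generic mdadm re-add sequence, printed for review only -- NOT executed.
# sfdisk -d dumps sda's partition table; piping it into sfdisk /dev/sdb
# would overwrite sdb's current (slightly smaller) partitions.
plan='sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdb2
cat /proc/mdstat        # watch the resync progress'
echo "$plan"
```

Is this roughly the right procedure here, or does SME expect the disk to be re-added through the admin console instead?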
Thanks for any helpful input. (Season's greetings to all!)