Koozali.org: home of the SME Server
Legacy Forums => Experienced User Forum => Topic started by: mikepdock on January 23, 2005, 01:31:17 PM
-
Help !!!
Since last night I have been receiving the following message every 15 minutes, concerning the software RAID on my SME 6.0 server:
Current configuration is:
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hda3[0] hdc3[1]
262016 blocks [2/2] [UU]
md1 : active raid1 hda2[0](F) hdc2[1]
79678272 blocks [2/1] [_U]
md0 : active raid1 hda1[0] hdc1[1]
102208 blocks [2/2] [UU]
unused devices: <none>
Last known good configuration was:
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hda3[0] hdc3[1]
262016 blocks [2/2] [UU]
md1 : active raid1 hda2[0] hdc2[1]
79678272 blocks [2/2] [UU]
md0 : active raid1 hda1[0] hdc1[1]
102208 blocks [2/2] [UU]
unused devices: <none>
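(In the listings above, a failed member is tagged "(F)" and a degraded mirror shows an underscore in its status field - compare "[2/1] [_U]" on md1 with the healthy "[2/2] [UU]". A minimal sketch of a helper that spots this in an mdstat-style file; the script and its name are hypothetical, not part of SME Server:)

```shell
# check_mdstat: hypothetical helper (not shipped with SME Server).
# Reads an mdstat-format file and reports whether any array is
# degraded: a failed member is tagged "(F)", and a degraded mirror
# shows "_" in its [UU] status field, e.g. "[2/1] [_U]".
check_mdstat() {
    if grep -E '\(F\)|\[[U_]*_[U_]*\]' "$1" >/dev/null; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}
```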
Just before this started, Clam AntiVirus scanned the whole system, but without finding any problems - CAN SOMEBODY HELP ME?
Thanx & Cheers,
Marc
-
Hi,
Try reading this:
http://mirror.contribs.org/smeserver/contribs/dmay/smeserver/5.x/contrib/raidmonitor/raid-recovery-howto.html
/Jesper
-
... so, you think a drive MUST be damaged? Does that message really mean that one of the drives MUST have hardware errors???
Cheers,
M.
-
... so, you think a drive MUST be damaged? Does that message really mean that one of the drives MUST have hardware errors???
IME, the answer is almost always "No, the drive is fine". I am only speaking from my own experience of installing maybe 25 RAID-1 SME servers, but I have seen this several times & it's *always* been on servers that were configured with hda & hdc. For the last year, all my servers that I upgraded from earlier versions to SME 6 got their disks re-configured as hda & hdb and so far I haven't seen any RAID problems.
As a temporary measure, you can often rebuild the RAID array by doing:
# /sbin/raidhotremove /dev/md1 /dev/hda2
# /sbin/raidhotadd /dev/md1 /dev/hda2
then
# cat /proc/mdstat
should show it rebuilding.
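While the array is resyncing, /proc/mdstat typically includes a progress line along the lines of "recovery =  9.5% (7629312/79678272) finish=28.5min". A hypothetical one-liner to pull out just the percentage (the exact line format can vary between kernel versions, so treat this as a sketch):

```shell
# rebuild_pct: hypothetical helper that extracts the rebuild
# percentage from an mdstat-style file. Assumes a progress line of
# the form "... recovery =  9.5% (done/total) ...".
rebuild_pct() {
    sed -n 's/.*recovery *= *\([0-9.]*\)%.*/\1/p' "$1"
}
```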
It is only a temporary measure - it will fail again in time...
-
I have experienced similar problems with a Raid1 server with HDDs configured as hda and hdc.
I have 2 questions:
1. Why should the HDD configuration make a difference?
2. If each HDD is on a separate IDE channel (as primary in each case) how do I change the HDD configuration to hda and hdb as suggested?
I would be fascinated to know the answers - if only to avoid a panic every few months when RaidMonitor reports an HDD failure!
Nick
-
1. Why should the HDD configuration make a difference?
I have no hard evidence, but my suspicion is that it's a subtle timing bug that shows up under heavy load. Putting the disks on the same controller channel means that it doesn't trip this timing bug.
2. If each HDD is on a separate IDE channel (as primary in each case) how do I change the HDD configuration to hda and hdb as suggested?
A clean install is always the best way IMHO, but if you're feeling brave you could look at the RAID recovery howto & fiddle with it.