Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: Funkenschuster on February 15, 2013, 03:55:14 PM
-
Hello!
Can anybody help me?
My server doesn't boot. Here is a picture of the screen:
https://skydrive.live.com/redir?resid=C5F7056D0A5071CA!693
I didn't change anything on the hardware.
-
Effectively your array md2 has only 1 of 4 disks; that is not enough to start the logical volume /main.
You can try to boot another kernel through grub, if you get that far, or try to back up your data with a SystemRescueCd and reinstall your system.
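For example, from a SystemRescueCd shell you could confirm this diagnosis like so (the volume group name "main" is the SME default; adjust if yours differs):
cat /proc/mdstat   # shows which arrays assembled and with how many members
vgscan             # checks whether LVM still sees the "main" volume group
lvscan             # lists the logical volumes and whether they can be activated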
-
Does that mean that 3 hard drives are physically damaged (I can't believe that), that other hardware is damaged, or is it a data error?
I tried to start rescue mode from the 7.6 CD, but an error message occurs there too: "You don't have any Linux partitions."
I can select these kernels in grub; can you recommend one, or should I just try which one works?:
SME Server (2.6.9-89.31.1.ELsmp)
SME Server (2.6.9-89.31.1.EL)
SME Server (2.6.9-89.0.25.ELsmp)
SME Server-up (2.6.9-89.0.25.EL)
-
Does that mean that 3 hard drives are physically damaged (I can't believe that), that other hardware is damaged, or is it a data error?
Yes...
Do you read root's/admin's emails?
Reinstall your server and restore from your backup.
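(As a side note: on a stock install, mdadm normally mails these failure warnings to root/admin. A quick sanity check that alerting works, assuming mdadm's usual config file location, would be:)
grep MAILADDR /etc/mdadm.conf             # where mdadm sends failure alerts
mdadm --monitor --scan --oneshot --test   # sends a test alert mail for each array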
-
Yes, but there were no mails with warnings.
Could it be that it would work in another system?
-
Yes, but there were no mails with warnings.
Could it be that it would work in another system?
Mmmhhh... quite strange.
In any case, now you can only reinstall and restore.
-
You can try something...; this is your last chance.
Burn an iso of SystemRescueCd and start the computer from it. At the prompt:
- first, verify whether your raid is started:
cat /proc/mdstat
- then activate the LVM:
vgchange -ay
If you are a lucky man, you can then access your data:
mkdir /mnt/recup
mount /dev/main/root /mnt/recup
ls /mnt/recup
http://geekeries.de-labrusse.fr/?p=1287
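If the mount succeeds, a sensible next step is to copy everything onto another disk before touching the array again; a rough sketch (the /dev/sde1 external disk is only an assumption, mount your own destination):
mkdir /mnt/usb
mount /dev/sde1 /mnt/usb                  # an external backup disk (name assumed)
rsync -a /mnt/recup/ /mnt/usb/recup/      # copy the whole tree, preserving permissions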
-
Quote from: Funkenschuster
Does that mean that 3 hard drives are physically damaged (I can't believe that), that other hardware is damaged, or is it a data error?
Anything is possible. I have had two drives in a RAID1 fail at the same time (nasty!).
I would test all those drives before continuing to use them.
Usually you could do this with the SME Install CD: boot into Rescue Mode and then run smartctl;
see
http://wiki.contribs.org/Monitor_Disk_Health
or alternatively, if that is not possible, download the Ultimate Boot CD (UBCD) iso from the Net and make a bootable CD.
Then run tests on every drive using the appropriate drive manufacturer's diagnostic test program.
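For example, a typical smartctl session in Rescue Mode might look like this (device names are assumptions; repeat for each drive):
smartctl -H /dev/sda        # quick overall health verdict
smartctl -a /dev/sda        # full SMART attributes and error log
smartctl -t long /dev/sda   # start a long self-test; read the result later with -a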
-
You can try something...; this is your last chance.
Burn an iso of SystemRescueCd and start the computer from it. At the prompt:
- first, verify whether your raid is started:
cat /proc/mdstat
This is the output when I entered cat /proc/mdstat:
md2 : inactive sdd2[2](S) sdb2[1](S) sdc2[0](S)
      5860222464 blocks
md1 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      104320 blocks [4/4] [UUUU]
unused devices: <none>
Should I proceed as stephdl described?
-
It was your last chance; as you can see, /dev/md2 is out...
But there is something I don't understand: you have 4 disks in a raid1 array. What is the goal? We generally have up to 3 disks in a raid1, but never four.
Have you played with your raid configuration?
What is the output of:
mdadm --detail /dev/md2
mdadm --detail /dev/md1
-
It was your last chance; as you can see, /dev/md2 is out...
But there is something I don't understand: you have 4 disks in a raid1 array. What is the goal? We generally have up to 3 disks in a raid1, but never four.
Have you played with your raid configuration?
What is the output of:
I did not change the automatic settings, so there was a hot spare (before the crash).
Returns:
mdadm --detail /dev/md2
mdadm: md device /dev/md2 does not appear to be active
mdadm --detail /dev/md1
/dev/md1:
Version: 0.90
Creation Time: Sat Jan 28 10:00:05 2012
Raid Level: raid1
Array Size: 104320
Used Dev Size: 104320
Raid Devices: 4
Total Devices: 4
Preferred Minor: 1
Persistence: Superblock is persistent
Update Time: Sat Feb 16 18:05:02 2013
State: clean
Active Devices: 4
Working Devices: 4
Failed Devices: 0
Spare Devices: 0
UUID: a0be1014:b0f13144:d98e56b3:035dadd0
Events: 0.1826
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
I didn't make changes. It was configured with one "hot spare".
-
Active Devices: 4
Working Devices: 4
Failed Devices: 0
Spare Devices: 0
I can't see a spare disk here on your /dev/md1; all your disks are in the array as active disks... very strange.
I don't see anything more to do to save your data... I believe it is done.
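(For the record, before writing md2 off completely, one last-resort trick from a rescue shell is to compare the event counters of the remaining members and force an assembly if they are close. A sketch only, with member names taken from the mdstat output above, and no guarantee the data is consistent afterwards:)
mdadm --examine /dev/sdb2 | grep Events   # repeat for sdc2 and sdd2
mdadm --stop /dev/md2                     # release the inactive array first
mdadm --assemble --force --run /dev/md2 /dev/sdb2 /dev/sdc2 /dev/sdd2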