Koozali.org: home of the SME Server
Contribs.org Forums => Koozali SME Server 10.x => Topic started by: umbi on November 14, 2021, 11:49:47 PM
-
Hello, before I start to migrate the V9 backup to V10 I tested the RAID1.
I installed SME V10 with two identical 512 GB SSDs.
When everything worked fine, I installed all the updates from the web admin panel.
I shut the V10 server down cleanly with the shutdown command in the panel.
Then I rebooted the V10 server with one disk removed.
Then I shut it down again by command and put the second disk back in, to see if the rebuild would start. I get these errors in the log:
Nov 14 23:25:42 my-v10-server kernel: [ 31.307952] xor: measuring software checksum speed
Nov 14 23:25:42 my-v10-server kernel: [ 31.317005] prefetch64-sse: 9076.000 MB/sec
Nov 14 23:25:42 my-v10-server kernel: [ 31.327005] generic_sse: 8080.000 MB/sec
Nov 14 23:25:42 my-v10-server kernel: [ 31.327009] xor: using function: prefetch64-sse (9076.000 MB/sec)
Nov 14 23:25:42 my-v10-server kernel: [ 31.366012] raid6: sse2x1 gen() 2976 MB/s
Nov 14 23:25:42 my-v10-server kernel: [ 31.383024] raid6: sse2x2 gen() 3683 MB/s
Nov 14 23:25:42 my-v10-server kernel: [ 31.400012] raid6: sse2x4 gen() 6898 MB/s
Nov 14 23:25:42 my-v10-server kernel: [ 31.400020] raid6: using algorithm sse2x4 gen() (6898 MB/s)
Nov 14 23:25:42 my-v10-server kernel: [ 31.400023] raid6: using ssse3x2 recovery algorithm
Nov 14 23:25:42 my-v10-server kernel: [ 31.585577] Btrfs loaded, crc32c=crc32c-generic
Nov 14 23:25:42 my-v10-server kernel: [ 31.618573] fuse init (API version 7.23)
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sda1: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sda2: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sdb1: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/50mounted-tests on /dev/sdb2
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/05efi on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 05efi: debug: Not on UEFI platform
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/10freedos on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 10freedos: debug: /dev/md0 is not a FAT partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/10qnx on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 10qnx: debug: /dev/md0 is not a QNX4 partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/20macosx on mounted /dev/md0
Nov 14 23:25:44 my-v10-server macosx-prober: debug: /dev/md0 is not an HFS+ partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/20microsoft on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 20microsoft: debug: /dev/md0 is not a MS partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/30utility on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 30utility: debug: /dev/md0 is not a FAT partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/40lsb on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/70hurd on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/80minix on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/83haiku on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 83haiku: debug: /dev/md0 is not a BeFS partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/90linux-distro on mounted /dev/md0
Nov 14 23:25:45 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/90solaris on mounted /dev/md0
Nov 14 23:25:45 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/50mounted-tests on /dev/md1
Nov 14 23:25:45 my-v10-server root: 50mounted-tests: debug: skipping LVM2 Volume Group on /dev/md1
Nov 14 23:25:45 my-v10-server root: os-prober: debug: /dev/mapper/main-swap: is active swap
-----------------------
When I go to the admin panel it shows:
raid1
md0: active raid1 sdb1[1] sda1[0]
md1: active raid1 sda2[0]
Only some of the RAID members are faulty.
Manual intervention may be necessary. (translated from German)
---------------------
mdadm sent me:
This is an automatically generated mail message from mdadm running on www.mywebsite.com
A DegradedArray event had been detected on md device /dev/md/1.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      510976 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid1 sda2[0]
      976116736 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>
What can I do?
Thank you in advance - Umbi
-
Exactly what it says there: manual intervention.
What to do?
See the wiki and search for RAID.
-
Thank you for the fast answer, Jean-Philippe.
I found this here: https://wiki.koozali.org/Raid
But I'm scared of doing something wrong, as my RAID knowledge isn't great.
Is this maybe the smoking gun?
To add the physical partition back and rebuild the RAID partition:
[root@sme]# mdadm --add /dev/md1 /dev/hda2 (or sdb2 ?)
-
Do you have a partition called:
/dev/hda2
?
Don't interpret things so literally - you have to adapt it to your own hardware.
READ your logs and READ your mdstat file.
But I'm scared of doing something wrong, as my RAID knowledge isn't great.
If this is only a test machine, what are you worried about?
It is a good time to learn.
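A read-only sketch of what that means in practice (the device names are only an illustration taken from your mdstat output above - check them against your own system):
[root@sme]# cat /proc/mdstat                      # [U_] on md1 means one mirror member is missing
[root@sme]# mdadm --detail /dev/md1               # the missing slot is listed as "removed"
[root@sme]# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # shows which partition (here sdb2) is not part of any md device
None of those commands change anything, so they are safe to run before deciding on the --add.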
-
Thank you for your answer.
No, it's not only a test machine, I'm preparing to migrate V9 to V10 tonight - ohhh... :-)
So I tried this command:
[root@sme]# mdadm --add /dev/md1 /dev/sdb2
It worked and the disk came back again... the RAID state is perfect now.
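(In case it helps someone else: while the resync is running you can watch the progress with the standard tools, for example:
[root@sme]# watch -n 10 cat /proc/mdstat      # shows recovery progress and an estimated finish time
[root@sme]# mdadm --detail /dev/md1           # reports "Rebuild Status : NN% complete" until the mirror is clean again)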
thank you :-)
greez
umbi
-
No, it's not only a test machine
OK.
before I start to migrate the V9 backup to V10 I tested the RAID1.
But you were testing......
I wouldn't be testing that just before an upgrade!!!
preparing to migrate V9 to V10 tonight
Join the club!
-
So I tried this command:
[root@sme]# mdadm --add /dev/md1 /dev/sdb2
It worked and the disk came back again... the RAID state is perfect now.
thank you :-)
greez
umbi
Rejoice and toast the gods :-) Along the way you just increased your knowledge by a goodly amount.
-
No, it's not only a test machine, I'm preparing to migrate V9 to V10 tonight - ohhh... :-)
My 2€c: use a VM. Nowadays you can create one with as many disks as you like, even on a laptop (you can add 4 or more thin, dynamically allocated disks) and then play... learn to break it and how to repair it ;-)
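For example with KVM/QEMU (just one way to do it - VirtualBox or similar works just as well), thin disks cost almost nothing to create:
[root@laptop]# qemu-img create -f qcow2 sme-disk1.qcow2 512G     # thin: the file only grows as data is written
[root@laptop]# qemu-img create -f qcow2 sme-disk2.qcow2 512G
Attach both to the VM, install SME V10 with RAID1, then detach one disk to practise exactly the recovery described above.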
-
and then play... learn to break it and how to repair it ;-)
This :-) Might add it to the wiki :-)