Koozali.org: home of the SME Server

Raid problem

Offline gromit60

Raid problem
« on: March 25, 2024, 12:58:52 PM »
Hi! I have a problem with the RAID on an SME 10.1 server with the latest updates.
I have three hard disks (WD Red 2 TB) in a RAID 1 (sda+sdb) plus a spare (sdc). Some time ago one of them (sdb) failed, so it was replaced by sdc. Now I want to replace sdb with another disk (sdd) acting as a new spare. The problem is that when I physically detach sdb, the server doesn't start. Looking at the boot order in the BIOS, there is an entry called "sme server"; that entry disappears if I detach sdb.
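
For context, a typical way to bring a blank disk in as a new spare on a layout like this is sketched below. It is an assumption, not something the thread confirms: it takes /dev/sdd as the new disk, clears any leftover partitions on it (the lsblk output later in the thread shows a stray 16M partition), clones the GPT from a healthy member, and adds each partition to its array.

Code:
# Sketch only - verify device names with lsblk before running anything.
# Clear leftover signatures/partitions on the new disk:
wipefs -a /dev/sdd

# Replicate the GPT from a healthy member (target first, source last),
# then randomise the copied GUIDs so the disks stay distinguishable:
sgdisk -R /dev/sdd /dev/sda
sgdisk -G /dev/sdd

# Add each partition to its array; since the arrays are already
# complete ([2/2]), mdadm keeps the new members as hot spares:
mdadm --add /dev/md0 /dev/sdd1
mdadm --add /dev/md9 /dev/sdd2
mdadm --add /dev/md1 /dev/sdd3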

Online Jean-Philippe Pialasse

  • aka Unnilennium
    • http://smeserver.pialasse.com
Re: Raid problem
« Reply #1 on: March 25, 2024, 04:32:12 PM »
Hardware RAID or software RAID?

what is the output of
Code:
cat /proc/mdstat

also
Code:
lsblk

and
Code:
file -s /dev/sda
file -s /dev/sdb
file -s /dev/sdc

Offline gromit60

Re: Raid problem
« Reply #2 on: March 25, 2024, 04:47:37 PM »
Quote
Hardware RAID or software RAID?

Software RAID.

Quote
what is the output of
Code:
cat /proc/mdstat

[root@mail ~]# cat /proc/mdstat
Personalities : [raid1]
md9 : active raid1 sdc2[3] sda2[2]
      204736 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[2] sdc1[3]
      510976 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sda3[2] sdc3[3]
      1952664576 blocks super 1.2 [2/2] [UU]
      bitmap: 7/15 pages [28KB], 65536KB chunk

unused devices: <none>

Quote
also
Code:
lsblk

[root@mail ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  1,8T  0 disk
├─sda1            8:1    0  500M  0 part
│ └─md0           9:0    0  499M  0 raid1 /boot
├─sda2            8:2    0  200M  0 part
│ └─md9           9:9    0  200M  0 raid1 /boot/efi
└─sda3            8:3    0  1,8T  0 part
  └─md1           9:1    0  1,8T  0 raid1
    ├─main-root 253:0    0  1,8T  0 lvm   /
    └─main-swap 253:1    0  7,8G  0 lvm   [SWAP]
sdb               8:16   0  1,8T  0 disk
├─sdb1            8:17   0  500M  0 part
├─sdb2            8:18   0  200M  0 part
└─sdb3            8:19   0  1,8T  0 part
sdc               8:32   0  1,8T  0 disk
├─sdc1            8:33   0  500M  0 part
│ └─md0           9:0    0  499M  0 raid1 /boot
├─sdc2            8:34   0  200M  0 part
│ └─md9           9:9    0  200M  0 raid1 /boot/efi
└─sdc3            8:35   0  1,8T  0 part
  └─md1           9:1    0  1,8T  0 raid1
    ├─main-root 253:0    0  1,8T  0 lvm   /
    └─main-swap 253:1    0  7,8G  0 lvm   [SWAP]
sdd               8:48   0  1,8T  0 disk
└─sdd1            8:49   0   16M  0 part
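
This lsblk output confirms that sdb's partitions are no longer members of any array, while sdd still carries a leftover 16M partition. If sdb is to be retired, a cautious extra step (an assumption, not something the thread prescribes) is to clear its stale RAID metadata so it can never be auto-assembled later with months-old data:

Code:
# Assumes /dev/sdb is the disk being retired - verify with lsblk first.
mdadm --zero-superblock /dev/sdb1 /dev/sdb2 /dev/sdb3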


Quote
and
Code:
file -s /dev/sda
file -s /dev/sdb
file -s /dev/sdc

[root@mail ~]# file -s /dev/sda
/dev/sda: x86 boot sector; partition 1: ID=0xee, starthead 0, startsector 1, 3907029167 sectors, extended partition table (last)\011, code offset 0x0

[root@mail ~]# file -s /dev/sdb
/dev/sdb: x86 boot sector; partition 1: ID=0xee, starthead 0, startsector 1, 3907029167 sectors, extended partition table (last)\011, code offset 0x0

[root@mail ~]# file -s /dev/sdc
/dev/sdc: x86 boot sector; partition 1: ID=0xee, starthead 0, startsector 1, 3907029167 sectors, extended partition table (last)\011, code offset 0x0
« Last Edit: March 25, 2024, 04:50:06 PM by gromit60 »
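
The ID=0xee entries are GPT protective MBRs, and lsblk shows /boot/efi mirrored on md9, so this machine boots via UEFI rather than legacy BIOS. That makes the vanishing "sme server" entry look like a firmware boot entry tied to sdb's EFI system partition. One way to check, assuming the efibootmgr tool is installed:

Code:
# List firmware boot entries; -v shows the partition GUID each entry
# points at:
efibootmgr -v

# Compare those GUIDs against the PARTUUIDs of the ESP members:
blkid /dev/sda2 /dev/sdb2 /dev/sdc2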

Offline gromit60

Re: Raid problem
« Reply #3 on: March 25, 2024, 04:54:31 PM »
This is the output of Raidstatus on 8 February 2024:

/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb  8 16:39:15 2023
        Raid Level : raid1
        Array Size : 510976 (499.00 MiB 523.24 MB)
     Used Dev Size : 510976 (499.00 MiB 523.24 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb  7 04:04:58 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:0
              UUID : 84c8a147:73d7198c:120fc898:12f01df2
            Events : 1477

    Number   Major   Minor   RaidDevice State
       2       8        1        0      active sync   /dev/sda1
       3       8       33        1      active sync   /dev/sdc1

       1       8       17        -      faulty   /dev/sdb1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Feb  8 16:38:59 2023
        Raid Level : raid1
        Array Size : 1952664576 (1862.21 GiB 1999.53 GB)
     Used Dev Size : 1952664576 (1862.21 GiB 1999.53 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Feb  8 03:58:56 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:1
              UUID : f7b543a4:889c7921:bd624e3c:a6f486ea
            Events : 4449245

    Number   Major   Minor   RaidDevice State
       2       8        3        0      active sync   /dev/sda3
       3       8       35        1      active sync   /dev/sdc3

       1       8       19        -      faulty   /dev/sdb3
/dev/md9:
           Version : 1.0
     Creation Time : Wed Feb  8 16:38:53 2023
        Raid Level : raid1
        Array Size : 204736 (199.94 MiB 209.65 MB)
     Used Dev Size : 204736 (199.94 MiB 209.65 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Feb  4 01:00:44 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:9
              UUID : 2dc14045:23285690:32f6d807:7624155f
            Events : 318

    Number   Major   Minor   RaidDevice State
       2       8        2        0      active sync   /dev/sda2
       3       8       34        1      active sync   /dev/sdc2

       1       8       18        -      faulty   /dev/sdb2

On the 21st of March it is:

/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb  8 16:39:15 2023
        Raid Level : raid1
        Array Size : 510976 (499.00 MiB 523.24 MB)
     Used Dev Size : 510976 (499.00 MiB 523.24 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Mar 21 03:54:47 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:0
              UUID : 84c8a147:73d7198c:120fc898:12f01df2
            Events : 1495

    Number   Major   Minor   RaidDevice State
       2       8        1        0      active sync   /dev/sda1
       3       8       33        1      active sync   /dev/sdc1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Feb  8 16:38:59 2023
        Raid Level : raid1
        Array Size : 1952664576 (1862.21 GiB 1999.53 GB)
     Used Dev Size : 1952664576 (1862.21 GiB 1999.53 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Mar 21 04:07:55 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:1
              UUID : f7b543a4:889c7921:bd624e3c:a6f486ea
            Events : 4476844

    Number   Major   Minor   RaidDevice State
       2       8        3        0      active sync   /dev/sda3
       3       8       35        1      active sync   /dev/sdc3
/dev/md9:
           Version : 1.0
     Creation Time : Wed Feb  8 16:38:53 2023
        Raid Level : raid1
        Array Size : 204736 (199.94 MiB 209.65 MB)
     Used Dev Size : 204736 (199.94 MiB 209.65 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Mar 17 01:00:16 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:9
              UUID : 2dc14045:23285690:32f6d807:7624155f
            Events : 334

    Number   Major   Minor   RaidDevice State
       2       8        2        0      active sync   /dev/sda2
       3       8       34        1      active sync   /dev/sdc2
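
For reference, the per-array detail shown by Raidstatus can also be pulled directly from mdadm:

Code:
mdadm --detail /dev/md0 /dev/md1 /dev/md9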

Offline gromit60

Re: Raid problem
« Reply #4 on: April 03, 2024, 07:42:55 PM »
Does anyone have an idea why the server doesn't start if I detach sdb?

Online Jean-Philippe Pialasse

  • aka Unnilennium
    • http://smeserver.pialasse.com
Re: Raid problem
« Reply #5 on: April 03, 2024, 07:51:25 PM »
It could be one of these:
- missing grub in the MBR on the other disks (you demonstrated that they have it)
- the BIOS
- a wrong entry in the MBR pointing everything to /dev/sdb instead of the current disk

If it is the third one, you need either:
- to manually rewrite the MBR with grub pointing to the right disk
- or to swap sdc into sda's position and sda into sdb's former position. That way the MBR on the former sda will point at itself, as it is now in the second position.

(Put labels on the disks before you start swapping, because you could end up booting from the disk that is not up to date. I once rebuilt from the failed disk with weeks-old data and took a few days to realise it.)

NB: in your case you are using EFI, so this adds one more layer of complexity; search for: mbr grub efi software raid

https://unix.stackexchange.com/questions/230349/how-to-correctly-install-grub-on-a-soft-raid-1/230448#230448
« Last Edit: April 03, 2024, 07:55:10 PM by Jean-Philippe Pialasse »
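
On the EFI layer specifically: md9 uses metadata 1.0 (superblock at the end of the device), so the firmware sees each member of the /boot/efi mirror as a plain FAT partition, and a boot entry can be created per disk. A sketch, assuming the loader lives under \EFI\centos as on SME 10's CentOS 7 base - check the real path first:

Code:
# Find the actual loader path on the mounted ESP:
find /boot/efi -name '*.efi'

# Create a firmware entry for each remaining disk, pointing at
# partition 2 (that disk's ESP member); adjust the -l path to match
# whatever find reported:
efibootmgr -c -d /dev/sda -p 2 -L "sme server (sda)" -l '\EFI\centos\shimx64.efi'
efibootmgr -c -d /dev/sdc -p 2 -L "sme server (sdc)" -l '\EFI\centos\shimx64.efi'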

Offline ReetP

Re: Raid problem
« Reply #6 on: April 04, 2024, 09:28:03 PM »
Remember that grub uses (hdX) etc. rather than /dev/sdX.

Just because your disk is sdb doesn't mean grub thinks it is (hd1); I got caught by something similar years ago.

https://wiki.koozali.org/Raid:Manual_Rebuild#HowTo:_Write_the_GRUB_boot_sector

https://wiki.koozali.org/Hard_Disk_Partitioning#HowTo:_Write_the_GRUB_boot_sector
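
One way to see the mapping is from the grub2 command line (press 'c' at the boot menu); the (hdX) numbers come from the firmware's drive order, which is exactly what shifts when a disk is pulled. A sketch - the device list shown is illustrative:

Code:
grub> ls
(hd0) (hd0,gpt1) (hd0,gpt2) (hd0,gpt3) (hd1) ...
grub> ls (hd0,gpt1)/

Listing a partition's contents this way shows which (hdX) actually carries /boot, independently of its /dev/sdX name.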
...
1. Read the Manual
2. Read the Wiki
3. Don't ask for support on Unsupported versions of software
4. I have a job, wife, and kids and do this in my spare time. If you want something fixed, please help.

Bugs are easier than you think: http://wiki.contribs.org/Bugzilla_Help

If you love SME and don't want to lose it, join in: http://wiki.contribs.org/Koozali_Foundation