Koozali.org: home of the SME Server

Contribs.org Forums => General Discussion => Topic started by: SchulzStefan on January 08, 2014, 10:29:29 AM

Title: SME 7.5.1 - RAID error
Post by: SchulzStefan on January 08, 2014, 10:29:29 AM
RAID error reported by the admin console (su admin):

Raid status:

Personalities: [raid1]
md2: active raid1 sda2[0]
312464128 blocks [2/1] [U_]
md1: active raid1 sdb1[0]
104320 blocks [2/1] [U_]
unused devices: <none>

Translated from German:

The free disk space has to be of equal size.
Manual intervention may be needed.

Actual state of disks:
installed disks: sda sdb
used disks: sda sdb

Both disks have equal size. No errors from smartctl.

#:cat /proc/mdstat

Personalities : [raid1]
md2 : active raid1 sda2[0]
      312464128 blocks [2/1] [U_]

md1 : active raid1 sdb1[0]
      104320 blocks [2/1] [U_]

unused devices: <none>
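For what it's worth, the `[2/1] [U_]` fields mean each mirror expects two members but only one is active; the underscore marks the missing slot. A quick sketch for spotting degraded arrays, fed a copy of the output above (on the live system, grep `/proc/mdstat` directly):

```shell
# Flag degraded md arrays: a "_" in the [UU] status field of /proc/mdstat
# means a mirror member is missing. A here-doc copy of the output above is
# used here; on the live box:  grep -B1 '\[U_\]' /proc/mdstat
grep -B1 '\[U_\]' <<'EOF'
md2 : active raid1 sda2[0]
      312464128 blocks [2/1] [U_]
md1 : active raid1 sdb1[0]
      104320 blocks [2/1] [U_]
EOF
```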

Could anyone give advice on how to fix this? Thanks in advance
stefan
Title: Re: SME 7.5.1 - RAID error
Post by: SchulzStefan on January 10, 2014, 03:21:06 PM
Some more information:

# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.01
  Creation Time : Fri Dec  3 17:14:14 2010
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
    Device Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Jan  9 22:00:10 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : d153426f:ddabc8b2:a65dd0eb:02dd6596
         Events : 0.7686

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        -      removed

and:

# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Fri Dec  3 17:14:14 2010
     Raid Level : raid1
     Array Size : 312464128 (297.99 GiB 319.96 GB)
    Device Size : 312464128 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jan 10 15:06:57 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 85e555ec:d07983c8:af08e5dd:6de9ccdd
         Events : 0.48416526

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        -      removed

further:

[root@orion ~]# cat /proc/partitions
major minor  #blocks  name

   8     0  312571224 sda
   8     1     104391 sda1
   8     2  312464250 sda2
   8    16  312571224 sdb
   8    17     104391 sdb1
   8    18  312464250 sdb2
   9     1     104320 md1
   9     2  312464128 md2
 253     0  307855360 dm-0
 253     1    4587520 dm-1
   8    32  488386584 sdc
   8    33  488384001 sdc1

sdc is an external USB-drive for backups.

So can I fix the error just with:

1.) mdadm /dev/md1 -a /dev/sda1
2.) mdadm /dev/md2 -a /dev/sdb2
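Sketching that sequence with a dry-run guard, so the commands can be reviewed before touching the arrays (device names taken from the mdadm --detail output above):

```shell
# DRY_RUN=echo only prints the commands; set DRY_RUN= to execute for real.
DRY_RUN=echo

# md1 currently holds only sdb1 and md2 only sda2, so the missing members are:
$DRY_RUN mdadm /dev/md1 -a /dev/sda1
$DRY_RUN mdadm /dev/md2 -a /dev/sdb2

# The resync runs in the background; check until both arrays show [UU]:
$DRY_RUN cat /proc/mdstat
```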

Thanks in advance for any help.
stefan
Title: Re: SME 7.5.1 - RAID error
Post by: janet on January 11, 2014, 01:55:44 AM
SchulzStefan

Whenever drives become problematic as part of a RAID array, the first thing you should do is perform a thorough test on BOTH (or all) drives.
You can use smartctl or the drive manufacturer's boot CD or UBCD to test all drives (do the long, thorough tests).
http://wiki.contribs.org/Monitor_Disk_Health

If you re-add drives to the array & they are faulty, then you will just have ongoing problems & potential loss of data.

Also see these
http://wiki.contribs.org/Raid:Manual_Rebuild
http://wiki.contribs.org/Raid
http://wiki.contribs.org/Raid:Growing
http://wiki.contribs.org/Raid:LSI_Monitoring
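A sketch of the long self-test from the command line (dry-run guard included; drop it to actually start the tests, which take several hours per drive):

```shell
DRY_RUN=echo   # set DRY_RUN= to really run the commands

# Kick off the extended (long) self-test on each member disk:
for disk in /dev/sda /dev/sdb; do
    $DRY_RUN smartctl -t long "$disk"
done

# Later, once the tests have finished, read the results back:
for disk in /dev/sda /dev/sdb; do
    $DRY_RUN smartctl -l selftest "$disk"   # self-test log
    $DRY_RUN smartctl -H -A "$disk"         # health verdict + raw attributes
done
```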
Title: Re: SME 7.5.1 - RAID error
Post by: SchulzStefan on January 11, 2014, 03:25:55 PM
janet

thank you for your reply.

As I wrote, smartctl is reporting no errors.

Quote
Both disks have equal size. No errors from smartctl.

I read all the links you gave me, but I didn't find any hint in those pages (nor by googling) about this exact error. So my question stands: can I add the missing partitions to the arrays?

Quote
So can I fix the error just with:

1.) mdadm /dev/md1 -a /dev/sda1
2.) mdadm /dev/md2 -a /dev/sdb2

Assuming the disks are physically healthy, will hot-adding break the system or not?
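For the record: hot-adding a member to a degraded raid1 does not take the array offline; the kernel resyncs the new member in the background while the system stays up. During the rebuild, /proc/mdstat gains a progress line; a sketch of pulling it out (fed sample output here, not taken from this box):

```shell
# Extract the recovery progress from /proc/mdstat during a rebuild.
# A sample is fed via here-doc; on the live system:
#   grep -o 'recovery = *[0-9.]*%' /proc/mdstat
grep -o 'recovery = *[0-9.]*%' <<'EOF'
md2 : active raid1 sdb2[2] sda2[0]
      312464128 blocks [2/1] [U_]
      [=>...................]  recovery =  7.4% (23320064/312464128) finish=80.3min speed=59968K/sec
EOF
```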