Koozali.org: home of the SME Server

Yet another RAID Question

Offline gbentley

Yet another RAID Question
« on: May 30, 2012, 09:57:47 AM »
I've been getting the degraded-array email, so ...

[root@cserver ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[0]
      488279488 blocks [2/1] [U_]

md1 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

unused devices: <none>

[root@server ~]# mdadm --query --detail /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Wed Feb 10 13:33:22 2010
     Raid Level : raid1
     Array Size : 488279488 (465.66 GiB 500.00 GB)
    Device Size : 488279488 (465.66 GiB 500.00 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Wed May 30 08:55:06 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
           UUID : aec1bdd8:0c9b617f:0cdc26b6:8a08e92e
         Events : 0.20528800
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        -      removed

When trying to add back in ...

[root@server ~]# mdadm --add /dev/md2 /dev/sda2
mdadm: Cannot open /dev/sda2: Device or resource busy

What next?
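
The "Device or resource busy" error is itself a clue: sda2 is already the active member of md2, which is why the kernel refuses to touch it, while the [U_] in the mdstat output and the slot marked "removed" in the --detail output show that it is the second slot that is empty. A quick read-only way to double-check which partition is missing, assuming the device names shown above:

[root@server ~]# cat /proc/mdstat
[root@server ~]# mdadm --detail /dev/md2 | grep -E 'active sync|removed'

The device listed as "active sync" is the one already in use; the empty slot is the one that still needs a partition added back.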


"If you don't know what you want, you end up with a lot you don't."

Offline Stefano

Re: Yet another RAID Question
« Reply #1 on: May 30, 2012, 10:18:08 AM »
you are trying to add the wrong disk..

it should be sdb2..

try again and let us know..

P.S. while the array is re-synchronizing, please take a look at your /var/log/messages file.. there you'll find the reason why your sdb2 partition has been kicked out of the array..
P.S.2: keep a new, good hard drive nearby..
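
Concretely, a minimal sketch of the re-add and the rebuild watch, assuming the device names from the outputs above:

[root@server ~]# mdadm --add /dev/md2 /dev/sdb2
[root@server ~]# watch cat /proc/mdstat

While the resync runs, /proc/mdstat shows a progress bar with an estimated finish time, and /var/log/messages records why sdb2 was dropped in the first place.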

Offline gbentley

Re: Yet another RAID Question
« Reply #2 on: May 30, 2012, 02:19:37 PM »
Of course! OK, so this is the bottom of /var/log/messages:

May 30 11:21:48 server kernel: md: bind<sdb2>
May 30 11:21:48 server kernel: RAID1 conf printout:
May 30 11:21:48 server kernel:  --- wd:1 rd:2
May 30 11:21:48 server kernel:  disk 0, wo:0, o:1, dev:sda2
May 30 11:21:48 server kernel:  disk 1, wo:1, o:1, dev:sdb2
May 30 11:21:48 server kernel: md: syncing RAID array md2
May 30 11:21:48 server kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
May 30 11:21:48 server kernel: md: using maximum available idle IO bandwith (but not more than 200000 KB/sec) for reconstruction.
May 30 11:21:48 server kernel: md: using 128k window, over a total of 488279488 blocks.
May 30 11:29:48 server sshd(pam_unix)[4971]: session closed for user root
May 30 11:30:04 server sshd(pam_unix)[4935]: session closed for user root
May 30 13:00:28 server kernel: md: md2: sync done.
May 30 13:00:28 server kernel: RAID1 conf printout:
May 30 13:00:28 server kernel:  --- wd:2 rd:2
May 30 13:00:28 server kernel:  disk 0, wo:0, o:1, dev:sda2
May 30 13:00:28 server kernel:  disk 1, wo:0, o:1, dev:sdb2


[root@server log]# mdadm --query --detail /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Wed Feb 10 13:33:22 2010
     Raid Level : raid1
     Array Size : 488279488 (465.66 GiB 500.00 GB)
    Device Size : 488279488 (465.66 GiB 500.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Wed May 30 13:16:20 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : aec1bdd8:0c9b617f:0cdc26b6:8a08e92e
         Events : 0.20535270

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2


[root@server log]# cat /proc/mdstat

Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      488279488 blocks [2/2] [UU]

md1 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

unused devices: <none>

All go, thanks!
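
With both members back in sync, a couple of optional follow-up checks can be run (a sketch; the sysfs path assumes a kernel that exposes the md interface, as 2.6.x and later do):

[root@server ~]# mdadm --detail --scan
[root@server ~]# echo check > /sys/block/md2/md/sync_action
[root@server ~]# cat /sys/block/md2/md/mismatch_cnt

The first command prints the ARRAY lines mdadm knows about, the echo starts a read-only consistency scrub, and mismatch_cnt should end up at 0 once the scrub finishes (small non-zero counts can be benign, e.g. from swap areas inside the volume).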
"If you don't know what you want, you end up with a lot you don't."

Offline Stefano

Re: Yet another RAID Question
« Reply #3 on: May 30, 2012, 02:27:30 PM »
search /var/log/messages for anything related to sdb2
Code:
grep sdb2 /var/log/messages

you'll find the reason why your hd was kicked out of the array..
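
If the log points at the drive itself rather than, say, an unclean shutdown, the disk's SMART data is worth a look as well; a sketch, assuming the smartmontools package is installed:

[root@server ~]# smartctl -a /dev/sdb | grep -iE 'result|reallocated|pending'
[root@server ~]# smartctl -t short /dev/sdb

Growing reallocated or pending sector counts are the usual sign that it is time for the spare drive mentioned above.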

Offline gbentley

Re: Yet another RAID Question
« Reply #4 on: May 30, 2012, 02:31:59 PM »

[root@server ~]# grep sdb2 /var/log/messages
May 30 07:46:57 server kernel:  sdb: sdb1 sdb2
May 30 07:46:57 server kernel: md: bind<sdb2>
May 30 07:46:57 server kernel: md: kicking non-fresh sdb2 from array!
May 30 07:46:57 server kernel: md: unbind<sdb2>
May 30 07:46:57 server kernel: md: export_rdev(sdb2)
May 30 07:46:57 server kernel: md: could not bd_claim sdb2.
May 30 07:46:57 server kernel: md: considering sdb2 ...
May 30 07:46:57 server kernel: md:  adding sdb2 ...
May 30 07:46:57 server kernel: md: md2 already running, cannot run sdb2
May 30 07:46:57 server kernel: md: export_rdev(sdb2)
May 30 08:10:51 server kernel:  sdb: sdb1 sdb2
May 30 08:10:51 server kernel: md: bind<sdb2>
May 30 08:10:51 server kernel: md: kicking non-fresh sdb2 from array!
May 30 08:10:51 server kernel: md: unbind<sdb2>
May 30 08:10:51 server kernel: md: export_rdev(sdb2)
May 30 08:10:51 server kernel: md: could not bd_claim sdb2.
May 30 08:10:51 server kernel: md: considering sdb2 ...
May 30 08:10:51 server kernel: md:  adding sdb2 ...
May 30 08:10:51 server kernel: md: md2 already running, cannot run sdb2
May 30 08:10:51 server kernel: md: export_rdev(sdb2)
May 30 11:21:48 server kernel: md: bind<sdb2>
May 30 11:21:48 server kernel:  disk 1, wo:1, o:1, dev:sdb2
May 30 13:00:28 server kernel:  disk 1, wo:0, o:1, dev:sdb2

Not sure what 'could not bd_claim' means?
"If you don't know what you want, you end up with a lot you don't."