
Is my RAID rebuilt

Offline axessit

Is my RAID rebuilt
« on: March 22, 2011, 01:14:18 PM »
I have bought a larger drive to add capacity to my server and followed the howto at http://wiki.contribs.org/Raid:Manual_Rebuild, since I am replacing a 2 x 160 GB RAID 1 pair with 2 x 1 TB drives.

I shut down, removed sdb and replaced it with the larger drive, booted, and added the drive to the array OK. I then checked the partition tables and manually created the partitions on the new drive to match sda. When I put the disk back into the array, it said "syncing disks", then returned to the prompt in less than a minute.
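
I created the partitions manually with fdisk, but an sfdisk copy does the same job; roughly this (a sketch only; it assumes sda is the surviving drive and sdb the new one, so check with fdisk -l before copying anything):

# copy the partition table from the surviving drive to the new drive
sfdisk -d /dev/sda | sfdisk /dev/sdb
# re-add the new partition to the array
mdadm /dev/md1 --add /dev/sdb1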

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

unused devices: <none>


That seems to show the RAID array is good, as do the admin emails. I have rebooted with no problems, but before I pull drive A out and replace it, how do I know the new drive is good? I can't believe it copied 120 GB in less than a minute.
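
The only checks I know of are the standard status commands (nothing SME-specific, as far as I can tell):

# watch the rebuild progress live
watch cat /proc/mdstat
# per-array detail, including state and any resync percentage
mdadm --detail /dev/md1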

Once that is done, I'll grow the partitions...

HP ML110 G5 server.




Offline byte

Re: Is my RAID rebuilt
« Reply #1 on: March 22, 2011, 01:34:38 PM »
Quote from: axessit on March 22, 2011, 01:14:18 PM

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

unused devices: <none>

Something doesn't look right: you don't appear to have an md2. md1 is your /boot partition, which is why it took less than a minute.
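
Something like this will show where your LVM physical volume actually lives (plain LVM status commands; on a default install the PV should sit on /dev/md2, not on a bare disk partition):

# list physical volumes and the devices they sit on
pvs
# full detail for each PV
pvdisplay
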
--[byte]--

Have you filled in a bug report over at http://bugs.contribs.org? Please don't wait to be told; this way you help us to help you and others. Thanks!

Offline axessit

Re: Is my RAID rebuilt
« Reply #2 on: March 28, 2011, 01:43:23 AM »
Having made some comparisons with another server, I realised I was missing the vital LVM-on-RAID layer: it turned out that somehow the LVM was only on one disk. Proceeding carefully from that point, and using a third drive to clone and experiment on, I managed to shift my partition into a RAID array on the fly by adapting the very good tutorial at http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-ubuntu-10.04 (note that some directory references differ from the SME setup), and so created and moved my LVM onto a RAID array (rough sketch below).

I had a few issues when I tried to reboot: I got dumped to the GRUB screen at one stage, then hit the dreaded kernel panic when it tried to mount the LVM volume, but by searching the forums and bugs I got things back up and running.
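
The gist of that migration, as I followed it, is roughly this (a sketch only, adapted from that tutorial; the partition names are examples, main is the SME default volume group, and you want a backup and the output of vgs in front of you before trying it):

# create a degraded RAID1 with the new partition and a "missing" member
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb2
# make the array an LVM physical volume and pull it into the volume group
pvcreate /dev/md2
vgextend main /dev/md2
# migrate the data off the bare partition, then drop it from LVM
pvmove /dev/sda2 /dev/md2
vgreduce main /dev/sda2
pvremove /dev/sda2
# finally add the old partition as the array's second member
mdadm /dev/md2 --add /dev/sda2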

Once I got my system into a functioning RAID state with one small drive and one large drive (partitioned the same size as the small one), I removed the small drive, replaced it with a brand new large drive, and just let the RAID array manager in the admin console automatically add it to the array and sync up.

At that stage I did an fdisk -l and noticed the partition table on the new drive was screwed: it warned that a partition did not end on a cylinder boundary, pretty much as per the wiki at http://wiki.contribs.org/Raid:Manual_Rebuild. So I removed the drive from the array, partitioned it the same as the original, put it back into the array as stated, and it all synced up again.
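
The remove-and-re-add dance is roughly this (a sketch; sdb is the drive with the bad partition table here, and md1/md2 stand in for whatever cat /proc/mdstat shows on your system):

# mark the partitions failed and pull them out of their arrays
mdadm /dev/md1 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2
# repartition sdb to match sda, then re-add and let it resync
mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdb2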

That all seemed to go well. Just for good measure, I added GRUB back onto the drive.

Question: how do I know GRUB is loaded correctly on the second drive (or indeed the first) before I try a reboot?
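
The closest I have got is to look for the GRUB stage 1 signature in the MBR, and to reinstall it explicitly (GRUB legacy syntax, as on SME; hd1 assumes the BIOS maps the second drive to sdb):

# crude check: stage 1 embeds the string GRUB in the boot sector
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB
# reinstall on the second drive from the grub shell
grub --batch <<EOF
root (hd1,0)
setup (hd1)
quit
EOF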

Then I tried to resize the disk as per the howto at http://wiki.contribs.org/Raid. Things seemed to happen pretty quickly, so I assumed it had worked, but my shared drives didn't show any bigger. So I removed each drive from the array one at a time, repartitioned the LVM partition to maximum size, put it back into the array and let it sync up (all of which took ages, of course, but it was all done on the fly, so no server downtime). Once that was done I tried the grow command again. This time I noticed that after the first command

mdadm --grow /dev/md2 --size=max

the command prompt returned straight away, but the disk went into full activity, which seemed a much better reaction. Maybe the instructions should say to leave it to grow and wait until the disk activity has ceased (it took a good hour or so I think, I wasn't timing it) before proceeding further. The rest then went well; it is now doing the last command (ext2online) and my drive is growing as I type.
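
For anyone following along, the full grow sequence came out roughly like this (a sketch; main and root are the SME default volume group and logical volume names, so check yours with vgs and lvs first):

# grow the array into the enlarged partitions, then wait for the
# resync in /proc/mdstat to finish
mdadm --grow /dev/md2 --size=max
# grow the LVM stack on top of it
pvresize /dev/md2
lvresize -l +100%FREE /dev/main/root
# finally grow the filesystem online (ext2online on this vintage of SME)
ext2online /dev/main/root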

In the end I have ended up with RAID volumes md1 and md5. It all works, so I'm happy enough. But when it wouldn't boot I was following a bug fix about creating a new array and renaming the volume group etc., all from the rescue CD.

I guess there is no reason to change except for uniformity, but is there a simple way of renaming the array to md2 without upsetting the boot?
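
For the record, the closest thing I have turned up is reassembling under the new name with --update=super-minor, which rewrites the preferred minor in the old 0.90 superblocks SME uses (untested on my system; it would need doing from the rescue CD, sda2/sdb2 below are examples, and mdadm.conf plus anything else referencing md5 would need updating afterwards):

# from the rescue CD, with the array stopped
mdadm --stop /dev/md5
# reassemble under the new minor and write it back to the superblock
mdadm --assemble /dev/md2 --update=super-minor /dev/sda2 /dev/sdb2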