Koozali.org: home of the SME Server

Obsolete Releases => SME Server 8.x => Topic started by: mike_mattos on February 18, 2013, 08:57:25 PM

Title: GRUB and RAID after update
Post by: mike_mattos on February 18, 2013, 08:57:25 PM
A previously reliable system froze at GRUB during a Software Installer reconfigure/reboot.  Manually selecting Drive 2 (via F11) allowed it to boot, but did not fix the underlying issue.  I changed the HD boot sequence in the BIOS so the system is operational, but I fear the next update!

Is http://wiki.contribs.org/Raid:Manual_Rebuild#HowTo:_Write_the_GRUB_boot_sector the answer?

And if so, how do you know which letters are the 'good' and 'bad' volumes? 

Another quirk: Disk Redundancy always reported all volumes as UU, but for a while it said manual intervention was needed; now it reports everything as being in a clean state.
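Those "UU" flags come from /proc/mdstat, which the Disk Redundancy panel reads; checking it directly shows whether an array is clean or degraded. A minimal sketch (parsing a sample capture here so it is self-contained, since the real file needs a live md array):

```shell
# Check RAID1 health the way the Disk Redundancy panel does: look at the
# [UU] flags in /proc/mdstat. "[UU]" = both members up; "[_U]" or "[U_]"
# = degraded. On a real server you would read /proc/mdstat itself; the
# sample text below is an assumption standing in for it.
mdstat_sample='md2 : active raid1 sdb2[1] sda2[0]
      488279488 blocks [2/2] [UU]'

if printf '%s\n' "$mdstat_sample" | grep -q '\[UU\]'; then
  echo "array clean"
else
  echo "array degraded - manual intervention may be needed"
fi
```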

Title: Re: GRUB and RAID after update
Post by: stephdl on February 18, 2013, 09:49:33 PM
Yes, that is the answer: reinstall GRUB on your first hard disk, which is probably /dev/sda.

http://wiki.contribs.org/Raid:Manual_Rebuild#HowTo:_Write_the_GRUB_boot_sector

No problem, try it against /dev/sda on a working SME: skip the dd step and run the commands below.

I suspect that your failing GRUB is on /dev/sda.

Code:
[root@ ~]# grub

    GNU GRUB  version 0.95  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]

grub> device (hd1) /dev/sda

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub> quit

After that, you can try to boot from the first disk.
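One rough way to sanity-check that stage1 actually landed on a disk is to look for the literal "GRUB" marker string that legacy GRUB (0.9x) embeds in the 512-byte MBR. A sketch, demonstrated on a dummy image so it is runnable anywhere; on the real server you would point dd at the disk itself, and the /dev/sda name is an assumption:

```shell
# Legacy GRUB stage1 embeds the string "GRUB" in the MBR, so its presence
# is a quick (if crude) sign that the boot sector was written.
# We build a dummy MBR here to keep the sketch self-contained; on a live
# system, replace /tmp/mbr.img with e.g.: dd if=/dev/sda bs=512 count=1
dd if=/dev/zero of=/tmp/mbr.img bs=512 count=1 2>/dev/null
printf 'GRUB' | dd of=/tmp/mbr.img bs=1 seek=390 conv=notrunc 2>/dev/null

# grep -a treats the binary sector as text so the match works.
if dd if=/tmp/mbr.img bs=512 count=1 2>/dev/null | grep -aq 'GRUB'; then
  echo "GRUB signature found"
else
  echo "no GRUB signature - boot sector probably not written"
fi
```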
Title: Re: GRUB and RAID after update
Post by: mike_mattos on February 18, 2013, 10:38:58 PM
Since I reversed the boot order of the drives,  is the 'first' drive the former 'sdb'?

That is, if you remove the drive sda, is the existing (former sdb) drive now sda?   

Logic suggests sda is always the first boot drive, but I'd hate to get it wrong.  So is it best to always grub both drives in a raid pair?
Title: Re: GRUB and RAID after update
Post by: stephdl on February 18, 2013, 10:50:11 PM
Normally they don't move, especially in a RAID configuration; you would have a lot of issues if the kernel decided to change the disk names.
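To sidestep the which-disk-is-which question entirely, a common practice is to write GRUB to both RAID1 members, so the machine boots no matter which drive the BIOS tries first. A sketch using legacy GRUB's batch mode; the /dev/sda and /dev/sdb names are assumptions, so confirm your member devices against /proc/mdstat first:

```shell
# Install legacy GRUB stage1 on both members of the RAID1 pair.
# Each disk is temporarily mapped as (hd0), so the embedded stage1
# works whichever disk the BIOS happens to boot from.
grub --batch <<EOF
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```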
Title: Re: GRUB and RAID after update
Post by: mike_mattos on February 23, 2013, 10:49:31 PM
I've always hated IBM for using 0,1 for HDs, 1,2 for printers, and A,B for logical drives.

My 'new' HD was /dev/hdc, but also hd1 for GRUB.  It reported "stage1" exists... no, and the install succeeded.  And extended tests show no errors on either drive!
Title: Re: GRUB and RAID after update
Post by: stephdl on February 24, 2013, 09:25:34 AM
Therefore, if you can boot from all of your disks, you should add [Resolved] to the title of your topic to help other users find a solution to their problem.