Koozali.org: home of the SME Server

lvm2 volume group and persistence

johndknm

lvm2 volume group and persistence
« on: September 17, 2006, 06:21:41 AM »
Hello, can I please ask for some help?
I've been reading and learning about LVM and adding disks (I'd never heard of it until a week ago).

I did a fresh install of SME 7 and then loaded VMware Server in.  All working fine.
The install was on a pair of 80 GB SATA drives (the automatic RAID setup by the system went fine; I have a volume group Main as expected).

Basically I have 3 SATA drives off a Promise controller, set up as a RAID 5 array, which I installed later.  This is 660 GB of RAID 5 workspace, where I want to put VMware images. The drives are sdc, sdd and sde.

So I climbed the learning curve of setting up the RAID 5 array, which goes something like this (probably not exactly what I typed at the time when reading docs, but the idea is there):

fdisk on each disk to create a primary partition, set type to fd
mdadm --create to create md3 from sdc, sdd and sde
pvcreate /dev/md3
check /proc/mdstat  -->  I have a new md3 raid5 personality with UUU for the 3 disks.  AFAIK it's all working correctly.  The RAID 5 array took about 130 minutes to create.

create a new volume group vg_data
create a new logical volume in the volume group called raid5
create /mnt/test
mount /dev/vg_data/raid5 /mnt/test
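The steps above can be sketched as one sequence (a hedged reconstruction, not exactly what was typed; device names are from the post, and the -l 100%FREE sizing is an assumption):

```shell
# Partition each member disk: one primary partition, type fd (Linux raid autodetect)
for d in /dev/sdc /dev/sdd /dev/sde; do
    fdisk "$d"    # interactively: n, p, 1, <defaults>, t, fd, w
done

# Build the 3-disk RAID 5 array
mdadm --create /dev/md3 --level=5 --raid-devices=3 \
      /dev/sdc1 /dev/sdd1 /dev/sde1

# Layer LVM on top of the array
pvcreate /dev/md3                       # mark md3 as a physical volume
vgcreate vg_data /dev/md3               # new volume group
lvcreate -l 100%FREE -n raid5 vg_data   # one LV using all the space (assumed sizing)
mkfs -t ext3 /dev/vg_data/raid5         # filesystem, as done later in the thread

# Mount it
mkdir -p /mnt/test
mount /dev/vg_data/raid5 /mnt/test
```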

All goes okay.  The mount works when I do it at the command line.

Now I want these changes to survive a reboot, so I added the mount to fstab.  But they don't:

mount point not found!

On reboot the md3 personality is not detected automatically, the fstab mount fails, and the volume group is not detected/reported by vgdisplay either.

I read that I needed a line for md3 in mdadm.conf and added this:
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=ABIGLONGUUIDNUMBER
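Rather than typing the ARRAY line by hand, mdadm can generate it with the array's real UUID (a sketch; it assumes md3 is currently assembled):

```shell
# Append an autogenerated ARRAY line (with the real UUID) for md3
mdadm --detail --scan | grep /dev/md3 >> /etc/mdadm.conf

# mdadm.conf also needs a DEVICE line telling mdadm what to scan, e.g.:
#   DEVICE partitions
cat /etc/mdadm.conf
```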

reboot --> still no volume group, and md3 doesn't get detected despite the new line above

So what I can do now is:
mdadm -A -s   to assemble the array (it then detects md3 correctly)
vgscan (to find the volume group successfully)
vgchange -a y (to activate the volume group)

and the volume group and array are all accessible and working and can be mounted.
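For reference, the same manual recovery with a sanity check at each layer looks like this (a sketch; all are standard mdadm/LVM commands):

```shell
mdadm -A -s                     # assemble all arrays listed in mdadm.conf
cat /proc/mdstat                # md3 should show a raid5 personality with [UUU]
vgscan                          # rediscover volume groups on the new PV
vgchange -a y                   # activate them
vgdisplay vg_data               # VG should now be listed and available
lvdisplay /dev/vg_data/raid5    # LV should be visible
mount /dev/vg_data/raid5 /mnt/test
```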

Could some kind soul please tell me where I should be adding things to get this to happen automatically?  I think I have successfully got the array working (which surprised me no end!).

I know it's nonstandard.  I do believe there is a need for putting user data on a separate array that can be moved.

Thanks in advance

Offline ldkeen

Re: lvm2 volume group and persistence
« Reply #1 on: September 17, 2006, 11:09:27 AM »
John,
Quote from: "johndknm"
Now I want these changes to survive a reboot and add the mount to fstab.  But they dont.

mount point not found!


What was the entry that you added to fstab? Did you format the LogicalVolume? I think after creating the LogicalVolume you need to put a filesystem on it (format it as ext3). Then you would add something like the following to /etc/fstab
Code: [Select]
/dev/vg_data/raid5  /mnt/test    ext3    defaults 0 0
Lloyd

johndknm

lvm2 volume group and persistence
« Reply #2 on: September 18, 2006, 01:05:46 AM »
Sorry,  Thanks Lloyd,

Yes, the drive is accessible and formatted, and is actually already being used as a target for a VMware guest OS. I know it's accessible and writeable/readable.

I used mkfs -t ext3 to put a filesystem on it.

/dev/vg_data/raid5 /mnt/test ext3 defaults 0 0    was *exactly* what I put in fstab.  I separated the 6 fields with tabs rather than spaces.

Maybe I did something wrong in editing the fstab file?? (But the entry is there between reboots.)

Offline ldkeen

lvm2 volume group and persistence
« Reply #3 on: September 18, 2006, 02:32:36 AM »
John,
Quote from: "Lloyd"
Then you would add something like the following to /etc/fstab
/dev/vg_data/raid    /mnt/test    ext3    defaults 0 0

Sorry my bad, not thinking. That fstab entry should be:
Code: [Select]
/dev/mapper/vg_data-raid5    /mnt/test     ext3    defaults  0  0
Lloyd

Offline ldkeen

lvm2 volume group and persistence
« Reply #4 on: September 18, 2006, 04:13:53 AM »
John,
What is the output of:
Code: [Select]
#cat /etc/mtab
Do you have an entry in there similar to:
Code: [Select]
/dev/mapper/vg_data-raid5    /mnt/test     ext3    defaults  0  0
If not, put the above entry into /etc/mtab and leave the fstab entry as it was before. That is:
Code: [Select]
/dev/vg_data/raid5  /mnt/test    ext3    defaults 0 0
Lloyd

johndknm

lvm2 volume group and persistence
« Reply #5 on: September 18, 2006, 02:28:55 PM »
mtab has /dev/mapper/vg_data-raid5 /mnt/test ext3 rw 0 0 as predicted
fstab has /dev/vg_data/raid5 /mnt/test ext3 defaults 0 0

These were in place... it still doesn't get the VG activated.

When I watch it boot, it finds md1 and vg Main but doesn't find md3 or vg_data.

What have I missed?

I still get joy at the CLI with mdadm -A -s to get it to find the md3 disk, and then vgscan and vgchange -a y to get it to set up the volume group.

I went into mdadm.conf and manually changed "partitions" to explicitly enumerate the drives, and this didn't change anything:
DEVICE  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

I do notice that the mdadm messages on boot don't find any disks??  I am running this with a Promise SATA300 TX4 controller.  It's obviously eventually finding the disks, but could it be only after mdadm does its thing on boot?
dmesg seems to find the disks later (after the md section).

I have googled some more and found some references to the order the kernel modules load being relevant to detecting the sata_promise driver.  I wonder if that's the issue here. IIRC this is the module that finds the disks.  mdadm might load before the sata_promise driver does... I'll have to check tonight.
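One way to check the load order is to look inside the initrd itself (a sketch; it assumes the initrd is a gzipped cpio archive, as on this RHEL4-based system):

```shell
# Unpack a copy of the running kernel's initrd and inspect its init script
mkdir /tmp/initrd-inspect
cd /tmp/initrd-inspect
zcat "/boot/initrd-$(uname -r).img" | cpio -idm
grep insmod init    # sata_promise should appear before the raid/lvm steps
```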

Offline ldkeen

lvm2 volume group and persistence
« Reply #6 on: September 19, 2006, 12:44:31 AM »
John,
From http://kbase.redhat.com/faq/FAQ_96_4842.shtm
Quote
Note that new volume groups must be activated using the lvm vgchange -a y command

Did you do the above command after creating the new VolumeGroup?
Lloyd

johndknm

lvm2 volume group and persistence
« Reply #7 on: September 19, 2006, 12:50:22 AM »
To be honest this is now dating back some days, and I don't know if I did or not.  I was pursuing the vgchange -a y vg_data thing last night: I looked in /dev/mapper and executed that command, which placed a vg_data block device in /dev/mapper that wasn't there before, but that didn't fix things either.

vgchange -a y vg_data I have done
lvm vgchange -a y vg_data I haven't tried

Offline ldkeen

lvm2 volume group and persistence
« Reply #8 on: September 19, 2006, 01:33:35 AM »
Quote from: "johndknm"
Ihave googled some more and found some references to the order the kernel modules load being relevant to detecting the sata_promise driver.  I wonder if thats the issue here.

I think you're probably right here. For your setup to work the kernel would need to support that controller, and AFAICT you would need at least a 2.6.11 kernel (it'll still work with modules, but the module must load before the LVM stuff). Looks like you have two options: wait for a newer kernel to appear in the updates, or build the driver into the existing initrd (or just stick with mounting the thing after booting). If you want to try the latter, I think I have some info around from previous setups.
Lloyd

johndknm

lvm2 volume group and persistence
« Reply #9 on: September 19, 2006, 12:16:59 PM »
Thanks for all your help.
I tried sticking a line "alias scsi_hostadapter sata_promise" into modprobe.conf as the first line, in the hope that might load it first, but no joy.

I'm on kernel 2.6.9.

The next alternative is some commands in a boot script, I guess --> off to google more.

mdadm -A -s
vgchange -a y vg_data
mount /dev/mapper/vg_data-raid5 /mnt/test

seems to fix access anyway

Offline ldkeen

lvm2 volume group and persistence
« Reply #10 on: September 19, 2006, 01:59:07 PM »
John,
Here's something interesting I found
Quote
and rebuild the initrd so volumes will be available during boot. You shouldn't have to do this, but lvm won't detect devices unless this is done.

[root@erasrb2 ~]# cd /boot
[root@erasrb2 boot]# cp initrd-2.6.9-5.0.5.ELsmp.img initrd-2.6.9-5.0.5.ELsmp.img.bak
[root@erasrb2 boot]# rm initrd-2.6.9-5.0.5.ELsmp.img
[root@erasrb2 boot]# mkinitrd initrd-2.6.9-5.0.5.ELsmp.img 2.6.9-5.0.5.ELsmp

The above came from here http://narawiki.umiacs.umd.edu/twiki/bin/view/Lab/BrickNotes#RedhatEnterprise4
Just be careful, you might hose your system. Are you able to test?
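If the missing driver is the problem, mkinitrd can also be told to pull it in early (a sketch; --preload and -f are standard RHEL mkinitrd options, and the version string follows the quoted example rather than your system):

```shell
cd /boot
# Keep a fallback copy, then rebuild with sata_promise loaded early
cp initrd-2.6.9-5.0.5.ELsmp.img initrd-2.6.9-5.0.5.ELsmp.img.bak
mkinitrd -f --preload=sata_promise \
    initrd-2.6.9-5.0.5.ELsmp.img 2.6.9-5.0.5.ELsmp
```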
Lloyd

johndknm

lvm2 volume group and persistence
« Reply #11 on: September 20, 2006, 12:06:49 AM »
Yeah.  System is in test phase and can be redone.  I might just ghost it up first though ;)

I went around the issue via /etc/e-smith/events/local with a shell script to

mdadm -A -s
vgchange -a y vg_data
mount /dev/vg_data/raid5 /mnt/test

and that seems to work.
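For the record, a slightly fuller version of that script (a sketch; the S99 filename is hypothetical, and the already-mounted guard is an addition):

```shell
#!/bin/sh
# e.g. /etc/e-smith/events/local/S99raid5 (hypothetical name)
mdadm -A -s                  # assemble all arrays listed in mdadm.conf
vgchange -a y vg_data        # activate the volume group
# Only mount if /mnt/test is not already mounted
if ! grep -q ' /mnt/test ' /proc/mounts; then
    mount /dev/vg_data/raid5 /mnt/test
fi
```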

Thanks for your help.  This is all completely new territory for me.  Mind you, having recently had a critical RAID controller (LSI Elite1600) fail to rebuild a RAID 1 drive, and having to migrate to a new PCI controller on a running Windows 2k3 system (which all worked, thank god), I must confess to having a new respect for the whole UUID tagging of disks and software management as a glorious thing!

Ideally for me SME 7 would be a turnkey install on a RAID 1 set of boot drives, and I could then easily drop in a RAID 5 array hanging off a simple (non-RAID) SATA controller.  It's been pretty easy to get the VG thing happening though.  Just not the bit about persistence ;)

johndknm

lvm2 volume group and persistence
« Reply #12 on: September 20, 2006, 01:15:14 PM »
Quote from: "ldkeen"
John,
Here's something interesting I found
Quote
and rebuild the initrd so volumes will be available during boot. You shouldn't have to do this, but lvm won't detect devices unless this is done.

[root@erasrb2 ~]# cd /boot
[root@erasrb2 boot]# cp initrd-2.6.9-5.0.5.ELsmp.img initrd-2.6.9-5.0.5.ELsmp.img.bak
[root@erasrb2 boot]# rm initrd-2.6.9-5.0.5.ELsmp.img
[root@erasrb2 boot]# mkinitrd initrd-2.6.9-5.0.5.ELsmp.img 2.6.9-5.0.5.ELsmp

The above came from here http://narawiki.umiacs.umd.edu/twiki/bin/view/Lab/BrickNotes#RedhatEnterprise4
Just be careful, you might hose your system. Are you able to test?
Lloyd



WORKED!  Thanks heaps. That seemed to be the problem!
I added to modprobe.conf:
alias scsi_hostadapter1 sata_promise
Now it finds vg_data on boot.

Tar muchly
JD

Best change yourself to legend status

Offline ldkeen

lvm2 volume group and persistence
« Reply #13 on: September 25, 2006, 08:58:08 PM »
John,
Looks like the latest CentOS 4.4 (AKA RHEL 4 update 4) includes support for your controller. Here's a snip from the release notes:
Quote
Changes to Drivers and Hardware Support
This update includes bug fixes for a number of drivers. The more significant driver updates are listed below.

The following device drivers are added or updated in Red Hat Enterprise Linux 4 Update 4:

Added support in the LSI Logic MegaRAID Serial Attached SCSI HBA (megaraid_sas) driver for volumes larger than 2.2 Terabytes (TB).

Added support in the Adaptec aic7xxx and aic79xx family drivers for volumes larger than 2.2TB

Updated Emulex LightPulse Fibre Channel (lpfc) driver and added the LightPulse Fibre Channel IOCTL (lpfcdfc) management module

Added Linux hardware system monitoring (lm_sensors) drivers

Added Promise SATA300 TX4 controller support in the sata_promise driver

Added Marvell MV88SX5081 Serial ATA controller (sata_mv) driver

Added Promise SuperTrak RAID controller (shasta.ko) driver

Added Marvell Thumper Serial ATA driver

Added Multipath over Channel-to-channel (ctcmpc) driver

Added Demand Based Switching (DBS) driver

Added Remote Supervisor Adapter Service processor (ibmasm) driver

Added support for Broadcom BCM5751 Gigabit Ethernet adapter to the tg3 driver

Updated IBM Virtual Ethernet (ibmveth) driver to support device bonding

Added support for Realtek ALC260 and ALC262 sound devices to realtek driver

Updated Intel PRO/1000 (e1000) network driver

Updated Intel PRO/100 (e100) network driver

Updated IBM ServeRAID SCSI controller (ips) driver

Updated QLogic Fibre Channel HBA (qla2xxx) driver and added the QLogic Fibre Channel IOCTL (qioctlmod) management module

Updated IBM iSeries and IBM pSeries disk controller (ipr) driver

Updated Dell Systems Management Base (dcdbas) driver


You can read the full announcement here:
http://mirror.centos.org/centos/4.4/os/i386/RELEASE-NOTES-en.html
and here:
http://mirror.centos.org/centos/4/docs/html/release-notes/as-x86/RELEASE-NOTES-U4-en.html
Regards Lloyd

johndknm

lvm2 volume group and persistence
« Reply #14 on: September 26, 2006, 12:04:49 AM »
Thanks Lloyd
It's working like a treat as is, with the mkinitrd alterations.

Now I've got more newbie problems!  (A reflection of inexperience at symlinking directories.)

Also, with the mkinitrd changes I have done, will a backup from the server-manager preserve all those things?

JD