Koozali.org: home of the SME Server

Legacy Forums => General Discussion (Legacy) => Topic started by: Dean Larkin on July 10, 2002, 09:10:35 PM

Title: Raid questions
Post by: Dean Larkin on July 10, 2002, 09:10:35 PM
First, let me apologize in advance for the length of this post.

I installed v5.1.2 with the software RAID 1 option. I have two 60 GB IDE disks (not the same model number, but the same size and manufacturer; one is ATA100 and one ATA133).

After install, the partition tables looked like this:
  Disk /dev/hda: 255 heads, 63 sectors, 7476 cylinders
  Units = cylinders of 16065 * 512 bytes
     Device Boot    Start       End    Blocks   Id  System
  /dev/hda1   *         1        33    265041   fd  Linux raid autodetect
  /dev/hda2            34      7476  59785897+   5  Extended
  /dev/hda5            34        66    265041   fd  Linux raid autodetect
  /dev/hda6            67        68     16033+  fd  Linux raid autodetect
  /dev/hda7            69        70     16033+  fd  Linux raid autodetect
  /dev/hda8            71      3774  29752348+  fd  Linux raid autodetect

  Disk /dev/hdc: 255 heads, 63 sectors, 7299 cylinders
  Units = cylinders of 16065 * 512 bytes
     Device Boot    Start       End    Blocks   Id  System
  /dev/hdc1   *         1      3704  29752348+  fd  Linux raid autodetect

Raidtab looked like this:
  [root@server /root]# cat /etc/raidtab
  raiddev             /dev/md0
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda6
      raid-disk     0
      device          /dev/hda7
      raid-disk     1
  raiddev             /dev/md1
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda8
      raid-disk     0
      device          /dev/hdc1
      raid-disk     1
  raiddev             /dev/md2
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda1
      raid-disk     0
      device          /dev/hda5
      raid-disk     1

And finally, fstab looked like this:
  [root@server /root]# cat /etc/fstab
  /dev/md1     /             ext2    usrquota,grpquota 1 1
  /dev/md0     /boot         ext2    defaults          1 2
  /dev/cdrom   /mnt/cdrom    iso9660 noauto,owner,ro   0 0
  /dev/fd0     /mnt/floppy   auto    noauto,owner      0 0
  none         /proc         proc    defaults          0 0
  none         /dev/pts      devpts  gid=5,mode=620    0 0
  /dev/md2     swap          swap    defaults          0 0


First question: what is the purpose of RAID if the mirrored partitions are on the same drive? I thought the benefit of RAID 1 was the ability to keep working if a drive failed. In this case, the swap and /boot partitions are each mirrored onto a second partition on the same physical disk (both halves of md0 and md2 live on /dev/hda). I can't see how I could keep working if the primary drive failed.
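
A quick way to double-check which physical partitions back each md device is /proc/mdstat (and lsraid, if that utility is installed; /dev/md0 below is just the first array as an example):

  # show each md device, its member partitions and resync state
  cat /proc/mdstat
  # query a single array with the raidtools lsraid utility, if present
  lsraid -a /dev/md0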

The second question is: why did e-smith use only half my drive space? I manually added new partitions, joined them in a RAID 1 array, and mounted them (roughly the steps sketched after the new fstab listing below). My partition tables now look like this:

  Disk /dev/hda: 255 heads, 63 sectors, 7476 cylinders
  Units = cylinders of 16065 * 512 bytes
     Device Boot    Start       End    Blocks   Id  System
  /dev/hda1   *         1        33    265041   fd  Linux raid autodetect
  /dev/hda2            34      7476  59785897+   5  Extended
  /dev/hda5            34        66    265041   fd  Linux raid autodetect
  /dev/hda6            67        68     16033+  fd  Linux raid autodetect
  /dev/hda7            69        70     16033+  fd  Linux raid autodetect
  /dev/hda8            71      3774  29752348+  fd  Linux raid autodetect
  /dev/hda9          3775      7369  28876806   fd  Linux raid autodetect


  Disk /dev/hdc: 255 heads, 63 sectors, 7299 cylinders
  Units = cylinders of 16065 * 512 bytes
     Device Boot    Start       End    Blocks   Id  System
  /dev/hdc1   *         1      3704  29752348+  fd  Linux raid autodetect
  /dev/hdc2          3705      7299  28876837+  fd  Linux raid autodetect
 
Raidtab now looks like this:
  [root@server /root]# cat /etc/raidtab
  raiddev             /dev/md0
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda6
      raid-disk     0
      device          /dev/hda7
      raid-disk     1
  raiddev             /dev/md1
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda8
      raid-disk     0
      device          /dev/hdc1
      raid-disk     1
  raiddev             /dev/md2
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda1
      raid-disk     0
      device          /dev/hda5
      raid-disk     1
  raiddev             /dev/md3
  raid-level                  1
  nr-raid-disks               2
  chunk-size                  64k
  persistent-superblock       1
  #nr-spare-disks     0
      device          /dev/hda9
      raid-disk     0
      device          /dev/hdc2
      raid-disk     1

And fstab now looks like this:
  [root@server /root]# cat /etc/fstab
  /dev/md1     /             ext2    usrquota,grpquota 1 1
  /dev/md3     /mnt/md3      ext2    defaults          1 1
  /dev/md0     /boot         ext2    defaults          1 2
  /dev/cdrom   /mnt/cdrom    iso9660 noauto,owner,ro   0 0
  /dev/fd0     /mnt/floppy   auto    noauto,owner      0 0
  none         /proc         proc    defaults          0 0
  none         /dev/pts      devpts  gid=5,mode=620    0 0
  /dev/md2     swap          swap    defaults          0 0
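
For the record, the steps I used were roughly the following (reconstructed from memory, so treat the exact commands as a sketch rather than a recipe):

  # create the new partitions (type fd, Linux raid autodetect)
  fdisk /dev/hda          # added /dev/hda9 in the remaining free space
  fdisk /dev/hdc          # added /dev/hdc2 in the remaining free space

  # describe the new mirror in /etc/raidtab (the /dev/md3 stanza above),
  # then build it, make a filesystem on it and mount it
  mkraid /dev/md3
  mke2fs /dev/md3
  mkdir /mnt/md3
  mount /dev/md3 /mnt/md3

  # finally, add the /dev/md3 line to /etc/fstab so it mounts at boot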


While this seems to work, it is not ideal for a couple of reasons (in addition to the swap and /boot partitions being mirrored on the same physical drive...). Firstly, there are the extra setup steps required should the system ever need to be reinstalled (or upgraded - what would an upgrade to 5.5 do to my existing disk structures, since they have been changed manually since I installed?). Secondly, I would rather have one large partition for user files than two separate ones. Ideally, the root partition would be relatively small (say around 5 GB), with the remaining space in a separate partition for user files. This would prevent the root partition from filling up and crashing the server - though I would be happy with just a single large partition.
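
To illustrate, the sort of layout I have in mind would look something like this (the device names, sizes and mount point are only examples, not what the installer created):

  #   /dev/md0   /boot    small, mirrored across hda and hdc
  #   /dev/md2   swap     mirrored across hda and hdc
  #   /dev/md1   /        ~5 GB, mirrored across hda and hdc
  #   /dev/md3   /home    all remaining space (user files), mirrored across hda and hdc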

So, last question: is there any way to modify the partition structure once the system has been installed - not by adding new partitions, but by moving the mirrored swap and /boot partitions to the second drive and enlarging the root partition to use all the disk space?
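
For the /boot and swap mirrors at least, I imagine something along these lines might work (untested, and the new partition names /dev/hdc3 and /dev/hdc4 are just guesses):

  # first create partitions on /dev/hdc of the same size as /dev/hda7 and
  # /dev/hda5, type fd (say /dev/hdc3 and /dev/hdc4), then for /boot:
  raidsetfaulty /dev/md0 /dev/hda7   # mark the second on-hda copy as failed
  raidhotremove /dev/md0 /dev/hda7   # drop it from the array
  raidhotadd    /dev/md0 /dev/hdc3   # add the copy on the second drive; it resyncs
  # repeat the same sequence for the swap mirror /dev/md2, replacing
  # /dev/hda5 with /dev/hdc4, and update /etc/raidtab to match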
Title: Re: Raid questions
Post by: ClaudioG on July 11, 2002, 09:47:18 PM
Hi, IMHO the problem is:

> Disk /dev/hda: 255 heads, 63 sectors, 7476 cylinders
> Disk /dev/hdc: 255 heads, 63 sectors, 7299 cylinders

I have had problems when the number of cylinders differs between the two drives.

You can change the number of cylinders with fdisk.
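
Something like this (a sketch only; check the values before writing anything):

  fdisk /dev/hdc
    x          # expert mode
    c          # set the number of cylinders
    r          # return to the main menu
    w          # write the table and exit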

Regards,
ClaudioG
Title: Re: Raid questions
Post by: marc on July 12, 2002, 08:14:11 AM
Both hard disks must have exactly the same geometry (heads/sectors/cylinders), or else it will not work.

Cheers

Title: Re: Raid questions
Post by: leroy on July 22, 2002, 02:34:08 AM
According to the 5.5 docs: "To enable software RAID1 support, you must first have two disks that are either the same size or capable of having partitions of the same size."

Is this not correct?