
SME8 Storage under KVM?


Jáder:
AFAIK this was correct.
And this wiki page: http://wiki.contribs.org/Raid
says that 4 HDDs would be RAID5 + spare... so when you use nospare it should be a 4-HDD RAID5... I think it's a bug and it should be reported.
Let the developers say if it's not a bug.

I don't think the RAID5 will grow on its own, because it thinks that device is a spare!
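
A quick way to check whether md2 really treats that fourth device as a spare (a small sketch, not from the original post; the device and array names match the output shown later in this thread):

--- Code: ---# a spare is listed in /proc/mdstat with an "(S)" suffix after its name
grep -A 2 '^md2' /proc/mdstat

# mdadm reports it explicitly
mdadm --detail /dev/md2 | grep -E 'Raid Devices|Spare Devices'

--- End code ---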

fpausp:
I installed another machine with 4x32GB HDDs, using the same boot options "sme raid5 nospare". I forgot to say that I used a modified SME8 ISO (SME8A18V31128.iso); this is SME8 with Asterisk ...

I got the same result: md1 has 4 HDDs and md2 has 3 plus 1 spare ...

I tried to grow it with:


--- Code: ---# add the spare as an active member and reshape the RAID5 to 4 devices
mdadm --grow /dev/md2 --raid-devices=4

--- End code ---

And that looks good; at the moment it is growing ...


--- Code: ---cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 vda1[0] vdb1[1] vdc1[2] vdd1[3]
      104320 blocks [4/4] [UUUU]

md2 : active raid5 vdd2[2] vdc2[3] vdb2[1] vda2[0]
      66894336 blocks super 0.91 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      [=>...................]  reshape =  7.5% (2530816/33447168) finish=236.1min speed=2179K/sec

unused devices: <none>

--- End code ---
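
Once that reshape finishes, the extra space still has to be passed up to the filesystem. A minimal sketch, assuming this first install kept the default SME8 LVM layout (the volume group "main" and logical volume "root" names are assumptions; verify them with vgs/lvs before running anything):

--- Code: ---# wait until /proc/mdstat no longer shows a reshape in progress, then:
pvresize /dev/md2                     # grow the LVM physical volume to the new md2 size
lvresize -l +100%FREE /dev/main/root  # give the free extents to the root LV (name assumed)
resize2fs /dev/main/root              # grow ext3 online to fill the LV

--- End code ---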



Stefano:
before installing, it's a good idea to take a look at the boot options (F2, F3, etc.)

in SME8, to create a no-spare RAID, you must use the

--- Code: ---sme raid=5 spares=0

--- End code ---
syntax

AFAIR it's the same in the latest SME7 releases (7.5.1 for sure, just checked)

IMO the wiki should be updated
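
To double-check the result after installing with those options (a small verification sketch, not from the original post):

--- Code: ---# no device should carry an "(S)" spare marker
cat /proc/mdstat

# and every array should report zero spares
for md in /dev/md[0-9]*; do
    echo "== $md"
    mdadm --detail "$md" | grep -E 'Raid Level|Raid Devices|Spare Devices'
done

--- End code ---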

fpausp:
I fired up another VM; my setup on this one is 6x8GB HDDs. I used "sme raid=5 sme spares=0 sme nolvm" as boot options.


The RAID looks like this:


--- Code: ---cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 vdf1[5] vde1[4] vdd1[3] vdc1[2] vdb1[1] vda1[0]
      104320 blocks [6/6] [UUUUUU]

md2 : active raid5 vdf2[5] vde2[4] vdd2[3] vdc2[2] vdb2[1] vda2[0]
      4055040 blocks level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]

md3 : active raid5 vdf3[5] vde3[4] vdd3[3] vdc3[2] vdb3[1] vda3[0]
      37350400 blocks level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]

unused devices: <none>

--- End code ---



I got only 32GB?


--- Code: ---# df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/md3               35G  1.7G   32G   5% /
/dev/md1               99M   12M   83M  13% /boot
tmpfs                 990M     0  990M   0% /dev/shm

--- End code ---



The RAID in detail:


--- Code: ---mdadm --detail /dev/md1

/dev/md1:
        Version : 0.90
  Creation Time : Tue May  1 11:48:10 2012
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue May  1 12:13:44 2012
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

           UUID : 2721bc9b:781c581e:2148e22c:e043485f
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0     253        1        0      active sync   /dev/vda1
       1     253       17        1      active sync   /dev/vdb1
       2     253       33        2      active sync   /dev/vdc1
       3     253       49        3      active sync   /dev/vdd1
       4     253       65        4      active sync   /dev/vde1
       5     253       81        5      active sync   /dev/vdf1



mdadm --detail /dev/md2

/dev/md2:
        Version : 0.90
  Creation Time : Tue May  1 11:48:10 2012
     Raid Level : raid5
     Array Size : 4055040 (3.87 GiB 4.15 GB)
  Used Dev Size : 811008 (792.13 MiB 830.47 MB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue May  1 12:01:59 2012
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 1065bcce:4337aec8:28119fd4:9c8a16c2
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0     253        2        0      active sync   /dev/vda2
       1     253       18        1      active sync   /dev/vdb2
       2     253       34        2      active sync   /dev/vdc2
       3     253       50        3      active sync   /dev/vdd2
       4     253       66        4      active sync   /dev/vde2
       5     253       82        5      active sync   /dev/vdf2



mdadm --detail /dev/md3

/dev/md3:
        Version : 0.90
  Creation Time : Tue May  1 11:48:13 2012
     Raid Level : raid5
     Array Size : 37350400 (35.62 GiB 38.25 GB)
  Used Dev Size : 7470080 (7.12 GiB 7.65 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Tue May  1 12:25:52 2012
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : cb45f02d:8634e6a0:ff716c68:6af753b4
         Events : 0.312

    Number   Major   Minor   RaidDevice State
       0     253        3        0      active sync   /dev/vda3
       1     253       19        1      active sync   /dev/vdb3
       2     253       35        2      active sync   /dev/vdc3
       3     253       51        3      active sync   /dev/vdd3
       4     253       67        4      active sync   /dev/vde3
       5     253       83        5      active sync   /dev/vdf3

--- End code ---


These are the mountpoints:


--- Code: ---mount
/dev/md3 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

--- End code ---



I got only 32GB, is that correct?

Stefano:
how much memory does that VM have?

anyway, theoretically 8x(6-1)=40 GB is the array capacity
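
Roughly where that space goes, using the array sizes reported by mdadm earlier in the thread (a sketch only; md2 is assumed to be swap on this nolvm install):

--- Code: ---# array sizes from the mdadm --detail output above (block counts are KiB)
awk 'BEGIN { printf "md2: %.2f GiB (assumed swap)\n",  4055040/1048576 }'
awk 'BEGIN { printf "md3: %.2f GiB (root)\n",         37350400/1048576 }'
# ~3.9 GiB + ~35.6 GiB accounts for the theoretical 40 GB (minus /boot);
# ext3 overhead and the default 5% reserved blocks on / are why df only
# shows about 32G available.

--- End code ---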
