Koozali.org: home of the SME Server
Obsolete Releases => SME 8.x Contribs => Topic started by: fpausp on April 30, 2012, 12:27:02 PM
-
Hi,
I'm trying to build 6-8TB of storage with SME8b7. I used 4x2TB (virtual) disks on my KVM host and installed SME8 with the boot option "sme nospare".
At the moment I have only 3.8TB of space:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/main-root
3.8T 2.2G 3.6T 1% /
/dev/md1 99M 19M 76M 20% /boot
tmpfs 1.8G 0 1.8G 0% /dev/shm
My HDD setup looks like this:
fdisk -lu
Disk /dev/vda: 2147.4 GB, 2147483648000 bytes
255 heads, 63 sectors/track, 261083 cylinders, total 4194304000 sectors
Units = sectors of 1 × 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/vda1 * 63 208844 104391 fd Linux raid autodetect
/dev/vda2 208845 4194298394 2097044775 fd Linux raid autodetect
Disk /dev/vdb: 2147.4 GB, 2147483648000 bytes
255 heads, 63 sectors/track, 261083 cylinders, total 4194304000 sectors
Units = sectors of 1 × 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/vdb1 * 63 208844 104391 fd Linux raid autodetect
/dev/vdb2 208845 4194298394 2097044775 fd Linux raid autodetect
Disk /dev/vdc: 2147.4 GB, 2147483648000 bytes
255 heads, 63 sectors/track, 261083 cylinders, total 4194304000 sectors
Units = sectors of 1 × 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/vdc1 * 63 208844 104391 fd Linux raid autodetect
/dev/vdc2 208845 4194298394 2097044775 fd Linux raid autodetect
Disk /dev/vdd: 2147.4 GB, 2147483648000 bytes
255 heads, 63 sectors/track, 261083 cylinders, total 4194304000 sectors
Units = sectors of 1 × 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/vdd1 * 63 208844 104391 fd Linux raid autodetect
/dev/vdd2 208845 4194298394 2097044775 fd Linux raid autodetect
Disk /dev/md2: 4294.7 GB, 4294747095040 bytes
2 heads, 4 sectors/track, 1048522240 cylinders, total 8388177920 sectors
Units = sectors of 1 × 512 = 512 bytes
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders, total 208640 sectors
Units = sectors of 1 × 512 = 512 bytes
Disk /dev/md1 doesn't contain a valid partition table
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 vda1[0] vdb1[1] vdc1[2] vdd1[3]
104320 blocks [4/4] [UUUU]
md2 : active raid5 vdd2[3] vdc2[4](S) vdb2[1] vda2[0]
4194088960 blocks level 5, 256k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.8% (18356480/2097044480) finish=2935.9min speed=11798K/sec
unused devices: <none>
Is it normal to get only 3.8TB?
How can I get more space on an installed system?
-
AFAIK you're using a RAID1 config, so you have half of the physical storage available... and that's correct: ~8TB * 0.5 = ~4TB.
If you want all the space, I think you should try: "noraid nolvm"
I'm not sure why you don't use nolvm... it's nice to be able to expand disks later... but with virtual disks maybe you can do it another way.
-
Hi jader,
AFAIK you're using a RAID1 config, so you have half of the physical storage available... and that's correct: ~8TB * 0.5 = ~4TB.
I think I'm using RAID5 on md2:
md2 : active raid5 vdd2[3] vdc2[4](S) vdb2[1] vda2[0]
4194088960 blocks level 5, 256k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.8% (18356480/2097044480) finish=2935.9min speed=11798K/sec
Do you know what the "(S)" on vdc2 means? Is it a spare disk?
-
Yes... sorry... it's true... you're on RAID1 on md1 and RAID5 with a SPARE on md2.
And since you said it was installed using "nospare", I think this could be a bug.
BTW: why is your RAID rebuilding? Wasn't that done at startup?
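As a quick sanity check (my own arithmetic, assuming ~2TB per disk): with one of the four disks held as a spare, md2 is a RAID5 over only 3 devices, and RAID5 usable space is (n-1) x disk size:

```shell
# RAID5 usable capacity with 3 active members of ~2TB each
# (the fourth disk is the spare, so it contributes nothing)
active=3
disk_tb=2
echo "usable: $(( (active - 1) * disk_tb )) TB"
```

That gives ~4TB decimal, i.e. roughly the 3.6 TiB that df -h reports.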
Regards
Jáder
-
I tried "sme raid5 nospare", maybe this was not correct?
I think I should wait until the sync process is finished; I will then grow the array (md2) with:
mdadm --grow /dev/md2 --raid-devices=4
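For what it's worth, growing the md array alone won't make the space usable; since this install uses LVM (df shows /dev/mapper/main-root), the physical volume, logical volume and filesystem also have to be grown afterwards. A rough sketch of the whole sequence (untested here; device and VG names are taken from the output above):

```shell
# turn the spare into an active member and reshape to 4 devices
mdadm --grow /dev/md2 --raid-devices=4

# wait for the reshape to finish before touching LVM
watch cat /proc/mdstat

# grow the LVM physical volume to the new size of md2
pvresize /dev/md2

# extend the root logical volume over all the freed-up space
lvextend -l +100%FREE /dev/main/root

# grow the ext3 filesystem (online resize)
resize2fs /dev/main/root
```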
-
AFAIK this was correct.
And this wiki page: http://wiki.contribs.org/Raid
says 4 HDDs would be RAID5 + spare... so when you use nospare it should be a 4-HDD RAID5... I think it's a bug... and it should be reported.
Let the developers say whether it's a bug or not.
I don't think RAID5 will grow, because it thinks there is a spare device!
-
I installed another machine with 4x32GB HDDs, using the same boot options "sme raid5 nospare". I forgot to say I used a modified SME8 ISO (SME8A18V31128.iso); this is SME8 with Asterisk...
I got the same result: md1 has 4 HDDs and md2 has 3 plus 1 spare...
I tried to grow it with:
mdadm --grow /dev/md2 --raid-devices=4
And that looks good; at the moment it is growing...
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 vda1[0] vdb1[1] vdc1[2] vdd1[3]
104320 blocks [4/4] [UUUU]
md2 : active raid5 vdd2[2] vdc2[3] vdb2[1] vda2[0]
66894336 blocks super 0.91 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
[=>...................] reshape = 7.5% (2530816/33447168) finish=236.1min speed=2179K/sec
unused devices: <none>
-
Before installing, it's a good idea to take a look at the boot options (F2, F3, etc.).
In SME8, to create a no-spare RAID, you must use the syntax
sme raid=5 spares=0
AFAIR it's the same in the latest SME7 releases (7.5.1 for sure, just checked).
IMO the wiki should be updated.
-
I fired up another VM; this one has 6x8GB HDDs. I used "sme raid=5 sme spares=0 sme nolvm" as boot options.
The raid looks like:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 vdf1[5] vde1[4] vdd1[3] vdc1[2] vdb1[1] vda1[0]
104320 blocks [6/6] [UUUUUU]
md2 : active raid5 vdf2[5] vde2[4] vdd2[3] vdc2[2] vdb2[1] vda2[0]
4055040 blocks level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]
md3 : active raid5 vdf3[5] vde3[4] vdd3[3] vdc3[2] vdb3[1] vda3[0]
37350400 blocks level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
I only got 32GB?
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md3 35G 1.7G 32G 5% /
/dev/md1 99M 12M 83M 13% /boot
tmpfs 990M 0 990M 0% /dev/shm
Raid in detail:
mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue May 1 11:48:10 2012
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Used Dev Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue May 1 12:13:44 2012
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
UUID : 2721bc9b:781c581e:2148e22c:e043485f
Events : 0.2
Number Major Minor RaidDevice State
0 253 1 0 active sync /dev/vda1
1 253 17 1 active sync /dev/vdb1
2 253 33 2 active sync /dev/vdc1
3 253 49 3 active sync /dev/vdd1
4 253 65 4 active sync /dev/vde1
5 253 81 5 active sync /dev/vdf1
mdadm --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Tue May 1 11:48:10 2012
Raid Level : raid5
Array Size : 4055040 (3.87 GiB 4.15 GB)
Used Dev Size : 811008 (792.13 MiB 830.47 MB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue May 1 12:01:59 2012
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 256K
UUID : 1065bcce:4337aec8:28119fd4:9c8a16c2
Events : 0.6
Number Major Minor RaidDevice State
0 253 2 0 active sync /dev/vda2
1 253 18 1 active sync /dev/vdb2
2 253 34 2 active sync /dev/vdc2
3 253 50 3 active sync /dev/vdd2
4 253 66 4 active sync /dev/vde2
5 253 82 5 active sync /dev/vdf2
mdadm --detail /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Tue May 1 11:48:13 2012
Raid Level : raid5
Array Size : 37350400 (35.62 GiB 38.25 GB)
Used Dev Size : 7470080 (7.12 GiB 7.65 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Tue May 1 12:25:52 2012
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 256K
UUID : cb45f02d:8634e6a0:ff716c68:6af753b4
Events : 0.312
Number Major Minor RaidDevice State
0 253 3 0 active sync /dev/vda3
1 253 19 1 active sync /dev/vdb3
2 253 35 2 active sync /dev/vdc3
3 253 51 3 active sync /dev/vdd3
4 253 67 4 active sync /dev/vde3
5 253 83 5 active sync /dev/vdf3
These are the mountpoints:
mount
/dev/md3 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
I only got 32GB, is that correct?
-
How much memory does that VM have?
Anyway, theoretically 8x(6-1)=40GB is the array capacity.
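Spelling that out (just the raw arithmetic; filesystem and chunk overhead will eat a little of it, and the space is split across md2 and md3):

```shell
disks=6
disk_gb=8
# RAID5 keeps one disk's worth of parity, so usable = (n - 1) * size
echo "$(( (disks - 1) * disk_gb )) GB"
```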
-
Yes, but df -h says 32G on md3; normally (with LVM) I have only md1 and md2.
Do you know anything about md3?
-
You have almost 40GB: 32GB on md3 + 4GB on md2 + 1GB on md1.
I think everything is OK... but AFAIK you could start it as "sme nolvm raid=5 spares=0" without repeating "sme".
-
Yes, the last question is: with or without "nolvm" under a VM?
I think I will use 6x2TB for the production system (6 HDDs are the maximum I can use on the virtual host (Proxmox), and ~2TB is the biggest size per HDD that SME can take).
regards
fpausp