Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: mhult on July 27, 2010, 03:03:05 PM
-
Hi!
I have been an SME user for ten years, always running software RAID 5 with excellent reliability and performance.
Now I am taking it up a notch: I want to install SME Server on a VMware ESXi 4.1 box with hardware RAID. I have all the storage in the physical box available to ESXi as a single datastore of about 5.5 TB, and I want to use 4 TB for my SME Server virtual machine. Unfortunately, ESXi will not allow virtual disks larger than 2 TB. Fine, I'll just give SME Server two 2 TB disks, and the SME installer will use lvm or something to treat both disks as a single (close to) 4 TB file system. Nope! The only options available when starting the installer seem to be RAID 1, 5, 6 or no RAID. Specifying no RAID will use only one disk (2 TB), with the other disk left unused.
So what are my options? This is what I was able to come up with:
1. Install SME on one disk. Manually format the other disk and mount it in an ibay (easy but somewhat inflexible).
2. Install SME on one disk. Configure lvm to use the other disk and grow the file system (I don't know if this is possible, and I know nothing about lvm).
3. Somehow tweak the installer to use both disks and create a large file system in /dev/mapper/main-root (I don't know how to do this).
Is there anyone out there who knows the best way to solve this?
Thanks,
Mattias
-
Install SME on a small virtual disk, then follow the AddExtraHardDisk (http://wiki.contribs.org/AddExtraHardDisk) howto with the two big ones.
btw, if ESX 4.x really doesn't support virtual HDs bigger than 2 TB, it's pretty useless..
if you are only testing, take a look at proxmox VE (http://www.proxmox.com/products/proxmox-ve)
-
I was able to solve this according to alternative 2 in my original post - use lvm to grow the root file system with a new disk. Here is a brief write-up of how I did this. It might be useful to someone in a similar situation - or anyone wishing to add a disk to an already running SME Server.
The scenario:
You want to install SME Server as a virtual machine. The host has hardware RAID, so you don't want to use the (otherwise excellent) built-in software RAID in SME Server (it would only waste storage space and possibly hurt read/write performance). You want the SME Server to have more than 2 TB of available storage (in my case 4 TB).
The problem:
Current virtualisation platforms (VMware vSphere, Microsoft Hyper-V, Citrix Xen) cannot provide virtual hard disks larger than 2 TB. I think this is due to some limitation in SCSI (can someone confirm this?). The SME installer cannot utilise multiple hard disks, except for redundancy (RAID 1, 5 & 6). Otherwise a software JBOD or even a RAID 0 could have been an alternative.
The solution:
Install SME server on one disk. Later add another disk and use lvm and resize2fs to grow the root file system to utilise both disks as a single device.
The details:
1. Create a virtual machine with one 2 TB virtual hard disk. Since this virtual disk resides on a hardware raid array, it is already redundant (unless the array is a RAID 0).
2. Install SME Server with the command "sme raid=none". IMPORTANT! If you don't specify "raid=none" the SME installer will create a RAID 1 device with only one disk, and the whole procedure will be somewhat more complicated. You will then have another abstraction layer in addition to the ones assumed here (virtual hard disk, partition, physical volume, logical volume, file system).
3. When the installation is complete and your new SME server is up and running, power down the SME Server and add a second 2 TB disk to the virtual machine.
4. Boot the SME Server and login as root.
5. Create a partition on the new disk (/dev/sdb) of type "8e" (lvm) using fdisk.
6. Add the new partition as a physical volume using the command "pvcreate /dev/sdb1".
7. Add the new physical volume to the volume group "main" using "vgextend main /dev/sdb1".
8. Extend the logical volume "root" using the command "lvextend -L3999G /dev/main/root". The number "3999" should be adjusted to reflect the combined size of the /dev/sda2 and /dev/sdb1 partitions in your system. If you give a number that is too large you will simply get an error message and can retry until you specify a size that fits on your partitions.
9. Power down the SME Server and reboot using the installation CD to the rescue system by typing "sme rescue" at the boot prompt.
10. Choose "skip" when offered to mount the SME system.
11. Activate the logical volume using the command "lvm vgchange -a y main"
12. Extend the file system using the command "resize2fs /dev/main/root". You may have to run "e2fsck -f /dev/main/root" first. These commands take some time to complete, especially if your disks are large and/or slow.
13. Type "exit" to reboot and start SME Server normally.
14. Check the size of the root file system by typing "df -h" to verify that you have been successful.
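The command steps above can be collected into a single sequence for reference. This is a sketch of the procedure, not a script to paste blindly: it assumes the new disk really shows up as /dev/sdb, that your system uses the default SME volume group and logical volume names (main/root), and the 3999G figure must be adjusted to match your own partition sizes as described in step 8.

```shell
# Steps 5-8, run on the live system as root:
fdisk /dev/sdb                    # create one partition, type 8e (Linux LVM)
pvcreate /dev/sdb1                # register the partition as a physical volume
vgextend main /dev/sdb1           # add it to the "main" volume group
lvextend -L3999G /dev/main/root   # grow the LV; adjust 3999G to your sizes

# Steps 11-12, run from the rescue environment
# (boot the install CD with "sme rescue" and choose "skip" at the mount prompt):
lvm vgchange -a y main            # activate the volume group
e2fsck -f /dev/main/root          # force a file system check first
resize2fs /dev/main/root          # grow ext3 to fill the logical volume

# Step 14, after rebooting normally:
df -h /                           # verify the new root file system size
```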
The reason why steps 9 - 11 and 13 are necessary is that the version of resize2fs that comes with SME Server does not support online resizing of ext3 file systems. Supposedly resize2fs version 1.39 can do this. SME Server 7.51 comes with version 1.35. SME Server 8.0 beta5 comes with 1.39, but I have not had the time to test if online resizing works there yet.
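If you want to know in advance whether the rescue-CD detour is needed, you can compare the installed e2fsprogs version against 1.39, which (per the above) is the first release whose resize2fs can grow a mounted ext3 file system online. A minimal sketch; the "1.35" below is a hard-coded stand-in for the output of something like `rpm -q --qf '%{VERSION}' e2fsprogs` on SME Server 7.51:

```shell
#!/bin/sh
# Compare the installed e2fsprogs version against 1.39, the assumed
# minimum for online (mounted) ext3 resizing.
installed="1.35"   # stand-in for the version actually installed
required="1.39"
# sort -V orders version strings numerically; the smaller one comes first
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -1)
if [ "$lowest" = "$required" ]; then
    echo "online resize supported"
else
    echo "offline resize required (boot the rescue CD)"
fi
```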
Any feedback to this write-up is welcome!
-
2. Install SME Server with the command "sme raid=none".
AFAIR it should be "sme noraid" but I don't know if both are supported
The reason why steps 9 - 11 and 13 are necessary is that the version of resize2fs that comes with SME Server does not support online resizing of ext3 file systems. Supposedly resize2fs version 1.39 can do this. SME Server 7.51 comes with version 1.35. SME Server 8.0 beta5 comes with 1.39, but I have not had the time to test if online resizing works there yet.
again, AFAIK it should.. once I've followed this (http://wiki.contribs.org/Raid#Upgrading_the_Hard_Drive_Size) howto and it worked.. maybe something is different from 7 to 8?
Any feedback to this write-up is welcome!
please get a wiki account and write this howto there, thank you
-
The correct way to do this is with VMware extents; I can't remember exactly off the top of my head, but I think you can configure a volume of up to 64 TB (you probably want to double-check that).
Cheers,
Josh
-
Extents are used to grow datastores, i.e. the virtual file systems used by ESXi to store virtual machines, virtual hard disks, ISO images and any other files used by the host. As you mention, by using extents to grow a datastore across multiple LUNs (for example local SCSI disks, NFS shares or RAID arrays in SANs), a virtual file system can reach 64 TB. Remember, this is the virtual file system used by the host. However, the maximum file size possible on a datastore is 2 TB. Virtual hard disks available to guests (virtual machines) reside in single files, and cannot be extended to span multiple files in the datastore. Thus, virtual hard disks are limited to 2 TB. As far as I know, the only way to make file systems in guests larger than 2 TB is to give the guest multiple virtual hard disks, and then use the capabilities of the guest OS (for example lvm in Linux) to create a larger file system by combining those virtual hard disks.
-
more info here (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003565)
-
Apologies to the OP, I misread the question. Yes indeed, with the 8 MB block size the maximum vmdk file size is 2 TB.
-
more info here (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003565)
This was interesting. Apparently the older VMFS-2 format supports larger virtual disks than VMFS-3.