Koozali.org: home of the SME Server
Contribs.org Forums => Koozali SME Server 10.x => Topic started by: cpits on August 29, 2021, 05:20:06 AM
-
Hello all,
Can anyone tell me if this is possible short of re-installing the OS?
As stated in the subject line, this server was built to replace an ageing Zentyal developer build running as our production server. After running updates for Zentyal the server software borked leaving minimal access. Thus the new build!
We have presently several hundred Gig of data on the old server which has to migrate across so we need to have the storage boosted for this SME Server 10!
So any help/advice would be appreciated.
Sincerely,
Trudi
-
Not exactly sure what you have set up; the subject doesn't explain it for me..
Do you have a working sme10 install that currently has a raid1 setup using two NVMe SSDs, of unknown size, with OS and data?
Do you want to add 4x 2TB drives (SSD or spinning metal) in a raid5/6 that will hold OS and data, or keep the NVMe pair and add the drives as data drives only?
Can you flesh out what you have and what you want?
-
Hi TerryF,
OK... SME 10 is installed on a hardware raid1 (dual 500GB NVMe SSDs). We have 4x 2TB SATA HDDs and wish to expand the raid to raid5, making use of all the space available. However, the Anaconda installer is a little different to what I remember: when installing the OS, SME 9.x gave you the option to select a raid5 environment on install.
If we can have the 4x drives as a data source and accessible that should work as well.
Spending so much time working on our clients' systems, we tend to get a bit lax on our own... (forehead slap)!
Any information will be helpful, thanks. :)
-
Leave sme10 on the NVMe SSDs with the operating system as is. Build a raid5/6 array from the command line with the SATA HDDs and then just mount the array in an empty shared folder or ibay.
Is it worth the complexity and stuffing around to try and combine the SSDs and HDDs in the one array?
-
Leave sme10 on the NVMe SSDs with the operating system as is. Build a raid5/6 array from the command line with the SATA HDDs and then just mount the array in an empty shared folder or ibay.
It's what I would do and what I use: mount the data drive(s) via the fstab file, e.g.
# UUID=566cf197-313c-4109-95f7-155d4913a747 /home/e-smith/files/shares ext3
This mounts the drive with the UUID shown at /home/e-smith/files/shares, so everything written under that path lands on the data drive. Prefer UUIDs to names or /dev/sd? device nodes; less chance of a stuff-up.
There is a wiki writeup about using fstab and, if memory serves, on creating a raid for data. It needs some reading and research; I have not done this in the past, but it sure is possible.
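For reference, a quick way to pull the UUID for that fstab line (blkid and lsblk are standard tools; /dev/md11 is just an example device name):
blkid /dev/md11    # prints the filesystem UUID of the array or partition
lsblk -f           # or list UUIDs and mount points for every block device at once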
Is it worth the complexity and stuffing around to try and combine the SSDs and HDDs in the one array?
You will have an issue with sizes: the array is limited by its smallest member, here the 500GB SSD. This hasn't changed for sme10, same as in sme9/8/7 :-) all members must be the same size.
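To put rough numbers on that (simple arithmetic, ignoring filesystem overhead): with n drives of size s, raid5 gives (n-1) x s usable and raid6 gives (n-2) x s. So 4x 2TB is roughly 6TB usable in raid5 or 4TB in raid6, whereas pulling the 500GB SSDs into the same array would cap every member at 500GB.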
I am sure one of the more experienced will drop a note or two on the above..
And of course, if you have the server hardware, you could do it all in that.
Added: running up a VM to investigate, 2 HDs for the OS and 4 for raid5 data, mounted by fstab.
-
Google suggests that it may be possible to convert from raid1 to raid5 (I didn't look for raid6).
https://serverfault.com/questions/830708/raid-1-to-raid-5-using-mdadm/870710
https://dev.to/madmannew/how-to-convert-lv-or-md-raid1-and-0-into-raid5-without-losing-data-3mfl
It was a superficial look and I haven't looked at issues with drive sizes and mixed ssd/hdd.
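For the record, the rough shape of the mdadm conversion those links describe (a sketch only, untested here; /dev/md0 and /dev/sdc1 are placeholder names, and you would back up first):
mdadm --grow /dev/md0 --level=5                                           # reshape the 2-disk raid1 into a 2-disk raid5
mdadm --add /dev/md0 /dev/sdc1                                            # add the extra disk as a spare
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.bak   # reshape across 3 disks
followed by growing the filesystem (resize2fs for ext3/ext4) once the reshape finishes.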
I've converted a raid1 sme9.2 to a non-raid single drive, mainly to avoid the time of a backup/restore, shrink the drive and simplify moving it to a VM, but I wouldn't make a habit of it. Conversions always seem to leave something not quite right and/or defer an issue until you need to redo it to fix it properly. YMMV.
To me, building computers should be similar to how I write software: think about when it needs to be fixed in 5 years' time and your once 'clever' solution is so convoluted that you can't remember what you did or why you did it that way. Keep it simple.
-
Only had a brief read & will look more later but really, don't mix drives, particularly drive type but also size. Asking for trouble IMHO.
Also read my other post here about Raid or Ransid.
Also read about Raid 5 and large drives and MTBF.....
Sage gives sage advice!! Raid 1 + Raid5/6.
-
To me, building computers should be similar to how I write software: think about when it needs to be fixed in 5 years' time and your once 'clever' solution is so convoluted that you can't remember what you did or why you did it that way. Keep it simple.
So, the KISS principle:
Keep the Raid1 of two 500GB SSDs.
Create a Raid5/6 of the 4x 2TB HDs and mount the result via fstab at /home/e-smith/files/shares for data.
iBays remain on the Raid1 SSDs.
That makes it simple and easy to upgrade/migrate down the track; it just needs a little research and knowledge to do the raid5/6 setup. There are some excellent resources and HowTos online.
-
Same advice here: keep it stupid and simple.
Keep your current raid1 for the system.
Build a raid5/6 with your new disks aside from that and then mount it wherever you want (you can even mount it on /home/e-smith/files if you want, and move the old folders there beforehand).
This will ease your work on the next system failure or upgrade: disconnect your data drives, migrate to the next major release and plug your data back in.
If it had been done this way years ago, it would have been as easy as that to unplug the data drives from the old Zentyal and connect them back to get at the data again on the SME 10.
-
Just as an exercise in 'how to':
Created a VM with two drives and installed sme10; ended with a Raid1 by default, system up and running, fully yum updated. Installed smeserver-shared-folders.
Added 4 drives and configured them as per the wiki and online resources.
Ended with a VM with a 2xHD Raid1 holding the sme10 OS and a 4xHD Raid6 mounted in /home/e-smith/files/shares.
fstab edited as the wiki and online articles show; the issue with emergency mode was solved as described in the online article.
Not including the research for the issue on boot, all up maybe max 2 hrs from install of sme10 to all done. The issue on boot is now resolved, so easy peasy.
Start here and take time to understand what you are doing
https://wiki.koozali.org/Raid
https://wiki.koozali.org/AddExtraHardDisk
https://www.tecmint.com/create-raid-5-in-linux/ (sme10 uses Raid 6 not 5)
http://www.chriscouture.com/centos-7-emergency-mode-on-bootreboot-with-raid-mounted/
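To sketch the array-creation side of what follows (the drive names sdb..sde and the ext4 choice are assumptions; see the links above for the partitioning details):
# partition each data disk first (one Linux raid partition per drive), then:
mdadm --create /dev/md11 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.ext4 /dev/md11                        # format the new array
mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it assembles at boot
blkid /dev/md11                            # grab the UUID for the fstab entry below
mkdir -p /home/e-smith/files/shares        # the mount point (an empty folder before first mount)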
My fstab
# /etc/fstab
# Created by anaconda on Mon Aug 30 16:33:28 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/main-root / xfs uquota,gquota 0 0
UUID=35b757aa-2d73-4ff5-beaf-679e7facefc1 /boot xfs defaults 0 0
/dev/mapper/main-swap swap swap defaults 0 0
#/dev/md11 /home/e-smith/files/shares ext4 _netdev 0 0
/dev/md11 /home/e-smith/files/shares ext4 noauto,x-systemd.automount 0 0
or using UUID
#UUID=29dbfbc8-5562-4800-9c35-2733af1c74d2 /home/e-smith/files/shares ext4 _netdev 0 0
UUID=29dbfbc8-5562-4800-9c35-2733af1c74d2 /home/e-smith/files/shares ext4 noauto,x-systemd.automount 0 0
[root@smeraid6 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md11 : active raid6 sdb1[0] sdc1[1] sdd1[2] sde1[3]
41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid1 sda2[0] sdf2[1]
15206400 blocks super 1.2 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md0 : active raid1 sda1[0] sdf1[1]
510976 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
Have fun :-)
-
UUID=095a3567:d939b7f7:4df1c433:99c3c7d8 /home/e-smith/files/shares ext4 _netdev 0 0
For those of you who will be TL;DR, the key to this is _netdev.
The links will explain why in more detail. Note this could be an LVM volume, ext4 or any other type of partition.
As Jean said:
The alternative to _netdev is to add the raid information to grub.
Unfortunately, since Centos7 you have to declare every raid and lvm needed at boot.
So the simple solution here is as per Terry's code above.
OK Mr Fage. Fun over. Get back to testing my hacks :lol:
-
Thank you everyone for your prompt responses in relation to this & thank you TerryF!
Hopefully we will be doing this over the coming weekend, as my partner & I have a number of projects on for our clients which take priority presently.
Another small issue creeping up: the on-board NIC seems to be going to sleep, so I will have to check the power management in the BIOS for the motherboard. Gigabyte B450 Aorus Elite with a Ryzen 5 1600.
It's getting frustrating waking in the morning to check our email and finding network timeouts!
Kindest regards,
Trudi
-
For the ethernet port, a quick google search will lead you to a lot of people reporting this issue with this mobo, whatever OS they are using.
Some reddit posts report the need to update the BIOS.
Your fastest fix might be to buy a supported ethernet card and plug it in.
For easier reference, please provide the output of
lspci | grep -i Eth
Generally speaking, it is good to check the Red Hat website for supported hardware before buying something. Another clue this might go wrong with the hardware is that there is no Linux support on the Gigabyte website for this mobo.
-
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 16)
05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
-
With a Realtek 8168 you will at least need to install the elrepo kmod driver for it, and not use the 8169 driver.
There are a few posts in the forum about that.
And again, looking at google, you may also need to update your BIOS.
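A sketch of the elrepo route on sme10/EL7 (the package name kmod-r8168 and the blacklist step are my assumptions; check the forum posts and elrepo.org for the current release rpm):
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum install kmod-r8168                                          # vendor r8168 driver from elrepo
echo "blacklist r8169" > /etc/modprobe.d/r8169-blacklist.conf   # keep the in-kernel 8169 driver off the NIC
then reboot so the new module is picked up.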
-
I have run the BIOS update, from version 52 to 62. Seems quite stable presently, so we will be monitoring over the next day or two.
-
The way I read Terry's code, the _netdev line in /etc/fstab is actually commented out. _netdev is needed for network filesystems, to ensure the network is up before trying to mount.
-
The way I read Terry's code, the _netdev line in /etc/fstab is actually commented out. _netdev is needed for network filesystems, to ensure the network is up before trying to mount.
There is an issue with the Raid6 being processed before the network is up, which causes the OS to drop into emergency mode. See the links I posted; there are two methods to overcome it:
adding _netdev to the fstab entry OR noauto,x-systemd.automount. I have chosen to use the latter; see the references listed.
#/dev/md11 /home/e-smith/files/shares ext4 _netdev 0 0
/dev/md11 /home/e-smith/files/shares ext4 noauto,x-systemd.automount 0 0
or using UUID
#UUID=29dbfbc8-5562-4800-9c35-2733af1c74d2 /home/e-smith/files/shares ext4 _netdev 0 0
UUID=29dbfbc8-5562-4800-9c35-2733af1c74d2 /home/e-smith/files/shares ext4 noauto,x-systemd.automount 0 0
-
Got it.
Raid1 on a VM seemed to be happy without either, but I have used the systemd variant and it works fine.
-
Yep, a default install Raid1, or even a default install with 4 or more disks giving you a Raid6, is all fine; it's when the second raid is added that the issue arises. Good fun :-) totally new to me...
-
Of course we all know what happens now :-) order placed for some new largish hard drives :-) home server already on ssd but just a single data drive .. just need to formulate the $$ excuse for Mrs TerryF :-)
-
With Centos 6 you had to rebuild the initramfs to avoid this issue: https://forums.centos.org/viewtopic.php?t=65257
_netdev or x-systemd.automount will just prevent the disk from being mounted during init,
because your init needs to load the required modules for raid6. You also had to declare your raid in grub: rd_MD_UUID=
For Centos 7 this needs some changes too (about the same, but different):
https://forums.centos.org/viewtopic.php?t=65655
https://forums.centos.org/viewtopic.php?t=54901
rd.md.uuid=$youruuid or rd.auto=1 in the GRUB_CMDLINE_LINUX line of /etc/default/grub,
and then rebuild grub.conf and rebuild the initramfs:
#dracut --regenerate-all -fv --mdadmconf --fstab --add=mdraid --add-drivers="raid1 raid10 raid456"
Then, as long as the ad hoc module is added to the modules to load, the next initramfs regeneration on update should add it to the new kernel's initramfs.
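Put together, a sketch of that grub route on CentOS 7 (BIOS-boot grub.cfg path assumed; the uuid comes from mdadm itself):
mdadm --detail /dev/md11 | grep -i uuid    # array UUID to use with rd.md.uuid=
# append rd.md.uuid=<that-uuid> (or simply rd.auto=1) to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
grub2-mkconfig -o /boot/grub2/grub.cfg     # rebuild grub.cfg
dracut --regenerate-all -fv --mdadmconf --fstab --add=mdraid --add-drivers="raid1 raid10 raid456"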
-
UPDATE:
Recently our area suffered a blackout. Although the system was connected to a UPS, it blew a number of resistors near the CPU socket, and we have the motherboard being replaced under warranty.
We've had to purchase a new board in the meantime to get our server back online, so hopefully we get this raid config underway!
-
UPDATE:
Recently our area suffered a blackout. Although the system was connected to a UPS, it blew a number of resistors near the CPU socket, and we have the motherboard being replaced under warranty.
Damn... :shock: :shock: :shock: