Koozali.org: home of the SME Server

Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6

Offline cpits

  • 10
  • +0/-0
Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« on: August 29, 2021, 05:20:06 AM »
Hello all,
Can anyone tell me if this is possible short of re-installing the OS?
As stated in the subject line, this server was built to replace an ageing Zentyal developer build running as our production server. After running updates, the Zentyal software borked, leaving only minimal access. Hence the new build!

We presently have several hundred gigabytes of data on the old server that has to migrate across, so we need the storage boosted on this SME Server 10!

So any help/advice would be appreciated.

Sincerely,
Trudi

Offline TerryF

  • grumpy old man
  • *
  • 1,821
  • +6/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #1 on: August 29, 2021, 07:53:47 AM »
Not exactly sure what you have set up; the subject doesn't explain it for me..

Do you have a working SME 10 install that currently has a RAID 1 setup using two NVMe SSDs, of unknown size, with OS and data?

Do you want to add the 4x 2TB drives (SSD or spinning metal) in a RAID 5/6 that will hold OS and data, or keep the NVMe pair and add the new drives as data drives only?

Can you flesh out what you have and what you want?
--
qui scribit bis legit

Offline cpits

  • 10
  • +0/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #2 on: August 29, 2021, 08:15:08 AM »
Hi TerryF,
OK... SME 10 is installed on a hardware RAID 1 (dual 500GB NVMe SSDs). We have 4x 2TB SATA HDDs and wish to expand the RAID to RAID 5, making use of all available space; however, the Anaconda installer is a little different from what I remember - when installing the OS, SME 9.x gave you the option to select a RAID 5 environment at install time.

If we can have the 4x drives as an accessible data store instead, that should work as well.

Spending so much time working on our clients' systems, we tend to get a bit lax on our own... (forehead slap)!

Any information will be helpful, thanks. :)


Offline sages

  • *
  • 182
  • +0/-0
    • http://www.sages.com.au
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #3 on: August 29, 2021, 10:02:35 AM »
Leave SME 10 on the NVMe SSDs with the operating system as is. Build a RAID 5/6 array from the command line with the SATA HDDs and then just mount the array in an empty shared folder or ibay.
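A rough sketch of what that could look like, assuming the four SATA drives turn up as /dev/sdb to /dev/sde and /dev/md11 is free (check your own device names with lsblk first):

# Create a RAID 6 array from the four SATA drives (use --level=5 for RAID 5)
mdadm --create /dev/md11 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Put a filesystem on it and mount it in the empty shared folder / ibay of your choice
mkfs.ext4 /dev/md11
mount /dev/md11 /home/e-smith/files/shares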

Is it worth the complexity and stuffing around to try and combine the SSDs and HDDs in the one array?
« Last Edit: August 29, 2021, 10:05:18 AM by sages »
...

Offline TerryF

  • grumpy old man
  • *
  • 1,821
  • +6/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #4 on: August 29, 2021, 10:47:54 AM »
Quote
Leave SME 10 on the NVMe SSDs with the operating system as is. Build a RAID 5/6 array from the command line with the SATA HDDs and then just mount the array in an empty shared folder or ibay.

It's what I would do and what I do use: mount the data drive(s) using the fstab file, e.g.
# UUID=566cf197-313c-4109-95f7-155d4913a747 /home/e-smith/files/shares  ext3 

This will store the contents of /home/e-smith/files/shares on the drive with the UUID shown. I prefer UUIDs to names or /dev/sd? - less chance of a stuff-up.

There is a wiki write-up about using fstab and, if memory serves, on creating a RAID for data. It needs some reading and research; I have not done this in the past, but it certainly is possible.
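Purely as an illustration (the device name is an assumption and the UUID is just the one from the example above; use whatever blkid reports on your system):

# Find the filesystem UUID of the data array
blkid /dev/md11
# Then reference that UUID in /etc/fstab rather than a device name
UUID=566cf197-313c-4109-95f7-155d4913a747  /home/e-smith/files/shares  ext4  defaults  0 0

(Later posts in this thread cover the extra mount options needed so the array doesn't trip up the boot on SME 10.)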

Quote
Is it worth the complexity and stuffing around to try and combine the SSDs and HDDs in the one array?

You will have an issue with sizes, the smallest being the 500GB SSD; that does not change for SME 10, same as in SME 9/8/7 :-) all members of an array must be the same size.

I am sure one of the more experienced folk will drop a note or two on the above..

And of course, if you have the server hardware, you could do it all in that.

Added: running up a VM to investigate - 2 HDs for the OS and 4 for RAID 5 data, mounted via fstab
« Last Edit: August 29, 2021, 11:03:46 AM by TerryF »
--
qui scribit bis legit

Offline sages

  • *
  • 182
  • +0/-0
    • http://www.sages.com.au
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #5 on: August 29, 2021, 11:11:26 AM »
Google suggests that it may be possible to convert from raid 1 to raid 5 (didn't look for raid 6).
https://serverfault.com/questions/830708/raid-1-to-raid-5-using-mdadm/870710
https://dev.to/madmannew/how-to-convert-lv-or-md-raid1-and-0-into-raid5-without-losing-data-3mfl

It was a superficial look, and I haven't considered issues with drive sizes or mixing SSDs and HDDs.
I've converted a RAID 1 SME 9.2 to a non-RAID single drive, mainly to avoid the time to back up/restore the data, shrink the drive and simplify moving it to a VM, but I wouldn't make a habit of it. Conversions always seem to leave something not quite right and/or defer an issue until you need to redo it to fix it properly. YMMV.
To me, building computers should be similar to how I write software: think about when it needs to be fixed in 5 years' time and your once 'clever' solution is so convoluted that you can't remember what you did or why you did it that way. Keep it simple.
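For reference only, the level-change path those links describe looks roughly like this with mdadm (device and array names are hypothetical, and you would want a verified backup before trying anything like it):

# Add a third disk to the existing two-disk RAID 1 as a spare
mdadm /dev/md0 --add /dev/sdc1
# Reshape the array from RAID 1 to RAID 5 across three devices
mdadm --grow /dev/md0 --level=5 --raid-devices=3
# Once the reshape finishes, grow the filesystem (resize2fs assumes ext3/ext4)
resize2fs /dev/md0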
...

Offline ReetP

  • *
  • 3,722
  • +5/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #6 on: August 29, 2021, 12:11:35 PM »
I've only had a brief read and will look more later, but really, don't mix drives - particularly drive type, but also size. Asking for trouble IMHO.

Also read my other post here about Raid or Ransid.

Also read about Raid 5 and large drives and MTBF.....

Sage gives sage advice!! Raid 1 + Raid5/6.


...
1. Read the Manual
2. Read the Wiki
3. Don't ask for support on Unsupported versions of software
4. I have a job, wife, and kids and do this in my spare time. If you want something fixed, please help.

Bugs are easier than you think: http://wiki.contribs.org/Bugzilla_Help

If you love SME and don't want to lose it, join in: http://wiki.contribs.org/Koozali_Foundation

Offline TerryF

  • grumpy old man
  • *
  • 1,821
  • +6/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #7 on: August 29, 2021, 01:16:30 PM »
Quote
To me, building computers should be similar to how I write software: think about when it needs to be fixed in 5 years' time and your once 'clever' solution is so convoluted that you can't remember what you did or why you did it that way. Keep it simple.

So, the KISS principle:
Keep the RAID 1 of two 500GB SSDs.
Create a RAID 5/6 of the 4x 2TB HDs and mount the result via fstab at /home/e-smith/files/shares for data.
iBays remain on the RAID 1 SSDs.

That makes it simple and easy to upgrade/migrate down the track; it just needs a little research and knowledge to do the RAID 5/6 setup... there are some excellent resources and how-tos online.

--
qui scribit bis legit

Offline Jean-Philippe Pialasse

  • *
  • 2,747
  • +11/-0
  • aka Unnilennium
    • http://smeserver.pialasse.com
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #8 on: August 29, 2021, 08:45:22 PM »
same advice here, keep it stupid and simple.

Keep your current RAID 1 for the system.

Build a RAID 5/6 with your new disks alongside that, and then mount it wherever you want (you can even mount it on /home/e-smith/files if you want, moving the old folders there beforehand).

This will ease your work on the next system failure or upgrade: disconnect your data drives, migrate to the next major release, and plug your data back in.
If it had been done this way years ago, it would have been as easy as unplugging the data drives from the old Zentyal box and connecting them to the new one, with the RAID set up, to get the data back on SME 10.
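As a rough outline of that data move (paths and device names are assumptions, and any services writing to the shares should be stopped while copying):

# Mount the new array somewhere temporary and copy the existing data across
mkdir -p /mnt/newdata
mount /dev/md11 /mnt/newdata
rsync -aHAX /home/e-smith/files/ /mnt/newdata/
# Then unmount, add the fstab entry for the final mount point and mount it in place
umount /mnt/newdata
mount /home/e-smith/files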


Offline TerryF

  • grumpy old man
  • *
  • 1,821
  • +6/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #9 on: August 30, 2021, 01:26:53 PM »
Just as an exercise in 'how to'

Created a VM with two drives and installed SME 10; it ended up with a RAID 1 by default, system up and running, fully yum updated... installed smeserver-shared-folders.
Added 4 drives and configured them as per the wiki and online resources.
Ended with a VM with a 2x HD RAID 1 holding the SME 10 OS and a 4x HD RAID 6 mounted at /home/e-smith/files/shares.
fstab was edited as the wiki and online articles show; the issue with emergency mode was solved as described in the online article.

Not including the research for the boot issue, all up maybe 2 hrs max from install of SME 10 to all done... the issue on boot is now resolved, so easy peasy.

Start here and take time to understand what you are doing
https://wiki.koozali.org/Raid
https://wiki.koozali.org/AddExtraHardDisk
https://www.tecmint.com/create-raid-5-in-linux/  (sme10 uses Raid 6 not 5)
http://www.chriscouture.com/centos-7-emergency-mode-on-bootreboot-with-raid-mounted/

My fstab
# /etc/fstab
# Created by anaconda on Mon Aug 30 16:33:28 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/main-root   /                       xfs     uquota,gquota        0 0
UUID=35b757aa-2d73-4ff5-beaf-679e7facefc1 /boot                   xfs     defaults        0 0
/dev/mapper/main-swap   swap                    swap    defaults        0 0
#/dev/md11 /home/e-smith/files/shares  ext4  _netdev  0 0
/dev/md11 /home/e-smith/files/shares  ext4  noauto,x-systemd.automount  0 0

or using UUID
#UUID=29dbfbc8-5562-4800-9c35-2733af1c74d2 /home/e-smith/files/shares  ext4  _netdev  0 0
UUID=29dbfbc8-5562-4800-9c35-2733af1c74d2 /home/e-smith/files/shares  ext4  noauto,x-systemd.automount  0 0


[root@smeraid6 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md11 : active raid6 sdb1[0] sdc1[1] sdd1[2] sde1[3]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[0] sdf2[1]
      15206400 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sda1[0] sdf1[1]
      510976 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
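For anyone repeating this, one way to persist and test that setup (a sketch only - adjust the names to your own output, and check /etc/mdadm.conf for existing entries before appending):

# Record the arrays so they are assembled consistently at boot
mdadm --detail --scan >> /etc/mdadm.conf
# Pick up the edited fstab and test-mount the data array
systemctl daemon-reload
mount /home/e-smith/files/shares
cat /proc/mdstat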


Have fun :-)
« Last Edit: August 30, 2021, 04:49:11 PM by TerryF »
--
qui scribit bis legit

Offline ReetP

  • *
  • 3,722
  • +5/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #10 on: August 30, 2021, 02:34:31 PM »
Quote
UUID=095a3567:d939b7f7:4df1c433:99c3c7d8  /home/e-smith/files/shares  ext4  _netdev  0 0

For those of you who will TL;DR this, the key to it is _netdev

The links will explain why in more detail. Note this could be an LVM volume, ext4, or any other type of partition.

As Jean said:

Quote
The alternative to netdev is to add the raid information to grub.
Unfortunately since Centos7 you have to declare every raid and lvm needed at boot.

So the simple solution here is as per Terry's code above.
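For completeness, the grub route Jean mentions would look roughly like this (a sketch only; the UUID is whatever mdadm reports for your data array, and it is the md array UUID, not the filesystem UUID):

# Get the array UUID to declare at boot
mdadm --detail /dev/md11 | grep UUID
# Add rd.md.uuid=<that-uuid> to GRUB_CMDLINE_LINUX in /etc/default/grub, then rebuild the config
grub2-mkconfig -o /boot/grub2/grub.cfg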

OK Mr Fage. Fun over. Get back to testing my hacks :lol:
...
1. Read the Manual
2. Read the Wiki
3. Don't ask for support on Unsupported versions of software
4. I have a job, wife, and kids and do this in my spare time. If you want something fixed, please help.

Bugs are easier than you think: http://wiki.contribs.org/Bugzilla_Help

If you love SME and don't want to lose it, join in: http://wiki.contribs.org/Koozali_Foundation

Offline cpits

  • 10
  • +0/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #11 on: August 31, 2021, 06:18:41 AM »
Thank you everyone for your prompt responses in relation to this & thank you TerryF!

Hopefully I will be doing this over the coming weekend, as my partner & I have a number of projects on for our clients which take priority presently.

Another small issue creeping up is that the on-board NIC seems to be going to sleep, so I will have to check the power management in the BIOS for the motherboard - a Gigabyte B450 Aorus Elite with a Ryzen 5 1600.

It's getting frustrating waking up in the morning to check our email and hitting network timeouts!

Kindest regards,
Trudi

Quote from: TerryF on August 30, 2021, 01:26:53 PM
Just as an exercise in 'how to' [...]

Offline Jean-Philippe Pialasse

  • *
  • 2,747
  • +11/-0
  • aka Unnilennium
    • http://smeserver.pialasse.com
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #12 on: August 31, 2021, 02:38:09 PM »
For the ethernet port, a quick Google search will lead you to a lot of people reporting this issue with this mobo, whatever OS they are using.
Some Reddit posts report the need to update the BIOS.

Your fastest fix might be to buy a supported ethernet card and plug it in.

For easier reference, please provide the output of

lspci | grep -i Eth


Generally speaking, it is good to check the Red Hat website for supported hardware before buying anything. Another clue that this hardware might be troublesome is that there is no Linux support on the Gigabyte website for this mobo.
« Last Edit: August 31, 2021, 02:42:18 PM by Jean-Philippe Pialasse »

Offline cpits

  • 10
  • +0/-0
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #13 on: September 01, 2021, 03:11:02 AM »
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 16)
05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)



Quote from: Jean-Philippe Pialasse on August 31, 2021, 02:38:09 PM
for the ethernet port a quick google search will lead you to a lot of people reporting this issue with this mobo [...]

Offline Jean-Philippe Pialasse

  • *
  • 2,747
  • +11/-0
  • aka Unnilennium
    • http://smeserver.pialasse.com
Re: Scenario: NVMe raid1 wish to add 4x 2Tb drives for raid5/6
« Reply #14 on: September 01, 2021, 04:18:44 AM »
For the Realtek 8168 you will at least need to install the elrepo kmod driver for it rather than using the 8169 driver.

There are a few posts in the forum about that.

And again, looking at Google, you may also need to update your BIOS.
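For what that usually involves on an EL7-based system like SME 10 (a sketch only - check the forum and wiki posts for the exact, supported steps; the package names are assumptions based on elrepo's usual naming):

# Install the elrepo repository, then the Realtek r8168 kmod package
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum install kmod-r8168
# You may also need to stop the in-kernel r8169 driver from grabbing the NIC
echo "blacklist r8169" > /etc/modprobe.d/r8169-blacklist.conf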