Koozali.org: home of the SME Server
Contribs.org Forums => Koozali SME Server 10.x => Topic started by: robf355 on November 12, 2021, 07:08:31 PM
-
Hi
I'm currently using SME 9.2 and am about to upgrade. Currently the operating system and data are on a two-disk RAID array; I'd like to have the operating system on one 500GB disk and use the two existing RAID drives for the data.
Will the installer allow me to pick the RAID array for the data and put the operating system on the other disk, so that when everything is installed I just do a data restore? Or do I have to install SME onto the 500GB disk, then add in the two RAID disks and mount them on /home/e-smith/files?
I saw one post which mentioned adding the RAID disk into fstab after the install is finished; I'm just not sure about the mount point.
Any help appreciated
Regards
Rob
-
I have a single 500GB SSD with the SME 10 OS only; I had contemplated RAID 1 but decided against it with the SSD.
Separate 2TB HDD with the data, mounted at /home/e-smith/files/shares and set in fstab:
UUID=a32157ac-70db-4c68-aee7-37e922a9589a / xfs uquota,gquota 0 0
UUID=aa52432c-a8ec-40ed-97bc-0a48ac5fdc56 /boot xfs defaults 0 0
UUID=7829dc97-96e6-4460-8369-910255609175 swap swap defaults 0 0
UUID=566cf197-313c-4109-95f7-155d4913a747 /home/e-smith/files/shares ext3 usrquota,grpquota,noatime,acl 0 0
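If it helps, here is a small sketch for building that kind of fstab line for a data drive. The UUID and mount point are just placeholders; get the real UUID with `blkid /dev/sdX1` on your own disk:

```shell
#!/bin/sh
# Sketch only: builds an fstab line for a data drive.
# The UUID is whatever `blkid /dev/sdX1` reports for your partition.
fstab_line() {
    uuid=$1; mountpoint=$2; fstype=$3
    echo "UUID=$uuid $mountpoint $fstype usrquota,grpquota,noatime,acl 0 0"
}

# Example with a placeholder UUID:
fstab_line 566cf197-313c-4109-95f7-155d4913a747 /home/e-smith/files/shares ext3
# Append the output to /etc/fstab, then run `mount -a` to check it mounts.
```

Running `mount -a` before rebooting is a cheap way to catch an fstab typo while the system is still up.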
I should have formatted the data drive with xfs at the time, but I did this a couple of years ago with SME 9.
I have spent time creating a RAID of multiple HDDs as a data store and doing some testing. It's not hard, relatively straightforward, and the wiki has ways and means, but I decided to stay with a single data drive at this time. A bit more effort is needed with the fstab setup as well.
The original setup on SME 9 was a RAID 1 of 2x1TB HDDs, so the data drive was created by simply copying everything in the /home/e-smith/files/shares dir to a separate HDD that was not part of the RAID, then mounting it manually at first to confirm all was good, and then via fstab. This system also had the shared folders contrib, hence the /home/e-smith/files/shares/ directory; you are free to create/use almost anything you like, wherever you like.
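That copy step looks roughly like the following dry-run sketch (it only echoes the commands; /dev/sdd1 and /mnt/datadisk are placeholders, not my actual devices, so check yours with lsblk first):

```shell
#!/bin/sh
# Dry-run sketch of copying the shares tree to a separate (non-RAID) drive.
# /dev/sdd1 and /mnt/datadisk are placeholders; check yours with lsblk.
plan() {
    echo mkdir -p /mnt/datadisk
    echo mount /dev/sdd1 /mnt/datadisk
    # rsync -aH preserves permissions/ownership/times and hard links
    echo rsync -aH /home/e-smith/files/shares/ /mnt/datadisk/shares/
    echo umount /mnt/datadisk
}
plan   # prints the commands only; drop the echoes to run them for real
```

Printing the plan first and executing it line by line is a good habit when the commands touch real data.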
I left all user space and ibays on the OS drive; backups of that were a must.
The leap of faith was then to remove the data drive and its details from fstab, reboot the system with the original RAID disks, and delete the contents of /home/e-smith/files/shares/.
Put the data drive back, re-add the details to fstab and reboot, and hope like eff that all was good (I did have a good backup). It was, all good that is :-)
Now I had a system where I could install a clean OS (SME 9 or SME 10) to a RAID or a single disk, make sure it was good and updated, edit fstab with the details as above, and reboot. That makes migrating a simple and easy process: I remove the data drive from fstab and only back up the OS drive (the data drive is safe), upgrade the system to new hardware and OS, put the data drive back in, edit fstab, and so far so good.
This is my way; others may do it differently. If you try it, I suggest you do a test setup first with minimal, non-essential data.
Have fun
-
Will the installer allow me to pick the raid array for data and put the operating system on the other disk
Simple answer: no.
-
Some reading to be going on with :-) enjoy
https://wiki.archlinux.org/title/fstab
https://www.linuxhelp.com/how-to-configure-raid5-in-centos-7
https://www.tecmint.com/create-raid-5-in-linux/
https://www.chriscouture.com/centos-7-emergency-mode-on-bootreboot-with-raid-mounted/
http://codingberg.com/linux/systemd_when_to_use_netdev_mount_option
-
I did something similar. I started with an SME 9.2 dual 3TB RAID 1 setup. I didn't have any spare drives big enough to hold all of the data, so I did a staged upgrade to end up with SME 10 on a single SSD and /home/e-smith/files on a RAID 1 rust array. I.e. I started with SME 9.2 on two 3TB drives in RAID 1, and ended with SME 10 on a single 500GB SSD for the system plus the two 3TB drives in RAID 1 for data. I did have my standard daily data backups as a last resort.
I followed the steps below (no detailed command-line info, sorry; I googled how to do each step that wasn't standard and basically followed the process described in the wiki for the migrate helper, just performing the extra steps as I went).
Convert from a two-drive RAID 1 to a three-drive setup.
With an existing SME 9 two-drive RAID 1 install:
Use the migrate helper to back up the existing system.
Break the existing RAID 1 by failing one drive. This drive will be the start of our new single-partition RAID 1 data array. Note which drive this is.
Install the new SSD as the OS drive, and install SME 10 as a single-drive install.
Make sure the existing drives are not mounted.
Restore from the migrate helper.
Create a new degraded RAID 1 array with the 'failed' drive.
Mount the new RAID 1 data array at /home/e-smith/files and modify mdadm.conf and fstab so it automounts.
Check after a reboot that all is working OK.
df -hT should show whether it is mounted correctly.
Temporarily mount the old RAID 1 system drive.
Copy all data from the old /home/e-smith/files to the new data array.
Unmount the old RAID drive.
Repartition the old RAID drive to match the new data array and add it to the new array.
Finished.
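The RAID juggling in those steps boils down to something like this dry-run sketch (the device names /dev/sdb1, /dev/sdc1 and /dev/md0 are placeholders, not from my actual build; verify yours with lsblk and /proc/mdstat before running anything):

```shell
#!/bin/sh
# Dry-run sketch of the array steps above. Device names are placeholders;
# verify yours with lsblk and `cat /proc/mdstat` before running anything.
plan() {
    # on the old SME 9 box: fail one drive out of the existing raid1
    echo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    # on the fresh SME 10 install: build a degraded raid1 from that drive
    # ('missing' reserves the slot for the second disk to be added later)
    echo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    # record the array so it assembles at boot
    echo 'mdadm --detail --scan >> /etc/mdadm.conf'
    # after copying the data across: repartition and re-add the other drive
    echo mdadm /dev/md0 --add /dev/sdc1
}
plan   # printed only; remove the echoes to execute step by step
```

Watching /proc/mdstat after the final --add shows the resync progress back to a healthy two-drive mirror.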
I have copies of the as-built /etc/fstab and /etc/mdadm.conf. Google 'CentOS 7' plus the required step for details.
-
Well, the installer is supposed to allow you that.
You can assign a folder to mount to every disk or RAID device.
-
Well, the installer is supposed to allow you that.
You can assign a folder to mount to every disk or RAID device.
In my case I was reusing drives and wanted to retain the data for a disk-to-disk copy, as backup/restore was very time-consuming (in excess of a day), so I decided on a hybrid/staged update allowing for disk-to-disk transfer without using any other drives for the data. Not for everyone, but obviously possible if you are short on spare large drives :-?
-
Well, the installer is supposed to allow you that.
You can assign a folder to mount to every disk or RAID device.
Damn, learn new things all the time :-)
-
Well, the installer is supposed to allow you that.
You can assign a folder to mount to every disk or RAID device.
Beat me to it :-)
-
In my case I was reusing drives and wanted to retain the data for a disk-to-disk copy, as backup/restore was very time-consuming (in excess of a day), so I decided on a hybrid/staged update allowing for disk-to-disk transfer without using any other drives for the data. Not for everyone, but obviously possible if you are short on spare large drives :-?
In that case, do you know if a dedicated wiki page happens to exist about how to mount the ibays folder, or similarly any parent folder, on a different set of disks?
https://wiki.koozali.org/index.php?title=AddExtraHardDisk
follow it and give us feedback
-
In that case, do you know if a dedicated wiki page happens to exist about how to mount the ibays folder, or similarly any parent folder, on a different set of disks?
Short answer: yes :-) That's what I followed in my fumblings.
-
In that case, do you know if a dedicated wiki page happens to exist about how to mount the ibays folder, or similarly any parent folder, on a different set of disks?
https://wiki.koozali.org/index.php?title=AddExtraHardDisk
follow it and give us feedback
I did. Using the installer to format and mount disks isn't going to help if you want to retain the old data on the disks and just change the folder location. My situation was a hybrid that required a staged migration/format/mount. AFAIK the installer does not cater for that situation.
[rest of reply deleted]
-
Hi
Thanks for the replies and suggestions. I made a start:
1. Started the installer with the following disks fitted:
/dev/sda - 500GB, part of the original raid set with /dev/sdb
/dev/sdb
/dev/sdc - 500GB - the new disk which I want to install the operating system on.
The installer tells me that I have to select my own partitions, but it only shows sda and sdc as available disks; nothing else is shown. Not sure why.
2. Removed sda/sdb and installed SME on /dev/sdc; all went OK, it boots up with no problems.
Restarted with sda/sdb inserted. SME detects that there is a RAID array (md126 = '/', md127 = /boot). I can mount md127, but if I try to mount md126 I get an error: "unknown filesystem type 'LVM2_member'".
lvscan reports the SME 10 boot and '/' partitions as active, and md127/md128 as inactive.
I ran vgchange -ay, but the disks are still marked as inactive.
That's about as far as I have got. Having backed everything up twice, I'm happy to erase the RAID array, but I can't understand why the installer doesn't recognise the second hard disk.
-
If it detects pretty well anything on a disk it will ignore it.
Make sure you wipe it completely first.
-
If it detects pretty well anything on a disk it will ignore it.
Make sure you wipe it completely first.
Hi
Solved the disk issue: the RAID signatures on the disk were causing the installer to reject it. I found this site, which offers a solution:
https://leo.leung.xyz/wiki/Clear_RAID_Signatures_on_Linux
There is a mention of Anaconda installers rejecting disks with RAID signatures.
I then set the raid array up using:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
All working now; I just have to check the mount point.
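For the next person who hits this, the signature clearing described on that page comes down to something like the following dry-run sketch (/dev/sdb and md126 are placeholders, and these commands destroy metadata if actually run, so triple-check lsblk first):

```shell
#!/bin/sh
# Dry-run sketch of clearing old RAID signatures so anaconda accepts a disk.
# DESTRUCTIVE if executed: it erases metadata. Device names are placeholders.
plan() {
    echo mdadm --stop /dev/md126                # stop any auto-assembled array
    echo mdadm --zero-superblock /dev/sdb1      # clear the md raid signature
    echo wipefs -a /dev/sdb                     # clear remaining fs/partition signatures
}
plan   # echoed only; remove the echoes (carefully) to run for real
```

After the wipe, the installer should offer the disk as a blank device again.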
thanks for the help
Regards
Rob
-
There is a mention of Anaconda installers rejecting disks with RAID signatures.
Yup, it is trying to prevent a user's fast-fingered, 10-second brain fade (we all have them).
-
Well, your situation is in fact pretty different from the title of the original post: you are not trying to have data on a different disk, you are trying to mount an old SME 9 disk on the new SME 10...
As you saw, it was LVM over RAID, so you need to activate the RAID, then activate the LVM, and finally mount the LVM member.
If you try to activate LVM from an SME 9 disk on an SME 10 system that also uses LVM, it will fail, as the two volume groups will be named the same.
You need to rename the volume group before activating it, and to be sure you do not mess with the current SME 10, I suggest doing this on a different Linux computer with only the old LVM disks attached.
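That rename dance looks roughly like this dry-run sketch (the VG name 'main', the LV name 'root' and the UUID are all placeholders here; list the real ones with vgs before touching anything):

```shell
#!/bin/sh
# Dry-run sketch of renaming the old volume group so both can coexist.
# VG/LV names and the UUID are placeholders; list yours with vgs/lvs first.
plan() {
    echo vgs -o vg_name,vg_uuid                  # two VGs share a name; note the UUIDs
    # rename the OLD one by UUID, since the name alone is ambiguous
    echo vgrename AbCdEf-placeholder-uuid main_old
    echo vgchange -ay main_old                   # now it activates cleanly
    echo mount -o ro /dev/main_old/root /mnt/old # read-only mount to copy data off
}
plan   # printed only; drop the echoes to run for real
```

Mounting the old root read-only is a cheap safety net while you copy the data across.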