Koozali.org: home of the SME Server
Obsolete Releases => SME Server 9.x => Topic started by: Michail Pappas on March 17, 2016, 12:22:12 PM
-
During installation of 9.1, I chose in the initial screen to install without LVM and without any mirroring (the server is running as a VM). However, I received mails saying that I am running in degraded mode. From those, it seems that /dev/sda1 is indeed allocated to an md array, whereas the actual storage is on a plain ext4 /dev/sda3 partition:
# fdisk -l /dev/sda
Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b51e5
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              32         424     3145728   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             424        3917    28054528   83  Linux
# mount
/dev/sda3 on / type ext4 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
1) Is this a bug?
2) Can I somehow easily convert the partition to a normal Linux boot partition?
-
No, it's a feature: /boot is always on RAID1.
Anyway, you can safely ignore the message.
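If you want to verify the array state yourself, the standard checks are (using /dev/md0, the device shown in your mount output):
# cat /proc/mdstat
# mdadm --detail /dev/md0
The first lists all md arrays with their member devices; the second reports the state and member count for md0.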
-
Thanks. It seems a bit strange; however, I presume there was some reason behind this choice.
-
You can mark the array so that it works with a single drive and will not report a degraded state:
# mdadm --grow /dev/md0 --raid-devices=1 --force
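Afterwards the array should report a clean, single-disk state; you can confirm with:
# mdadm --detail /dev/md0 | grep -E 'State|Raid Devices'
"State" should read clean, and "Raid Devices" should be 1.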
-
Done that, Daniel, thanks!
-
> However, I received mails saying that I am running in degraded mode.
You shouldn't. If there has been no drive failure, you shouldn't receive any warning emails. Please file a bug report with details.
-
That will be a problem, since those mails have been deleted... Can I somehow revert to the previous situation, for testing purposes?
-
Never mind, I will try to reproduce this on a clean install. If I do, I'll create a bug report directly; could you please tell me which logs to upload?
-
On a fresh 9.1 install I did not receive such mails until I started the /etc/init.d/mdmonitor service. Once I did, a "DegradedArray event on /dev/md0:test" mail was generated.
mdmonitor was not running before. It was also not running on my production system, where the issue was initially encountered.
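For reference, on SME 9 (CentOS 6 based) you can check whether the service is running and whether it is set to start at boot with:
# service mdmonitor status
# chkconfig --list mdmonitor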
I'm not sure whether this is a bug or a feature, so how would you like me to proceed?