Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: ragg987 on November 24, 2006, 10:23:24 PM
-
Hi, first time posting and new to the SME world. Having searched for a simple server (and tested many solutions, inc M$), I finally took the plunge and settled on SME7 about 2 months ago as the solution to my home server requirements. It has been rock steady and easy to use - almost perfect.
However a recent kernel upgrade left my system unusable. HELP, please.
I created a 4-disk RAID5 setup with LVM on top and pointed my iBays at it. This has worked fine, but the most recent kernel upgrade has left the RAID5 unusable: the system no longer finds my LVM volume, which I believe is because the RAID5 detection is in error.
BACKGROUND
1 x 40GB drive for the system (I installed SME with only this drive active)
The non-standard part is that I then added 4 x 320GB drives, partitioned each into 7 equal partitions, created a 4-partition RAID5 array from each of the 7 sets of matching partitions, then used LVM to stitch 6 of the 7 RAID5 arrays into one logical volume, leaving 1 spare RAID5. The logical volume is formatted as ext3. The layout looks like this (rough setup commands below the table):
md5 | md6 | md7 | md8 | md9 | md10 | md11
---- | ---- | ---- | ---- | ---- | ----- | -----
hde5 | hde6 | hde7 | hde8 | hde9 | hde10 | hde11
hdc5 | hdc6 | hdc7 | hdc8 | hdc9 | hdc10 | hdc11
hdg5 | hdg6 | hdg7 | hdg8 | hdg9 | hdg10 | hdg11
hdk5 | hdk6 | hdk7 | hdk8 | hdk9 | hdk10 | hdk11
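For reference, one of the arrays and the volume on top of it were built roughly as follows. This is from memory, so treat it as a sketch rather than an exact record; the volume group and logical volume names (vg_data, lv_data) and the size are illustrative.
# one of the 7 RAID5 arrays, built from the matching partition on each drive
mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/hde5 /dev/hdc5 /dev/hdg5 /dev/hdk5
# repeat for md6..md11, then turn 6 of the 7 arrays into LVM physical volumes
pvcreate /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9 /dev/md10
vgcreate vg_data /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9 /dev/md10
# one logical volume across the group, formatted as ext3
lvcreate -n lv_data -L 700G vg_data    # size illustrative; -l <extents> works too
mkfs.ext3 /dev/vg_data/lv_data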
This has worked, however the latest kernel does not recognise the configuration correctly - it thinks the RAID5 arrays have degraded.
With the old kernel:
cat /proc/mdstat
md5 : active raid5 hdk5[1] hdg5[2] hde5[0] hdc5[3]
133957632 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
etc... for md6 to md11
With the new kernel:
cat /proc/mdstat
md5 : active raid5 hdk5[1] hdg5[2] hdc5[3]
133957632 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
etc... for md6 to md11, except
md9 : active raid1 hde[3]
488368 blocks [4/1] [____U]
Previous kernel: 2.6.9-34.0.2.ELsmp
Upgraded kernel: 2.6.9-42.0.2.ELsmp
Note that the change is "reversible" - if I boot into the older kernel, it works fine. Only the new kernel shows the problem.
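In case it helps anyone reproduce or debug this, the assembled state and the on-disk superblocks can be compared under each kernel with something like the following (device names as per the table above; these commands only read, they don't change the arrays):
# what the running kernel has assembled
cat /proc/mdstat
mdadm --detail /dev/md5
# what is actually recorded in each member's RAID superblock
mdadm --examine /dev/hde5
mdadm --examine /dev/hdc5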
WHY THIS CONFIGURATION? (if you are wondering about it)
Well, I want an expandable software RAID5 - start with 4 drives, then add a few more as my data needs grow. The LVM2 growth method seems a bit risky, as it has only just appeared in very recent kernels. I preferred this "cut-and-slice-partition" method because it lets me use the more mature pvmove and vgextend commands to expand the storage, roughly as sketched below.
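To make that concrete, my assumption is that growing later would look roughly like this - the new array name, partition names, sizes and mountpoint are illustrative, not something I have actually run yet:
# build a new RAID5 from the matching partitions of the added drives
mdadm --create /dev/md12 --level=5 --raid-devices=4 /dev/hdX5 /dev/hdY5 /dev/hdZ5 /dev/hdW5
# add it to the volume group and grow the logical volume
pvcreate /dev/md12
vgextend vg_data /dev/md12
lvextend -L +100G /dev/vg_data/lv_data
# grow the ext3 filesystem (offline resize shown, to be safe on this kernel)
umount /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/data
# pvmove can also shift all data off an array if one ever needs to be retired
pvmove /dev/md10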
-
Note that the change is "reversible" - if I boot into the older kernel, it works fine. Only the new kernel shows the problem.
I'd suggest that you tell RedHat, via their bugzilla.
-
Thanks CharlieBrady. Done.
Bugzilla Bug 217233: problems with software RAID5 and 2.6.9-42.0.2.ELsmp
There was another bug report with RAID5, though that related to hardware RAID.
-
I have the same problem with a mirror of 2 x 40GB SCSI drives in software RAID on an Adaptec HostRAID built-in controller, on an IBM xSeries eServer.
When I boot a newer kernel it crashes on some LVM error.
I reverted back to the original kernel and it boots fine.
I would also like a solution for this.
-
I would also like a solution for this.
Report to RedHat, via bugzilla.redhat.com.
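It will probably also help if the report includes output from both the working and the broken kernel - something along these lines (md5 and hde5 as examples; repeat for each affected array and member):
cat /proc/mdstat
mdadm --detail /dev/md5
mdadm --examine /dev/hde5
dmesg | grep -i raid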
-
Going to, just wanted the topic starter to know he isn't alone with his problem.
-
Same issue here on CentOS:
http://lists.centos.org/pipermail/centos/2006-September/069415.html