One of the servers I look after recently had a degraded RAID event. I confirmed the hardware status (using smartctl) and replaced the failed drive. The server is about five years old (an Acer tower); as it rebooted, I went into the hardware RAID BIOS and watched it rebuild the array.
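In case it's relevant, this is roughly how I checked the drive beforehand (a sketch from memory - I'm assuming here that the failed disk was /dev/sdb, so adjust the device name as needed):
# smartctl -H /dev/sdb    # quick overall health assessment (PASSED/FAILED)
# smartctl -a /dev/sdb    # full SMART attributes and error log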
After it rebooted successfully, I realized that something was odd - /proc/mdstat showed the following:
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[0]
195253888 blocks [2/1] [U_]
md1 : active raid1 sdb1[0]
104320 blocks [2/1] [U_]
unused devices: <none>
Prior to the failure, it had shown sda2 and sdb2 for md2, and sda1 and sdb1 for md1, as would be expected.
As I reviewed the RAID wiki on contribs.org, it struck me that if this machine had a true hardware RAID controller, the OS would have seen only one logical drive, not two. Am I correct in thinking this?
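To sanity-check my understanding, I was planning to confirm what the OS actually sees with something like the following (the exact output will obviously vary with the hardware):
# lspci | grep -i raid    # a true hardware RAID controller should show up here
# fdisk -l                # lists the physical disks the OS can actually see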
Has the configuration been incorrect all this time, and if so, what's the best way to move forward?
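If it turns out there is no real hardware controller and these are plain mdadm software RAID1 arrays, my tentative plan would be something along these lines - going by the mdstat output above, md2 is missing sdb2 and md1 is missing sda1, but I'd appreciate confirmation before I run anything:
# mdadm --add /dev/md2 /dev/sdb2    # re-add the missing member to md2
# mdadm --add /dev/md1 /dev/sda1    # re-add the missing member to md1
# cat /proc/mdstat                  # watch the resync progress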
Thanks for any assistance.