twijtzes,
Your situation is difficult to diagnose accurately from a distance, especially given how little information you have provided.
If your server will not boot from the SME CD, then I assume a hardware issue is stopping it from booting. Maybe the motherboard is faulty, maybe the drive controller card is faulty, maybe something else? If you cannot determine that yourself, perhaps take the system to a technician to check for a hardware fault.
Again I'm guessing, but one possibility is that when you put the replacement drive in and started the array rebuilding, the system resynced in the wrong direction, ie it wiped your data by syncing to a blank drive. It's hard to tell what has happened at this stage.
Are the drives using hardware RAID1, ie not software RAID1? If so, then you need to keep the drives connected to that specific (functional) drive controller for the system to work correctly.
If you are using software RAID1, then you should be able to remove the drives and connect them to a machine with a similar CPU type; if the drives are OK and still contain data, the server should boot up normally.
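To tell whether a drive actually carries Linux software RAID (md) metadata, you can examine it from any Linux system. A hedged sketch (the partition name /dev/sda1 is an assumption; substitute the partition you are checking, and mdadm must be installed):

```shell
# Show the md superblock details for a partition, if it has any.
mdadm --examine /dev/sda1

# Show which software arrays the running kernel has assembled.
cat /proc/mdstat
```

If mdadm reports "No md superblock detected", the drive is more likely part of a hardware array and may only be readable through its controller.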
As I have said already, anything you do with the original drives carries a high risk that something will go wrong and destroy whatever data still remains on them. Typically you would make a bare-metal clone of each disk using the dd command or compatible cloning software. That way you can do your testing on the copies, to see what data still exists, without tampering with the original drives.
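A minimal sketch of such a clone, assuming the original drive shows up as /dev/sdb and a blank drive of at least the same size is /dev/sdc (both device names are assumptions; verify them with fdisk -l first, because dd will silently overwrite whatever you name as the target):

```shell
# Clone the whole original disk onto the spare.
# conv=noerror,sync keeps going past read errors, padding bad blocks,
# so one unreadable sector does not abort the whole copy.
dd if=/dev/sdb of=/dev/sdc bs=64k conv=noerror,sync
```

You then do all further experiments on the copy and keep the original on the shelf.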
If you want to play "slightly" dangerously, and taking great care, you can put one of the original drives into the good server so it appears as, say, /dev/sdc, and mount it.
Then you can use the working system to interrogate the drive. You can then attach another known good blank drive and copy the valuable data to it.
I think the viability of doing these steps will depend on whether the drive(s) are only readable when connected to their proprietary hardware disk controller.
See the various Howtos about disks; they will assist you with testing, using, mounting, rebuilding etc. Some apply directly, others have ideas and techniques you can adapt, eg
http://wiki.contribs.org/AddExtraHardDisk
http://wiki.contribs.org/AddExtraHardDisk_-_SCSI
http://wiki.contribs.org/Disk_Manager
http://wiki.contribs.org/Booting
http://wiki.contribs.org/Monitor_Disk_Health
http://wiki.contribs.org/Raid
http://wiki.contribs.org/Raid:LSI_Monitoring
http://wiki.contribs.org/Raid:Manual_Rebuild
http://wiki.contribs.org/Recovering_SME_Server_with_lvm_drives
http://wiki.contribs.org/USBDisks
If one of your drives is readable in another (cleanly installed OS) machine, then you could try using this Howto to recover everything:
http://wiki.contribs.org/UpgradeDisk
Honestly, if you are unsure about how to do all of the above, then take the equipment to a Linux expert and have him/her see whether your data is recoverable.
I could, as you suggest, take out the SCSI drives from the old server and connect them externally to the new SME box. However, I have the feeling that this would permanently kill the old server, or am I wrong? What would be the best way to take the disks out and mount them externally?
I do not think doing that will "kill" the drives. It may break the RAID array, but the data should still be intact on each disk. The real issue may be that a drive is unreadable unless it is connected to its specific controller card (if it is hardware RAID). I rarely deal with hardware RAID, so I cannot comment further on that.
Try connecting an original drive from the faulty server so it appears as sdc (or the next spare port) on the other, good server. This assumes sda & sdb are already being used by the software RAID1 array, so adjust accordingly.
Verify which drives are connected:
fdisk -l |more
Verify the details of the old drive:
fdisk -l /dev/sdc
Make a mount point, eg
mkdir -p /mnt/olddrive
Mount the drive:
mount /dev/sdc1 /mnt/olddrive
For your drive the data partition might be sdc2 or sdc3, depending on how the drive is set up.
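If you are unsure which partition holds the data, you can list the filesystem type of each one. Note also that default SME Server installs put the filesystems inside LVM on top of the RAID partition, in which case you mount the logical volume rather than sdcN directly (see the Recovering_SME_Server_with_lvm_drives Howto above). A sketch, with the usual SME volume names assumed:

```shell
# Show the filesystem type and label of each partition on the old drive.
blkid /dev/sdc1 /dev/sdc2

# If a partition shows up as LVM2_member, activate its volume group first,
# then mount the logical volume instead of the raw partition.
pvscan
vgchange -ay
mount /dev/main/root /mnt/olddrive   # "main"/"root" is the usual SME VG/LV naming; verify with lvdisplay
```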
Use Linux commands or mc (midnight commander) to read the old drive and see what the contents are, eg
ls -al /mnt/olddrive
If all is OK, you should see the directory listing of a typical SME Server install.
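To get data across, a hedged example using cp (the i-bay name "myibay" is an assumption; substitute the directories you actually find on the old drive, and the paths shown are the standard SME layout):

```shell
# Copy one i-bay from the mounted old drive into the new server's tree,
# preserving ownership, permissions and timestamps.
cp -a /mnt/olddrive/home/e-smith/files/ibays/myibay /home/e-smith/files/ibays/
```

Repeat for user home directories and anything else of value before unmounting.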
Copy your files and data etc; see this Howto for what is included in a standard backup:
http://wiki.contribs.org/Backup_server_config#Standard_backup_.26_restore_inclusions
Then unmount the drive:
umount /mnt/olddrive
Then, if all went well, and depending on what you copied and where, run on the new server:
signal-event post-upgrade
reboot
All the above is generic advice; your specific situation may require it to be adjusted to suit.