Koozali.org: home of the SME Server
Obsolete Releases => SME Server 7.x => Topic started by: stefan_gk on April 20, 2007, 09:12:21 AM
-
Yesterday I decided to check the RAID status of my server via the admin console and I got the following:
Disk redundancy status as of Friday April 20, 2007 09:38:59
Current RAID status:
Personalities : [raid1]
md1 : active raid1 hda1[0] hdb1[1]
      102208 blocks [2/2] [UU]
md2 : active raid1 hdb2[1]
      79931328 blocks [2/1] [_U]
unused devices: <none>
Only some of the RAID devices are unclean.
Manual intervention may be required.
This one makes me very concerned!!!
I don't have any experience with mdadm or disk checking on the new versions of SME and the kernel.
Any help on what I should do, and some explanation, would be very much appreciated.
-
md1 : active raid1 hda1[0] hdb1[1]
      102208 blocks [2/2] [UU]
md2 : active raid1 hdb2[1]
      79931328 blocks [2/1] [_U]
unused devices: <none>
Only some of the RAID devices are unclean.
Manual intervention may be required.
Any help on what I should do, and some explanation, would be very much appreciated.
First thing is to get a little more detail on md2:
mdadm --detail /dev/md2
Which should show you that /dev/hda2 has failed (or some such).
As /dev/hda1 seems OK, you could try to fail, remove and then re-add /dev/hda2 (this will then get it resynced):
mdadm /dev/md2 -f /dev/hda2 -r /dev/hda2
mdadm /dev/md2 -a /dev/hda2
mdadm --detail /dev/md2
Hopefully you will now have something like:
Version : 00.90.01
Creation Time : Sun May 28 15:42:05 2006
Raid Level : raid1
Array Size : 156183808 (148.95 GiB 159.93 GB)
Device Size : 156183808 (148.95 GiB 159.93 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Fri Apr 20 18:41:16 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Number Major Minor RaidDevice State
0 3 2 0 resyncing /dev/hda2
1 22 66 1 active sync /dev/hdb2
And it will resync /dev/hda2 within the existing array.
If not we'll have to look at some more drastic measures :wink:
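As an aside, the degraded state the original poster saw can also be spotted from a script. A minimal sketch, assuming the standard /proc/mdstat format shown above (an underscore in the [UU] member-status field means a missing member; the `check_degraded` helper name is made up here, not part of SME):

```shell
# Sketch: flag a degraded md array by looking for an underscore in the
# [UU] member-status field of /proc/mdstat (e.g. [_U] = one member missing).
check_degraded() {
  if grep -qE '\[U*_+U*\]' "$1"; then echo degraded; else echo clean; fi
}

# Try it against a saved sample rather than the live /proc/mdstat:
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md2 : active raid1 hdb2[1]
      79931328 blocks [2/1] [_U]
EOF
check_degraded /tmp/mdstat.sample    # prints "degraded"
```

The same check run against the live file (`check_degraded /proc/mdstat`) could feed a cron job or monitoring script.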
-
I have done what you propose and after that I have
[root@server ~]# mdadm --detail --verbose /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Sat Oct 28 22:10:16 2006
Raid Level : raid1
Array Size : 79931328 (76.23 GiB 81.85 GB)
Device Size : 79931328 (76.23 GiB 81.85 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Fri Apr 20 12:12:17 2007
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 17% complete
Number Major Minor RaidDevice State
0 0 0 -1 removed
1 3 66 1 active sync /dev/hdb2
2 3 2 0 spare /dev/hda2
UUID : 9a5bd1ac:20d83580:40e21508:30f62149
Events : 0.2976151
and the admin console states that it is now resyncing /dev/hda2 from /dev/hdb2 ...
-
Last message after finish of sync is:
Now all RAID devices are in clean state.
Thanks a lot for help!
-
Last message after finish of sync is:
Now all RAID devices are in clean state.
Great!
Now keep an eye on the raid status to see if anything happens again. The drive itself didn't fail (otherwise there would have been a problem with /dev/hda1 as well), but it may be a symptom.
Good Luck
-
I'm new to SME, and am wondering: what is the command that you're using to check the software raid status. (And, yes, I did check the SME manual. I even googled around a bit, before posting.)
-
I'm new to SME, and am wondering: what is the command that you're using to check the software raid status. (And, yes, I did check the SME manual. I even googled around a bit, before posting.)
Log in as admin at the server console and check "Manage disk redundancy"!
Greets
-
I have the same raid problem (this topic reminded me to check my raid array). :lol:
But I have a problem with removing the disk from md2.
mdadm: hot remove failed for /dev/hda2: Device or resource busy
I don't want to manually unplug things and mess with my server, and I tried rebooting, so is there any other way?
cat /proc/mdstat gives:
Personalities : [raid1]
md1 : active raid1 hda1[0] hdd1[1]
104320 blocks [2/2] [UU]
md2 : active raid1 hda2[0]
38973568 blocks [2/1] [U_]
mdadm --detail /dev/md2 gives:
mdadm --detail /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Mon Oct 16 18:40:05 2006
Raid Level : raid1
Array Size : 38973568 (37.17 GiB 39.91 GB)
Device Size : 38973568 (37.17 GiB 39.91 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Mon Apr 23 23:09:58 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Number Major Minor RaidDevice State
0 3 2 0 active sync /dev/hda2
1 0 0 -1 removed
UUID : 6bad674e:eceb395b:ad0d495c:0498790d
Events : 0.8110286
-
Hi,
you don't need to remove anything: the system did it already.
You have to try to join your second disk back into the second raid array:
#mdadm /dev/md2 -a /dev/hdd2
Note: the output you provided shows that this is your main array device, the one where the whole system, the i-bays, etc. reside.
This is an alert you have to take seriously; monitor carefully how this goes over the next days and weeks.
You should have received some mail alerts; look at the mdadm monitor option if it was not enabled for you.
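For reference on the monitor option mentioned here: mdadm's mail alerts are typically driven by a MAILADDR line in /etc/mdadm.conf. A minimal sketch (the address is a placeholder; SME normally manages this file itself):

```
# /etc/mdadm.conf (fragment) - placeholder address, adjust to your own
MAILADDR admin@example.com
```

With that in place, a running monitor daemon (mdadm --monitor --scan) will mail that address when an array degrades or fails.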
G.
-
Well i'm glad that the remove command worked but
#mdadm /dev/md2 -a /dev/hdd2
doesn't work.
I still get the same screen and no rebuild process.
Edit: Forget that. It just seems my server is slow. It is rebuilding now. (It took me 10 minutes to remove the disk, and another 10 to add it. :) Snail speed!!!) Now I just don't know... is my server this slow to follow commands, or my internet line? :lol:
Gaston, thanks. I entered the same commands, but they seem to work better if I paste them from you. :shock:
-
I'm new to SME, and am wondering: what is the command that you're using to check the software raid status. (And, yes, I did check the SME manual. I even googled around a bit, before posting.)
Log in as admin at the server console and check "Manage disk redundancy"!
Greets
Yeah.. that was absolutely no help. You might as well have just said what you meant, "RTFM NOOB!!!"
I'm looking for the mdadm console command that generated the output of the parent post. If someone knows it, I'd appreciate you posting it. Thanks.
-
Hi,
the answer was exactly the correct one.
Let me be more detailed: when you connect to the server console, you get a function menu, and one of the choices is "Manage disk redundancy" (choice 5).
From there you have the exact output displayed.
From there you should also be able to add a disk to the array, replace a defective raid member, etc.
Sometimes I like to use the terminal command instead ;) , and the same display can be retrieved (more or less) with a "cat /proc/mdstat" command.
All the output in this thread, except TrevorB's, came from the command line.
G.
-
Hi there,
Sorry for my newbie questions. I have bought a Dell PowerEdge SC1430 server with 2 raid 1 disks (250Gb). I was planning to install SME Server with a software raid but the machine came with a hardware raid (SAS 5iR U320 SAS Controller). Installation went fine and it works well.
What I am not sure of is whether the raid was configured correctly automatically. In the server-manager menu Disk Redundancy it tells me that it might be using hardware mirroring, which I believe is the case. But whether this is actually working I am not sure. I would greatly appreciate your help!
If I try: cat /proc/mdstat , I get
Personalities : [raid1]
md1 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
md2 : active raid1 sda2[0]
243055296 blocks [2/1] [U_]
unused devices: <none>
The result of fdisk -l is:
Disk /dev/sda: 248.9 GB, 248999051264 bytes
255 heads, 63 sectors/track, 30272 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 30272 243055417+ fd Linux raid autodetect
Disk /dev/md2: 248.8 GB, 248888623104 bytes
2 heads, 4 sectors/track, 60763824 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
And if I try mdadm --detail /dev/md2 I get:
/dev/md2:
Version : 00.90.01
Creation Time : Fri Apr 6 02:48:56 2007
Raid Level : raid1
Array Size : 243055296 (231.80 GiB 248.89 GB)
Device Size : 243055296 (231.80 GiB 248.89 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Mon May 7 23:22:30 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 0 0 -1 removed
UUID : 212b400a:2e501081:36defda4:6d35900e
Events : 0.46590
I am not 100% sure how to interpret this information. Thanks so much for pointing me in the right direction!
cheers,
Mark
-
I have a similar issue, with two SCSI disks:
[root@server ~]# mdadm --detail --verbose /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Mon Nov 21 18:27:53 2005
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Device Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Wed Aug 1 21:41:11 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 38c0db3c:5b2ad44d:86e2fb95:39a1b377
Events : 0.7744
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@server ~]# mdadm --detail --verbose /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Mon Nov 21 18:26:51 2005
Raid Level : raid1
Array Size : 143267584 (136.63 GiB 146.71 GB)
Device Size : 143267584 (136.63 GiB 146.71 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Aug 2 09:41:34 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 7ed9fc4a:085df8f6:4737e5b4:f7188a71
Events : 0.22491650
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 0 0 - removed
Since there seem to be people around here with much better experience of RAID, I thought someone could give me a hint on how to solve this one.
(The server, BTW, is 7.2 with latest updates applied)
-
You need to add the removed partition sdb2 back into md2
#mdadm /dev/md2 -a /dev/sdb2
then doing
#mdadm --detail --verbose /dev/md2
should show 2 active disks and a remirror in operation
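If you'd rather watch the remirror from a script than rerun --detail, the progress line in /proc/mdstat can be pulled apart with awk. A sketch against a canned sample (the "recovery = N%" layout is assumed from the stock md driver output):

```shell
# Sketch: extract the rebuild percentage from /proc/mdstat-style output.
cat > /tmp/mdstat.rebuild <<'EOF'
md2 : active raid1 sdb2[2] sda2[0]
      143267584 blocks [2/1] [U_]
      [===>.................]  recovery = 18.0% (25788032/143267584) finish=32.1min
EOF
# Find the "=" token on the recovery/resync line and print what follows it:
awk '/recovery|resync/ { for (i = 1; i <= NF; i++) if ($i == "=") { print $(i+1); exit } }' /tmp/mdstat.rebuild
# prints "18.0%"
```

Pointed at the live /proc/mdstat instead of the sample, the same one-liner gives a quick progress readout.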
-
markdeblois:
Are you sure your raid is faulty???
You have an 'on the card' raid solution, which will probably present itself to sme as just one drive, since all the raid functions happen on the card. The results you are getting are very possibly normal for such a setup and may not indicate a non-working raid.
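One quick way to confirm what the OS actually sees behind a hardware controller is to count the raw disks in fdisk -l. A sketch against a saved sample of Mark's output (one matching line means the controller is presenting the mirrored pair as a single drive):

```shell
# Sketch: count physical disks (sd*/hd*) in fdisk -l output, ignoring
# the md devices, which are software-raid constructs on top of them.
cat > /tmp/fdisk.sample <<'EOF'
Disk /dev/sda: 248.9 GB, 248999051264 bytes
Disk /dev/md2: 248.8 GB, 248888623104 bytes
Disk /dev/md1: 106 MB, 106823680 bytes
EOF
grep -cE '^Disk /dev/(s|h)d[a-z]:' /tmp/fdisk.sample    # prints "1"
```

Run against the live system (`fdisk -l | grep -cE '^Disk /dev/(s|h)d[a-z]:'`), a count of 1 with two physical disks installed is consistent with hardware mirroring being in effect.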
-
You need to add the removed partition sdb2 back into md2
#mdadm /dev/md2 -a /dev/sdb2
then doing
#mdadm --detail --verbose /dev/md2
should show 2 active disks and a remirror in operation
Excellent!
It's rebuilding right now as I write this reply:
[root@server ~]# mdadm --detail --verbose /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Mon Nov 21 18:26:51 2005
Raid Level : raid1
Array Size : 143267584 (136.63 GiB 146.71 GB)
Device Size : 143267584 (136.63 GiB 146.71 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Aug 2 11:34:18 2007
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 18% complete
UUID : 7ed9fc4a:085df8f6:4737e5b4:f7188a71
Events : 0.22495617
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 0 0 - removed
2 8 18 1 spare rebuilding /dev/sdb2
Many thanks for your quick answer!!!!
-
hello,
could someone with some raid experience give a look at this thread
http://forums.contribs.org/index.php?topic=37980.0
thanks a bundle
-
Hi All,
I have a question: I installed SME on RAID1 on a server with an S3000AH mainboard; I disabled the RAID controller and I use software RAID! Now, my problem is that the hard disk LED is always ON and only sometimes goes OFF.
Before, I installed on a standard SATA setup and the indicator was OK - it was ON only on disk activity. What do I need to do in this case? Is this normal? This is my first time using software RAID
-
Hi, I'm a bit of a newb to sme and am still familiarizing myself with the linux/unix world, so bear with me.....
I have a system with 3 hard drives.
/dev/hda is the OS drive. It's stand alone.
/dev/hde and /dev/hdg are 2 160GB HDs and are setup with a hardware raid.
However the system did not mount them and did not assign them /dev/md3.
/dev/hda was assigned /dev/md1 and md2.
What I want to do is leave the OS on /dev/hda so that /dev/hde and hdg can be used primarily for storage and backup.
How do I mount my raid so that I can store files on it and point the i-bay there?
The only reason I chose to put the OS on the separate hard drive is so that if the OS becomes corrupt due to an update or my stupidity, I don't have to worry about losing the backups on my raid.
Here is information about my setup.....
# mdadm --detail --verbose /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Sat Aug 4 17:33:54 2007
Raid Level : raid1
Array Size : 29920960 (28.53 GiB 30.64 GB)
Device Size : 29920960 (28.53 GiB 30.64 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sun Aug 5 16:17:41 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 48ae16a1:ccc6b479:122db933:43c74533
Events : 0.31121
Number Major Minor RaidDevice State
0 3 2 0 active sync /dev/hda2
1 34 2 1 active sync /dev/hdg2
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hda2[0] hdg2[1]
29920960 blocks [2/2] [UU]
md1 : active raid1 hda1[0] hdg1[1]
104320 blocks [2/2] [UU]
unused devices: <none>
# fdisk -l
Disk /dev/hda: 30.7 GB, 30750031872 bytes
255 heads, 63 sectors/track, 3738 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 fd Linux raid autodetect
/dev/hda2 14 3738 29921062+ fd Linux raid autodetect
Disk /dev/hde: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hde1 1 19457 156288321 83 Linux
Disk /dev/hdg: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdg1 1 19457 156288321 83 Linux
Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 30.6 GB, 30639063040 bytes
2 heads, 4 sectors/track, 7480240 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/dm-0: 28.5 GB, 28521267200 bytes
2 heads, 4 sectors/track, 6963200 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 2080 MB, 2080374784 bytes
2 heads, 4 sectors/track, 507904 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
# lspci
00:00.0 Host bridge: Intel Corporation 82845 845 (Brookdale) Chipset Host Bridge (rev 11)
00:01.0 PCI bridge: Intel Corporation 82845 845 (Brookdale) Chipset AGP Bridge (rev 11)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 05)
00:1f.0 ISA bridge: Intel Corporation 82801BA ISA Bridge (LPC) (rev 05)
00:1f.1 IDE interface: Intel Corporation 82801BA IDE U100 Controller (rev 05)
00:1f.2 USB Controller: Intel Corporation 82801BA/BAM USB Controller #1 (rev 05)
00:1f.3 SMBus: Intel Corporation 82801BA/BAM SMBus Controller (rev 05)
00:1f.4 USB Controller: Intel Corporation 82801BA/BAM USB Controller #1 (rev 05)
02:0c.0 Ethernet controller: Intel Corporation 82557/8/9 [Ethernet Pro 100] (rev 0d)
02:0d.0 Ethernet controller: Intel Corporation 82557/8/9 [Ethernet Pro 100] (rev 0d)
02:0e.0 RAID bus controller: Promise Technology, Inc. PDC20267 (FastTrak100/Ultra100) (rev 02)
-
I have a system with 3 hard drives.
/dev/hda is the OS drive. It's stand alone.
/dev/hde and /dev/hdg are 2 160GB HDs and are setup with a hardware raid.
If they are a hardware raid then they should appear to the OS as just 1 drive. I suggest that you use the inbuilt software raid and leave the pseudo hardware raid of your motherboard turned off (if you search you'll find lots on this).
However the system did not mount them and did not assign them /dev/md3
/dev/md3 is NOT a standard item in smeserver. It allocates /dev/md1 & /dev/md2.
/dev/hda was assigned /dev/md1 and md2
No, /dev/hda & /dev/hdg were assigned as the raid 1 arrays /dev/md1 & 2 (but only using 30Gb of /dev/hdg2). Look at your output below...
# mdadm --detail --verbose /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Sat Aug 4 17:33:54 2007
Raid Level : raid1
Array Size : 29920960 (28.53 GiB 30.64 GB)
Device Size : 29920960 (28.53 GiB 30.64 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sun Aug 5 16:17:41 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 48ae16a1:ccc6b479:122db933:43c74533
Events : 0.31121
Number Major Minor RaidDevice State
0 3 2 0 active sync /dev/hda2
1 34 2 1 active sync /dev/hdg2
What I'm wanting to do is leave the OS on /dev/hda so that /dev/hde and hdg can be used primarily as storage and backup.
That's a bit easier.
1. disable hardware raid
2. disconnect /dev/hde & /dev/hdg
3. Install smeserver
4. attach /dev/hde & /dev/hdg
5. partition /dev/hde1 & /dev/hdg1 as linux raid autodetect (if you need to)
6. create /dev/md3 (mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/hde1 /dev/hdg1) - check this command as I'm working from memory
7. format /dev/md3 as ext3 (mkfs.ext3 /dev/md3)
How do I mount my raid so that I can store files on it and point the i-bay there?
For ALL ibays to be on those drives, copy over the contents of your ibays directories to the new disks & add a line into /etc/fstab:
mkdir /tmp/newdisk
mount /dev/md3 /tmp/newdisk
cp -R /home/e-smith/files/ibays/* /tmp/newdisk/.
umount /dev/md3
mount /dev/md3 /home/e-smith/files/ibays
edit /etc/fstab and add:
/dev/md3 /home/e-smith/files/ibays ext3 defaults
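A note on the fstab line above: the full six-field form is safer, since it makes the dump and fsck-order flags explicit. A sketch, assuming the same device and mount point:

```
# /etc/fstab (fragment) - dump flag 0, fsck pass 2 (non-root filesystem)
/dev/md3  /home/e-smith/files/ibays  ext3  defaults  0  2
```

Running mount -a afterwards is a cheap way to confirm the line parses before the next reboot.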
As per anything that plays at this level, I suggest that you do this on a non-critical (or test) box first to make sure you get the process right. I would suggest that you may want to do some more reading first.
You may also want to read the AddExtraHardDisk howto (http://wiki.contribs.org/AddExtraHardDisk)
Good Luck
Trevor B :-)
-
Well thank you so very very much for the reply. And a very informative reply.
I talked with a linux buddy of mine too; though he's been out of the loop a little while, he suggested the same thing, but to drop the 30GB HD and just stick with the 2 160GB HDs and let sme handle the raids.
As he put it, "Intel is more than likely using a cheap halfway raid, where Windows will typically talk to the BIOS and say, "oh this is a raid, I don't see it" until you install the raid driver; then it uses it as a raid, but it's still a software raid and not a true raid"
Also, I don't know that I can disable the BIOS raid and still be able to use those controllers.
You see the motherboard has 4 IDE controllers. Primary, Secondary, RAID1 and RAID2.
So, I'll play with it and see what I can come up with, and given that this machine has a 2.6GHz CPU with HT I should see next to no performance loss. Not that it matters for a home system that will handle very little.
Oh, and the Secondary IDE controller is being used by the CD ROM so I'm wanting to keep hde and hdg plugged in right where they are at on RAID1 and RAID2.
So again thank you. I'll post again with my results for anyone else that stumbles across this issue and is a newb like me.
-
I talked with a linux buddy of mine too, though he's been out of the loop a little while he suggested the same thing but to drop the 30GB HD and just stick with the 2 160GB HDs and let sme handle the raids.
Best bet!
As he put it, "Intel is more than likely using a cheap halfway raid, where Windows will typically talk to the BIOS and say, "oh this is a raid, I don't see it" until you install the raid driver; then it uses it as a raid, but it's still a software raid and not a true raid"
Correct.
Also, I don't know that I can disable the BIOS raid and still be able to use those controllers.
You should be OK; they should then appear as normal IDE controllers (as they are really presented to the OS as normal IDE with some raid support software in the chipset).
Oh, and the Secondary IDE controller is being used by the CD ROM so I'm wanting to keep hde and hdg plugged in right where they are on RAID1 and RAID2.
Newer chipsets seem to handle the differences between CD drives and HDs OK, so you shouldn't get a performance hit if you can't use your RAID1 & RAID2 connections.
Good Luck
Trevor B
-
OK, so I made the change.
I left RAID enabled in the BIOS and received no errors on reboot.
I installed SME server on the raid drives while they were plugged into IDE channels (I will call) 3 and 4.
After installation here are my results......
mdadm --detail --verbose /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Mon Aug 6 15:51:09 2007
Raid Level : raid1
Array Size : 156183808 (148.95 GiB 159.93 GB)
Device Size : 156183808 (148.95 GiB 159.93 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Mon Aug 6 17:43:27 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : e1781cd7:fa525c81:1a4e771b:97b6b144
Events : 0.732
Number Major Minor RaidDevice State
0 33 2 0 active sync /dev/hde2
1 34 2 1 active sync /dev/hdg2
As you can see there are no problems. I did check the drive status after installation and it said the arrays were unclean, but syncing was in progress and at the time was 85% complete.
Also, I changed the server from a normal server to a server-gateway and put eth1 on a DMZ to my firewall, so that I can only open ports to the firewall portion of the server and leave the client network 100% secure.
I'll try to draw it with text........
----------
| modem  |
----------
     |
----------      ----------
|Firewall |----|  Server |   <- This is DMZ access that is firewalled
----------      ----------      at both the firewall and server.
     |             /            Only the ports necessary are open.
----------        /
|  Switch |------/           <- This is Local network access only
----------
Any ideas and/or suggestions are greatly appreciated as I'm still a newb that must know all that I can soak up. LOL