Koozali.org: home of the SME Server
Legacy Forums => General Discussion (Legacy) => Topic started by: tobiasb on February 08, 2005, 10:02:35 AM
-
Hello,
I want to know how I can integrate a RAID system (SCSI, SATA...) into a running SME (6.01-01) mail server. The RAID system should mirror the mail directories. So I need to know how to move the directories onto another hard disk and how to tell the system about it - and, of course, how to tell the system that I have installed a RAID system. (I don't think copy-paste would work.) I hope someone will understand what I mean; because of my bad English I can't describe my problem any further.
Note: I am a Linux newbie; my workmate set up the server.
-
Maybe not the answer you're looking for, but I just wanted to throw out the solution I use, since nobody had replied yet.
I'm using a product from Arco, found at http://www.arcoide.com
The one I'm using is a 5.25" bay-mount disk mirroring RAID controller, but they have lots of different models available.
-Jeff
-
RAID mirroring will mirror everything, not just selected folders. IMO, the best way would be to use hardware RAID; in that case the system doesn't need to know the RAID array exists. RAID is best set up before the OS is installed. You can select software RAID during the SME setup.
-
IMO, the best way would be to use hardware RAID; in that case the system doesn't need to know the RAID array exists.
Hey hordeusr,
What hardware RAID have you used? I've been happy with Arco so far, but would like to hear about others used on the SME Server.
Thanks
-
Mostly SCSI raid on some windoze servers. My next SME box will have a 3ware card with some SATA drives in a RAID 5 configuration. Just waiting on 6.5 to get done and then I'll start doing some testing.
-
Best bet is to go with hardware RAID. It is not as resource-intensive as software RAID. You'd need a RAID controller card, and it has to be configured before installation of the OS.
On my server, I built a 5-disk SCSI RAID5 using a Compaq 221 RAID controller card. Then you just install SME Server as "single disk" and the hardware controller takes care of the rest. Very simple to set up, and more efficient than a software (program-controlled) RAID.
-
Hi,
Best bet is to go with a hardware RAID. This is not as resource intensive as a software RAID.
I will never use hardware RAID in a SOHO environment again:
Our office server with < 10 users and quite a lot of traffic needs a hardware upgrade (RAM and CPU). It is running on a SCSI RAID1 connected to a Mylex controller.
None of the usual disk imaging tools is able to clone the system drive to another "normal" hard disk. I tried several of them, including Ghost, g4u, Acronis and Mondo.
So in case of a hardware upgrade, or when you need to change more than one hard disk, you are out of luck and have to do a complete reinstall.
Not my kind of thing. :cry:
And I have never experienced resource problems with software RAID in an office environment. The CPU load of my servers is normally below 5 % for 90 % of the day.
-
Best bet is to go with a hardware RAID.
Ever tried to move a live RAID...?
Ever tried to add a RAID to a running system "live" ... like is wanted here?
Ever tried to "revive" a dead hardware RAID?
...with software RAID it's a matter of writing/copying a raidtab and there you go...
I'd hate to do any of that with anything but a 3ware (and even then), especially when the CPU is above 2 GHz...
(where in my experience the software solution might turn out faster anyway!)
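To give an idea of what "writing/copying a raidtab" amounts to, here is a minimal sketch of an /etc/raidtab entry for a two-disk mirror (the partitions /dev/hda1 and /dev/hdb1 are only examples - adjust to your own layout):
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/hda1
    raid-disk             0
    device                /dev/hdb1
    raid-disk             1
Running mkraid /dev/md0 then builds the array from that description.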
But don't listen to me ... listen to
O'Reilly Books: "Managing RAID on Linux By Derek Vadala"
http://www.oreilly.com/catalog/mraidlinux/index.html
Software RAID has unfortunately fallen victim to a FUD (fear, uncertainty, doubt)
campaign in the system administrator community. I can’t count the number of system
administrators whom I’ve heard completely disparage all forms of software
RAID, irrespective of platform. Many of these same people have admittedly not used
software RAID in several years, if at all....
Not only is Linux’s software RAID open source, the inexpensive
hardware that runs Linux finally makes it easy and affordable to build
reliable software RAID systems. Administrators can now build systems that have sufficient
processing power to deal with day-to-day user tasks and high-performance
system functions, like RAID, at the same time.
...and before I forget: the system that is "serving" the stuff I'm writing right here, right now is a 2.2 GHz, 1 terabyte SME 6.0x box with RAID0, RAID1 and RAID5 arrays. It has been running since 7/2004 and has only been taken down to install & test a new APC UPS.
I could plug in a new faster mainboard and be up and running within the hour ;-)
just my 2 c
Regards
Reinhold
-
Tobias,
Frankly - this wouldn't be the first task I'd try as a Linux newbie.
Let me recommend an unorthodox but fairly safe, newbie-proof so to speak, way here...
(1) Buy 2 new hard disks (HDs) for your system.
(2) Install them as IDE0 slave = /dev/hdb in Linux parlance _and_ IDE1 slave = /dev/hdd (Linux).
...assuming you now have 1 HD as IDE0 master and the CD-ROM as IDE1 master... otherwise modify accordingly!
(3) Unplug your old HD (so only your new HDs are present).
(4) Boot from CD-ROM, make sure you can boot from hdb as well (BIOS), then install a fresh SME 6.0x from CD, which will ask whether it should set up RAID1 - say YES.
(5) Plug in your old HD.
(6) Mount the old HD somewhere.
(7) Boot the new SME and go to single-user mode (shutdown now).
(8) (The next step may turn out risky, so I recommend you) set up all users anew on the new SME.
(9) Copy the whole /home/e-smith directory (where mail, home, etc. reside) from the old HD to the new (RAID1) HDs - see the command sketch after this list.
(10) ...the old HD is free now. You can keep it as a toy, mount it for backup purposes ... or modify my proposal above so you do not need to buy 2 HDs ;) ;) ;)
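A rough sketch of what steps (6) and (9) might look like on the command line, assuming the old disk comes back as /dev/hda and the partition holding /home/e-smith is /dev/hda1 (check with fdisk -l /dev/hda first - your layout may well differ):
mkdir /mnt/oldhd
mount /dev/hda1 /mnt/oldhd
# cp -a preserves ownership, permissions, timestamps and symlinks
cp -a /mnt/oldhd/home/e-smith /home/
umount /mnt/oldhd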
Please read Michiel Blotwijk's FAQ:
http://mirror.contribs.org/smeserver/contribs/mblotwijk/HowToGuides/AddExtraHardDisk.htm
...it will help you modify the above procedure to your liking...
Regards
Reinhold
P.S.: Do try to understand the above procedure - it comes without guarantees and was written up in just a few minutes!
-
:: ...and before I forget: the system that is "serving" the stuff I'm writing right here, right now is a 2.2 GHz, 1 terabyte SME 6.0x box with RAID0, RAID1 and RAID5 arrays. It has been running since 7/2004 and has only been taken down to install & test a new APC UPS.
Reinhold, can you give the details of your terabyte config (# of drives, controller, channels, etc.)? I think breaking the terabyte barrier is an interesting topic.
BTW, I was about to buy the 3ware controller card I had recently decided on, until I read your post. Back to contemplation.
Thanks,
dak
-
dak,
That box is pretty simple to describe - although I'm on the road right now, accessing it via ssh.
Just remember one thing: Linux software RAID is partition-based, _not_ drive (= hardware/disk) based.
Setup:
swap: RAID0 (Red Hat does this anyway)
/boot & /: RAID1
part4, /home/e-smith/files: RAID5
The mainboard is an MSI KT4AV with a 2.2 GHz Athlon (clocked at 1.5 GHz).
Controllers are: VIA 8235 (mainboard) + Promise Ultra100.
(The master/slave discussion is IMO (I designed chips) "history".)
(The PCI32 slot is the limiting speed factor anyway.)
The HDs are 8x HDS722516VLAT80, 160 GB each, on UDMA5.
Array capacity: 1074 GB (153.39 GB * (8-1)).
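(If you want to see which partitions ended up in which md device, just run cat /proc/mdstat - it lists every mdX with its member partitions and the sync status.)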
Performance Data:
-(dmesg)-
raid5: measuring checksumming speed
...
raid5: using function: p5_mmx (4614.000 MB/sec)
- Single HD -
[root@tera root]#hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 64 MB in 1.11 seconds = 57.66 MB/sec
- RAID5 array -
[root@tera root]#hdparm -t /dev/md3
/dev/md3:
Timing buffered disk reads: 64 MB in 0.99 seconds = 64.65 MB/sec
[root@tera root]# hdparm -T /dev/md3
/dev/md3:
Timing buffer-cache reads: 128 MB in 0.50 seconds = 256.00 MB/sec
REAL SPEED via the 100 Mbit (!) network (mostly ignored in discussions :roll: ):
718 MByte / 102 s = 7.04 MByte/s
(writing to the server from a network client ... THAT'S what limits normal setups !!!)
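For comparison: 100 Mbit/s divided by 8 bits per byte is at most 12.5 MByte/s on the wire, before any protocol overhead - so ~7 MByte/s means the network, not the array, is the bottleneck.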
HTH - (that's about all the data I can get from here :) )
Regards
Reinhold
P.S.: It's a bit tricky to set up - with SME there is "too much" automation <eg>. You need to know kickstart and/or PXE booting - and a bit of lilo...
P.P.S.: Don't forget to give lilo a 2nd (3rd) boottrack if you want to recover from boot failures easily...
-
I forgot :-o that setiathome was running on that machine while I measured.
Setiathome was taking around 99.7 % of the CPU (top)...
But since Linux & SETI are very well behaved, or "nice" <g>,
I doubt the timings would get (much) better.
In any case, I think this is also the time/place to confirm azche24's observation that
with a system like that you will rarely have more than 5 % CPU load in general ...
(CPU which the owner could give to SETI/BOINC, as on this machine :-)
- WITHOUT a PERFORMANCE HIT :-D)
(Re-)syncing an array that big, by the way, takes around 4-5 hours ... so do not get nervous :-)
...especially when you are testing a new UPS setup - which can get VERY boring...
Regards
Reinhold
-
When I set up my SME server two years ago, I was heavily encouraged to go with a hardware RAID for performance reasons. Being fairly new to all of this, I was in no position to argue. Besides, a hardware RAID was very easy to set up. Realistically, I'm not sure performance was an issue on my quad P3 machine, but on my dual 1.2G it might have had more of an impact.
Reinhold, you raise a very interesting scenario: you have /home/e-smith/files mounted on your RAID 5. How do you maintain your disk quotas with this setup? On one server, I have a RAID 1 plus a separate SCSI drive mounted on /home/e-smith/files/users to provide more space for mail storage. Problem is, I can't get quotas active on that mounted drive. According to Contribs, it's not a supported feature, and I've never found a good howto for getting the disk quotas working again. Any chance of posting one?
-
Brenno,
Who told you that quotas cannot be enabled on such a mountpoint?
Try a quotacheck "check" like I just did (live) here:
root]# quotacheck -vugc /dev/md0
quotacheck: Quota for users is enabled on mountpoint /home/e-smith/files so quotacheck might damage the file.
- IIRC, quota in SME is enabled only if it is listed as such in mtab. Have you checked there (/etc/mtab, and /etc/fstab)?
- Also, you need to activate quota management for at least one user in SME (Server Manager > Collaboration > Quotas > Modify).
If you don't mind spending (quite) some time, you could use:
# quotacheck -vdugcf /dev/mdX
SYNOPSIS
quotacheck [ -gubcfinvdMmR ] [ -F quota-format ] -a | filesystem
OPTIONS
-v  quotacheck reports its operation as it progresses. Normally it operates silently.
-d  Enable debugging mode. It will result in a lot of information which can be used in debugging the program. The output is very verbose and the scan will be slow.
-u  Only user quotas listed in /etc/mtab or on the filesystems specified are to be checked. This is the default action.
-g  Only group quotas listed in /etc/mtab or on the filesystems specified are to be checked.
-c  Don't read existing quota files. Just perform a new scan and save it to disk. quotacheck also skips scanning of old quota files when they are not found.
-f  Forces checking of filesystems with quotas enabled. This is not recommended as the created quota files may be out of sync.
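If the quotas still refuse to bite on the extra filesystem, here is a rough sketch of the usual manual way (assuming the extra array is /dev/md3 mounted on /home/e-smith/files - substitute your own device and mountpoint):
# 1. add usrquota (and grpquota if wanted) to that line's mount options in /etc/fstab
# 2. remount so the new options take effect
mount -o remount /home/e-smith/files
# 3. build the quota files and switch quotas on
quotacheck -vugc /home/e-smith/files
quotaon /home/e-smith/files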
Regards
Reinhold
-
...before you begin to wonder, 3 more notes:
- I used a different server here, where md0 is a 6x HD RAID5 (software, of course :-D)
- symlinks (ls) and quota don't quite mix (for me...)
- the above _is_ the FAQ you wanted ... just ssh in ... and execute that command with YOUR filesystem (I use the device/partition name, as you see: hdb1, md0, etc.)
... if it doesn't fail, "you've got quota" :pint:
Regards
Reinhold
-
Reinhold, just to clarify: I wasn't told that you can't have quotas on multiple filesystems, just that SME didn't support it. Meaning, the server-manager panel would only check the quotas on the root filesystem, not on filesystems mounted after OS installation. It's listed in the bug tracker at: http://no.longer.valid/mantis/bug_view_page.php?bug_id=0000125
While this does not mean that you can't have quotas working on additional filesystems, it does mean that, for newbies like me, it wasn't easy to set up. I scoured the forums for help on this issue before posting to the bug tracker and found none.
For a refresher of my previous posts on this topic, hit:
http://forums.contribs.org/index.php?topic=24391.msg99399#msg99399
Oddly enough, it was you and I who filled that thread :) I ended up enforcing quotas the old-fashioned way - emailing users to warn them of excessive file accumulation.
-
Hi Reinhold
You said
P.P.S.: Don't forget to give lilo a 2nd (3rd) boottrack if you want to recover from bootfailures easily...
It would be great if you could write this up into a little howto and post it back to this thread.
TIA
-
@brenno
I've replied to that old thread of yours, where it should have been (sorry, but RL often gets in the way).
(We are well into vandalizing tobiasb's thread :-o )
@smeghead
If you can live with a short answer, here it is:
lilo -v
You need to have one of your primary partitions marked "active". This can be done most easily using fdisk.
Running /sbin/lilo from Linux should put LILO back into the currently active partition. -HTH- (RL is calling)
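A minimal sketch of what I mean by the spare boot record, assuming a RAID1 pair on /dev/hda and /dev/hdb (the -b switch tells lilo which device to write the boot sector to - check your own device names first):
/sbin/lilo -v               # writes the boot loader to the device named in /etc/lilo.conf
/sbin/lilo -v -b /dev/hdb   # writes a second copy to the other mirror disk, so you can still boot if hda dies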
Regards
Reinhold