Koozali.org: home of the SME Server

Add a hard drive

co997

Add a hard drive
« on: January 05, 2007, 07:10:20 PM »
I am deeply entrenched in the Windows file structure and am trying to understand how to make additional drives available for use with SME Server 7.1.  I have installed the software (very easy) and added an additional hard drive using the tutorial found at http://mirror.contribs.org/smeserver/contribs//mblotwijk/HowToGuides/AddExtraHardDisk.htm.  I got the drive formatted and mounted. Everything is fine up to the point of making the new drive actually usable on my server. I think I understand the concept of ibays (similar to Windows group shares), but I am lost on the concept of symlinks. Do I have to go to the command line each time I want to create a new group sub-directory (oops... I meant to say ibay)?  Or can the new, formatted but empty drive be accessed in the Server Manager? I am very new at this, so be kind.
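For reference, here is roughly what I did from that howto, plus what I think the symlink step would look like (device name, mount point, and ibay name are just examples from my setup, so treat them as such):

Code: [Select]
# create a partition and an ext3 filesystem on the new drive (example device /dev/sdb)
fdisk /dev/sdb          # made one partition of type 83 (Linux)
mke2fs -j /dev/sdb1

# mount it somewhere permanent
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data

# the part I am unsure about: pointing an ibay's files directory at the new drive,
# e.g. for a hypothetical ibay called "shared"
mv /home/e-smith/files/ibays/shared/files /mnt/data/shared-files
ln -s /mnt/data/shared-files /home/e-smith/files/ibays/shared/files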

Offline kruhm

  • *
  • 680
  • +0/-0
Add a hard drive
« Reply #1 on: January 07, 2007, 07:00:47 AM »
Put all your files on one HD (don't add the 2nd HD just for files).

Offline CharlieBrady

  • *
  • 6,918
  • +3/-0
Re: Add a hard drive
« Reply #2 on: January 07, 2007, 07:16:38 AM »
Quote from: "co997"
I am deeply entrenched in the Windows file structure and am trying to understand how to make additional drives available for use with SME Server 7.1.


If you have a second drive, we'd advise you to use it as a mirror of the first drive, to improve reliability. Using an additional drive as part of a mirror is a built-in feature.
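For anyone curious what that involves under the hood, the manual equivalent would look roughly like this (the built-in method handles this for you; the device names assume /dev/sda is the existing disk and /dev/sdb is the new one):

Code: [Select]
# copy the partition table from the existing disk to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb

# add the new partitions to the existing RAID1 arrays
mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdb2

# watch the arrays resynchronise
cat /proc/mdstat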

Offline harshl

  • **
  • 32
  • +0/-0
Add a hard drive
« Reply #3 on: January 08, 2007, 02:29:38 AM »
I am having trouble getting a second HDD mounted in my server. I ran through the steps found at http://mirror.contribs.org/smeserver/contribs//mblotwijk/HowToGuides/AddExtraHardDisk.htm in a VM test environment on SME7pre3 and everything worked great. However, now I am trying to do it in production on SME 7.1 (upgraded from 7.0) and I receive the following error:

mount: /dev/sdb1 already mounted or /mnt/backup busy.

Everything in the document works fine up to the point where I try to mount the disk.

Running mount shows the following:

/dev/mapper/main-root on / type ext3 (rw,usrquota,grpquota)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/md1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

It is clearly not mounted.

Running fuser -m on /dev/sdb1 and /dev/sdb2 (both valid partitions with new ext3 file systems on them) returns nothing (just a new command line).

Running mdadm -Q /dev/sdb /dev/sdb1 /dev/sdb2 returns
/dev/sdb: is not an md array
/dev/sdb: No md super block found, not an md component.
for each device.

I have never worked with Linux software RAID before, but my suspicion is that it is taking control of the drive somehow.

My system uses the nv_raid SATA controller with a Seagate 7200.10 320 GB as the main system drive, and I am attempting to add another Seagate 7200.10 500 GB drive on the same controller.

fdisk -l returns:

Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14       38913   312464250   fd  Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       36474   292977373+  83  Linux
/dev/sdb2           36475       60801   195406627+  83  Linux

Disk /dev/md2: 319.9 GB, 319963267072 bytes
2 heads, 4 sectors/track, 78116032 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md1: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

I have seen mention of the following command being put in /etc/rc.local, but since I have little to no software RAID experience I do not know if this is something I should even try, even as a short-term solution:

dmsetup remove_all; sleep 10; mount /dev/hde1

Any ideas or suggestions would be appreciated.

Thanks,
-Landon

Offline idp_qbn

  • *****
  • 347
  • +0/-0
Add a hard drive
« Reply #4 on: January 09, 2007, 07:28:12 AM »
I am not sure about all this, but it sounds like your system is trying to do BOTH software RAID and hardware RAID: a no-no! Use one or the other - this means you have to either a) disable software RAID (somehow) or b) disable the hardware RAID (somehow).
An important note about software RAID is that both disks must be the same size. I think if they are not (as in your case) the system regards the second disk as being the same size as the first (i.e. 320 GB instead of 500 GB) - but then again...

Good luck
Ian
___________________
Sydney, NSW, Australia

Offline harshl

  • **
  • 32
  • +0/-0
Add a hard drive
« Reply #5 on: January 09, 2007, 07:28:37 PM »
I am 100% sure that hardware RAID is turned off in the BIOS, so that is not the case. However, I agree that the software RAID is trying to take control somehow. Is there any way to specifically tell the software to leave this drive alone?
Most posts I have read online regarding CentOS and other distros say that as soon as they removed the software RAID RPMs from the system they were able to mount the disk just fine.
I don't think that would be a good idea on an SME server; I am afraid it would leave the system unbootable.
Any other ideas? Does anyone know how to exclude a disk from the RAID software? I realize it is not part of an array now, but it does seem to be locked by something; perhaps the RAID software thinks it owns it?

Any help is appreciated, as I am without backups until it is mounted.

Thanks,
-Landon

Offline harshl

  • **
  • 32
  • +0/-0
Add a hard drive
« Reply #6 on: January 13, 2007, 08:07:53 AM »
I have found the following.

If I run dmraid -r, it returns exactly what I had suspected:
/dev/sdb: pdc, "pdc_eccajicj", stripe, ok, 976772992 sectors, data@ 0
Some RAID software is trying to own the device.
Is dmraid part of LVM, or can it be removed? If it is required, how do I make it let go of this device?

Thanks.
-Landon

Offline Gaston94

  • *****
  • 184
  • +0/-0
Add a hard drive
« Reply #7 on: January 13, 2007, 10:30:50 PM »
Hi,
it's quite difficult to understand what you have done and what your problem is :?

On a standard SME7 installation there are two software RAID arrays defined: /dev/md1 and /dev/md2. The partitions are defined on these devices, either within an LVM configuration (the "main" VG) or not (/boot).
You cannot deactivate the RAID metadevices on the fly without losing things.

I do not know anything about "dmraid"; SME7 uses the mdadm tools ... and I do not see why you are looking at RAID stuff in your case ...

If you suspect that your second disk's partitions have joined one or more of the md arrays, you'd better run cat /proc/mdstat to confirm.
Furthermore, your command is wrong: you have to mark the disk as faulty before you can remove it from the array.
If you want more information about your RAID devices, use commands like:
Code: [Select]
# cat /proc/mdstat
# mdadm --detail /dev/md[1-2]

If your disk is not referenced in the RAID definition and you have no entry for it in your fstab, try rebooting your system and report the detailed status.
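For what it's worth, a plain (non-RAID) data disk would normally get an fstab line something like this (device and mount point are only examples):

Code: [Select]
# /etc/fstab -- example entry for a second, non-RAID data disk
/dev/sdb1    /mnt/backup    ext3    defaults    1 2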

G.

Offline harshl

  • **
  • 32
  • +0/-0
Add a hard drive
« Reply #8 on: January 13, 2007, 11:53:59 PM »
In that case, let me try to make it clear what I have done and what my problem is.
I did a clean install of SME 7.0 on a new server with one hard drive installed. I then upgraded via the server-manager web console to SME 7.1. After that I added a second hard drive for backups, following the documentation URL I posted above.
Everything in that documentation went fine until it came time to mount the drive. At that point I received the error listed above:

mount: /dev/sdb1 already mounted or /mnt/backup busy.

I was not in /mnt or /mnt/backup when trying to run this command, and as you can see from my previous posts it was not already mounted and is not in use.
This led me to believe that some software, namely some RAID software, was trying to control the disk. My suspicions were confirmed by the command dmraid -r, which shows that /dev/sdb (the disk I am working with) is being used, or at least dmraid reports that it is. I did nothing to add it to dmraid; I simply put it into the system.
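In case it helps anyone checking the same thing: my understanding is that when dmraid builds a device-mapper mapping over a disk, device-mapper holds the underlying device open, which would explain why mount reports it as busy. The mappings can be inspected like this (the output will obviously differ per system):

Code: [Select]
dmraid -r            # which raw disks dmraid thinks belong to a fakeraid set
dmsetup ls           # device-mapper devices currently defined
dmsetup table        # the mappings behind each of those devices
ls -l /dev/mapper/   # the corresponding device nodes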

In response to some of your comments:
I do not wish to deactivate the RAID that the system is mounted on; it is running fine. I want to be able to mount my new backup drive and start doing backups, but something on the system will not allow me to do that.

Here is the output of the commands you recommended I run:
cat /proc/mdstat

Personalities : [raid1]
md1 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]
     
md2 : active raid1 sda2[0]
      312464128 blocks [2/1] [U_]
     
unused devices: <none>

As you can see sdb is not being used here. But dmraid does appear to be trying to use it.

mdadm --detail /dev/md1

/dev/md1:
        Version : 00.90.01
  Creation Time : Wed Dec 27 15:44:08 2006
     Raid Level : raid1
     Array Size : 104320 (101.88 MiB 106.82 MB)
    Device Size : 104320 (101.88 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Jan  8 01:12:41 2007
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0


    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       0        0       -1      removed
           UUID : f6e8afa8:ed3e2a22:7a111382:861c6592
         Events : 0.1299

mdadm --detail /dev/md2

/dev/md2:
        Version : 00.90.01
  Creation Time : Wed Dec 27 15:42:02 2006
     Raid Level : raid1
     Array Size : 312464128 (297.99 GiB 319.96 GB)
    Device Size : 312464128 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sat Jan 13 15:48:06 2007
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0


    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0       -1      removed
           UUID : 8d2f5716:67e355fd:555ee830:25531914
         Events : 0.778168

This tells me again that it is only using one disk, so again the only other thing on the system that I have found referencing sdb (the new disk) is dmraid.
I don't know what I need to do; I have exhausted my own resources.

Anyone who knows something about dmraid, please comment. Is it possible to force dmraid to let go of sdb? I have tried several commands from the man pages and none of them have helped.
Anyone who knows SME's software RAID, please tell me whether dmraid is used or not.

Thank you all for your help, this is a great community.
-Landon

Offline Gaston94

  • *****
  • 184
  • +0/-0
Add a hard drive
« Reply #9 on: January 14, 2007, 12:26:01 AM »
I remember encountering this error, "mount: /dev/xx already mounted or /mnt/zzz busy.", at a time and in a situation where I shouldn't have.
I do not remember how I got rid of it, but most likely it was a reboot or a look at the lsof output.

G.

Offline harshl

  • **
  • 32
  • +0/-0
Add a hard drive
« Reply #10 on: January 14, 2007, 11:05:09 AM »
(Solved)

My particular issue was just as I suspected: software RAID had taken ownership of the drive.
dmraid is set to autodetect disks at boot time; it found my backup drive and decided to use it.
Running the following commands allowed me to get back control of the drive:
dmraid -an (deactivate all dmraid RAID sets)
dmraid -E -r /dev/sdb (erase the dmraid metadata from the drive)

Once I cleared the metadata, no drives were listed under dmraid any more and I was able to mount the volume.
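To double-check after clearing the metadata, something along these lines should confirm dmraid has let go before you remount (the mount point is whatever you use; /mnt/backup in my case):

Code: [Select]
dmraid -r               # should no longer list /dev/sdb
dmsetup ls              # no leftover mapping for the disk
mount /dev/sdb1 /mnt/backup
df -h /mnt/backup       # confirm it is mounted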

Hope this helps someone in the future; it was a painful experience.
-Landon

Offline sugarcube

  • *
  • 10
  • +0/-0
Re: Add a hard drive
« Reply #11 on: November 21, 2010, 01:42:16 PM »
Thanks harshl,

I ran into the very same problem and your solution helped a lot. I had fixed it by mounting the drives with the -f option, but that did not seem to be a perfect solution, as I kept getting unusual messages when unmounting the disks.
BTW - I am running SME 7.5 on a newly installed machine.

Regards

T.