Old 06-20-2006, 12:29 AM   #1
Jeiku
Member
 
Registered: Jul 2005
Posts: 64

Rep: Reputation: 18
Mirroring drives without RAID on Slackware 10.2


Hi all,

I have the following setup:

SCSI controller with 2x70GB hard drives attached.

I want these drives to act as if they were attached to a RAID controller. Is there some software in Linux that will do this?
I want the drives to be mirrored continuously, so rsync is out of the question.

Thanks for any info!

Jake
 
Old 06-20-2006, 01:10 AM   #2
Yalla-One
Member
 
Registered: Oct 2004
Location: Norway
Distribution: Slackware, CentOS
Posts: 641

Rep: Reputation: 36
Have you looked into the Logical Volume Manager (LVM)?
 
Old 06-20-2006, 02:57 AM   #3
Jeiku
Member
 
Registered: Jul 2005
Posts: 64

Original Poster
Rep: Reputation: 18
Hi,

No I haven't. Have you managed to do this with LVM?
 
Old 06-21-2006, 05:19 AM   #4
redcane
Member
 
Registered: Aug 2003
Posts: 31

Rep: Reputation: 15
LVM can do mirroring. Otherwise you can use the Linux Software RAID system.
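For what it's worth, here is a rough sketch of both routes (the device names sda1/sdb1 and the volume group name vg0 are just examples, not anything from this thread):

Code:
# Software RAID-1 with mdadm: mirror two partitions into /dev/md0.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs -t ext3 /dev/md0     # put a filesystem on the mirror
cat /proc/mdstat          # watch the initial sync

# LVM alternative, assuming both disks are already PVs in volume group vg0:
# lvcreate -m 1 -L 60G -n mirrored vg0
# (depending on the LVM2 version, the mirror may also need a separate log
# device or a core-log option)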
 
Old 06-22-2006, 03:08 AM   #5
Jeiku
Member
 
Registered: Jul 2005
Posts: 64

Original Poster
Rep: Reputation: 18
Thanks for the replies =]
 
Old 06-23-2006, 08:47 AM   #6
Slim Backwater
Member
 
Registered: Nov 2005
Distribution: Slackware 10.2 2.6.20
Posts: 68

Rep: Reputation: 15
Quote:
Originally Posted by redcane
LVM can do mirroring. Otherwise you can use the Linux Software RAID system.
Wait, LVM can do mirroring? I have LVM running on software RAID; is that what you mean? I created a mirror (md0) with mdadm, then made a physical volume on the md0 device. (From that PV I created a volume group, and then logical volumes.) If LVM can do mirroring itself, is it 'better' than LVM on RAID?

As for the OP, check out software RAID (mdadm and friends). For your reference, here are my notes from my last install. Some of it won't apply (like unmounting /pub, my own NFS mount) or won't make sense, and I only wrote these notes down; I haven't followed them yet to catch errors, but maybe they will help.

I was using IDE drives, hence the references to hda and hdb; since you are using SCSI drives, yours will probably be sda and sdb. The most annoying things are the old version of LILO in Slackware, which doesn't support booting from a degraded RAID-1, and the default initrd, which is missing the md entries in /dev.

This will cover creating a software RAID-1 environment to run Slackware on. The two devices are hda and hdb. They do not need to be identical; only the portions that will be mirrored need to match.
  1. Install Slackware onto hda. You should probably apply patches, compile a new kernel, etc.
    My custom-compiled kernel was installed as /boot/vmlinuz-20060606-2.6.16.20 and had everything I needed to boot built in (SATA drivers, reiserfs, md-mod, dm-mod, raid1).
  2. Install the LVM2 tools from Disc 2
  3. You will probably need to modprobe dm-mod
  4. Create three partitions on hdb (you may have to reboot after creating them):
    • hdb1 128MB, type FD (for the /boot mirror)
    • hdb2 2048MB, type FD (for the swap mirror)
    • hdb3 rest of the disk, type FD (for the LVM PV mirror)
  5. Create the degraded RAID-1 arrays (a quick check of the result is sketched after step 8)
    • mdadm --create --level=raid1 --raid-devices=2 /dev/md0 missing /dev/hdb1
    • mdadm --create --level=raid1 --raid-devices=2 /dev/md1 missing /dev/hdb2
    • mdadm --create --level=raid1 --raid-devices=2 /dev/md2 missing /dev/hdb3
  6. Make filesystems on the RAID devices and create the LVM volumes (the partitions were created as type FD in step 4 so the arrays should auto-assemble at boot; otherwise there are notes about having to run raidstart --all)
    • mkfs -t ext2 /dev/md0
    • mkswap /dev/md1
    • pvcreate /dev/md2
    • vgcreate main /dev/md2
    • lvcreate main --size 256M --name root
    • lvcreate main --size 1024M --name opt
    • lvcreate main --size 256M --name tmp
    • lvcreate main --size 4096M --name usr
    • lvcreate main --size 256M --name var
    • lvcreate main --size 4096M --name home
    • mkreiserfs -q /dev/main/root
    • mkreiserfs -q /dev/main/opt
    • mkreiserfs -q /dev/main/tmp
    • mkreiserfs -q /dev/main/usr
    • mkreiserfs -q /dev/main/var
    • mkreiserfs -q /dev/main/home
  7. Mount all those partitions under /mnt/hd
    • mount /dev/main/root /mnt/hd
    • mkdir /mnt/hd/opt
    • mkdir /mnt/hd/tmp
    • mkdir /mnt/hd/usr
    • mkdir /mnt/hd/var
    • mkdir /mnt/hd/home
    • mount /dev/main/opt /mnt/hd/opt
    • mount /dev/main/tmp /mnt/hd/tmp
    • mount /dev/main/usr /mnt/hd/usr
    • mount /dev/main/var /mnt/hd/var
    • mount /dev/main/home /mnt/hd/home
  8. Create a mountpoint for /boot, and mount it
    • mkdir /mnt/hd/boot
    • mount /dev/md0 /mnt/hd/boot
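(This is the quick check mentioned in step 5.) Before making the initrd, it doesn't hurt to confirm the degraded arrays and the LVM stack look sane; this is a generic check, not something from my original notes:

Code:
# Each array should show one active member and one missing/removed slot.
cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md2

# The PV, the volume group and all the logical volumes should be visible.
pvdisplay /dev/md2
vgdisplay main
lvdisplay main

# And everything should be mounted under /mnt/hd.
df -h /mnt/hd /mnt/hd/boot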
Make an initrd
  1. If using a generic kernel
    • mkinitrd -c -k 2.6.16.18 -m libata:reiserfs:sata_nv:sata_sil:md-mod:dm-mod:raid1
  2. Or with a custom Kernel
    • mkinitrd -c -k 2.6.16.20
Edit the initrd
  1. cd /boot/initrd-tree
  2. rm dev/md[0-4]
  3. mknod --mode=0660 dev/md0 b 9 0
  4. mknod --mode=0660 dev/md1 b 9 1
  5. mknod --mode=0660 dev/md2 b 9 2
  6. mknod --mode=0660 dev/md3 b 9 3
  7. mknod --mode=0660 dev/md4 b 9 4
  8. mkdir sbin
  9. cp /sbin/lvm.static sbin/lvm (copy it in under the name lvm so the symlinks below resolve)
Code:
# Replace each LVM tool name in the initrd's sbin with a symlink to lvm.
( cd sbin
  for tool in lvs pvs vgs vgck lvdisplay pvchange pvcreate vgmknodes pvremove \
              pvresize lvmchange lvmsadc vgcfgrestore lvmdiskscan pvdisplay lvmsar \
              lvscan vgchange vgcreate pvmove pvscan vgexport vgextend vgimport \
              vgscan vgmerge vgsplit lvchange lvcreate lvextend vgreduce vgrename \
              vgremove vgconvert vgcfgbackup lvreduce lvrename lvremove lvresize \
              vgdisplay
  do
    rm -rf "$tool"
    ln -sf lvm "$tool"
  done )
Update the initrd.gz
  1. cd /boot
  2. mkinitrd
Edit lilo.conf and add a new entry to boot from the RAID devices
  • Don't change the boot= line yet; wait until you have booted off the mirror
Code:
vi /etc/lilo.conf
image = /boot/vmlinuz-20060606-2.6.16.20
  root = /dev/main/root
  initrd = /boot/initrd.gz
  label = nRAID-2.6.16.18
  read-only
Copy everything over to the new home
    • umount /pub
    • rsync -av --exclude /mnt/hd --exclude /proc --exclude /sys / /mnt/hd/
      • or cp -avx / /mnt/hd
      • cp -avx /boot /mnt/hd/
Edit new fstab
  1. vi /mnt/hd/etc/fstab
Code:
/dev/md1         swap             swap        defaults         0   0
/dev/main/root   /                reiserfs    defaults         0   0
/dev/md0         /boot            ext2        defaults         1   1
/dev/cdrom       /mnt/cdrom       auto        noauto,owner,ro  0   0
/dev/fd0         /mnt/floppy      auto        noauto,owner     0   0
devpts           /dev/pts         devpts      gid=5,mode=620   0   0
proc             /proc            proc        defaults         0   0
/dev/main/opt    /opt             reiserfs    defaults         0   0
/dev/main/tmp    /tmp             reiserfs    defaults         0   0
/dev/main/usr    /usr             reiserfs    defaults         0   0
/dev/main/var    /var             reiserfs    defaults         0   0
/dev/main/home   /home            reiserfs    defaults         0   0
scrambled:/home/brian /pub        nfs         defaults 0    0
Reboot and see if it boots off the RAID!
If it boots, update the boot= line in lilo.conf
  1. vi /etc/lilo.conf
    • boot=/dev/md0
    • raid-extra-boot=mbr
  2. delete stuff from /boot that is no longer relevant (old kernels, old initrds, etc.)
Hard luck: LILO 22.5.9 (with Slackware 10.2 and -current as of 2006-06-06) can't handle the degraded RAID-1. Download, compile, and install LILO 22.7.1 (at least).
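If you are not sure which LILO you have, lilo -V prints the version. The build below is only a generic sketch; the tarball name is a placeholder and a plain make-based build is assumed:

Code:
lilo -V                     # check the installed version first
# generic source build (tarball name/location is a placeholder)
tar xzf lilo-22.7.1.tar.gz
cd lilo-22.7.1
make
make install                # replaces the stock /sbin/lilo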
Run lilo
  1. lilo
  2. reboot and try to boot off the second disk.
Create mirror partitions
  1. Wipe out the partition table (and a bit beyond it) on the original drive
    • dd if=/dev/zero of=/dev/sda bs=512 count=2000 # BE VERY CAREFUL!
  2. Reboot
  3. Duplicate the partition table from the second drive to the first drive
    • sfdisk -d /dev/sdb | sfdisk /dev/sda # BE VERY CAREFUL!
  4. Reboot
  5. Add the new partitions to the correct RAID array (the resync can be watched; see the sketch after step 6)
    • mdadm --detail /dev/md0
    • mdadm /dev/md0 --add /dev/sda1
    • mdadm --detail /dev/md1
    • mdadm /dev/md1 --add /dev/sda2
    • mdadm --detail /dev/md2
    • mdadm /dev/md2 --add /dev/sda3
  6. Run lilo and check that both MBRs get updated (sda and sdb)
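(This is the resync sketch mentioned in step 5.) Once the partitions are added back, the kernel rebuilds each mirror in the background; a generic way to keep an eye on it:

Code:
# /proc/mdstat shows progress and an ETA for each array being rebuilt.
cat /proc/mdstat

# Or re-read it every couple of seconds until everything reports clean.
watch -n 2 cat /proc/mdstat

# Per-array detail: two active devices and no 'degraded' in the state
# means that mirror is fully rebuilt.
mdadm --detail /dev/md2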

HTH.
 
Old 01-23-2008, 04:24 PM   #7
BananaRepublic
LQ Newbie
 
Registered: Jan 2008
Posts: 4

Rep: Reputation: 0
I'm a bit lost trying to figure out what I really need to do in my situation.

I already created a RAID-1 array for my boot partition, and it seems to boot from /dev/md0 as expected via the root= line.

However, lilo.conf has this line:

Code:
boot= /dev/sda
This isn't really what I want, since if this particular hard drive fails, the system will not boot at all.

I tried to add this:

Code:
raid-extra-boot="/dev/sdb,/dev/sdc,/dev/sdd"
but this didn't make any difference.

Also, this won't work:
Code:
boot= /dev/md0
I do not have an initrd, and I am not sure whether one is required in my case.

What should I look at to get LILO to boot entirely from the RAID-1 array?

Last edited by BananaRepublic; 01-23-2008 at 05:04 PM.
 
Old 01-23-2008, 04:58 PM   #8
mRgOBLIN
Slackware Contributor
 
Registered: Jun 2002
Location: New Zealand
Distribution: Slackware
Posts: 999

Rep: Reputation: 231
Did you run lilo after making the changes to lilo.conf?

You might find some useful info here http://www.userlocal.com/articles/ra...ackware-12.php
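lilo.conf is only read when the map installer runs, so the cycle is always edit, then re-run lilo; a trivial sketch:

Code:
vi /etc/lilo.conf     # make the changes (boot=, raid-extra-boot=, ...)
lilo                  # nothing takes effect until lilo is re-run
lilo -v               # -v shows more detail about what gets written where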
 
Old 01-23-2008, 05:42 PM   #9
BananaRepublic
LQ Newbie
 
Registered: Jan 2008
Posts: 4

Rep: Reputation: 0
Quote:
Originally Posted by mRgOBLIN View Post
Did you run lilo after making the changes to lilo.conf?

You might find some useful info here http://www.userlocal.com/articles/ra...ackware-12.php
Excellent, that got everything working correctly. I didn't realize I had to run lilo whenever I edit lilo.conf.

Now mdadm --detail indicates that the array is still degraded, even though all the drives are plugged back in. Am I supposed to do something to get it rebuilt?


EDIT: I was under the mistaken impression that mdadm would automatically detect the drives that were plugged back in after the test and rebuild the array. Apparently this is not the case. To rebuild the array:

Code:
mdadm --manage --add /dev/md[X] /dev/[X]
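A concrete (hypothetical) example of the same thing, with made-up device names, plus recording the arrays so they assemble consistently:

Code:
# Re-add the partition that dropped out of the mirror.
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the rebuild.
cat /proc/mdstat

# Optionally record the arrays in mdadm.conf.
mdadm --detail --scan >> /etc/mdadm.conf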

Last edited by BananaRepublic; 01-24-2008 at 12:28 PM.
 
  

