LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Software (http://www.linuxquestions.org/questions/linux-software-2/)
-   -   Trouble booting a degraded RAID-1 array (http://www.linuxquestions.org/questions/linux-software-2/trouble-booting-a-degraded-raid-1-array-477904/)

aluchko 08-27-2006 08:03 PM

Trouble booting a degraded RAID-1 array
 
I've been attempting to migrate my running server to RAID-1 using a second hard drive identical to the current one. The general approach I've used is to partition the new drive to mirror the old one, set the new drive up as a degraded RAID-1 array using mdadm, rsync the contents of the old drive onto it, then add the old drive to the new array once everything is working.
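Roughly, the steps I followed look like this (a sketch of my procedure, not exact history; mkfs/mount steps omitted, and the "missing" keyword is what creates the degraded array):

```shell
# Copy the partition table from the old disk (hda) to the new one (hdb),
# then set the new partitions to type fd (Linux raid autodetect) with fdisk.
sfdisk -d /dev/hda | sfdisk /dev/hdb

# Create degraded RAID-1 arrays on the new disk only; "missing"
# reserves the second slot for the old disk to be added later.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb1 missing
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb2 missing

# After creating filesystems and mounting the new root, copy the
# running system over, staying on one filesystem (-x).
rsync -avx / /mnt/newroot/

# Much later, once the new array boots cleanly, add the old disk's
# partition so the mirror rebuilds:
mdadm /dev/md0 --add /dev/hda2
```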

fdisk -l reports
Code:

Disk /dev/hda: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start        End      Blocks  Id  System
/dev/hda1  *          1          13      104391  fd  Linux raid autodetect
/dev/hda2              14      30515  245007315  8e  Linux LVM

Disk /dev/hdb: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start        End      Blocks  Id  System
/dev/hdb1  *          1          13      104391  fd  Linux raid autodetect
/dev/hdb2              14      30515  245007315  fd  Linux raid autodetect

The old disk is /dev/hda and as can be seen I've successfully migrated over the boot partition to run on a RAID device /dev/md1 (though I still boot off hda1 instead of md1).

On /dev/md0 I've created a volume group lvm-raid with two logical volumes, lvm0 (the root filesystem) and lvm1 (swap; yeah, RAID-0 would be better for that).
Code:

# lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [232.12 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [1.50 GB] inherit
  ACTIVE            '/dev/lvm-raid/lvm0' [232.12 GB] inherit
  ACTIVE            '/dev/lvm-raid/lvm1' [1.50 GB] inherit
# pvdisplay /dev/md0
  --- Physical volume ---
  PV Name              /dev/md0
  VG Name              lvm-raid
  PV Size              233.66 GB / not usable 0
  Allocatable          yes
  PE Size (KByte)      4096
  Total PE              59816
  Free PE              9
  Allocated PE          59807
  PV UUID              9e8Ixn-xC4P-27qH-YrSP-KO8W-lfli-doMB34

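For reference, this is roughly how the LVM layer was built on top of the array (a sketch reconstructed from the lvscan/pvdisplay output above; the exact sizes I passed to lvcreate may have differed):

```shell
# Make the degraded array an LVM physical volume, then build
# the volume group and the two logical volumes on it.
pvcreate /dev/md0
vgcreate lvm-raid /dev/md0

lvcreate -L 232.12G -n lvm0 lvm-raid   # root filesystem
lvcreate -L 1.5G    -n lvm1 lvm-raid   # swap

mkfs.ext3 /dev/lvm-raid/lvm0
mkswap /dev/lvm-raid/lvm1
```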
But whenever I try to boot lvm0 using the following lines in grub
Code:

title Fedora Core raid (2.6.17-1.2174_FC5)
        root (hd0,0)
        kernel /vmlinuz-2.6.17-1.2174_FC5 ro root=/dev/lvm-raid/lvm0 rhgb quiet
        initrd /initrd-2.6.17-1.2174_FC5.img

I get the following error during boot
Code:

Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
2 logical volume(s) in volume group "VolGroup00" now active
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!

It seems to me that the lvm-raid volume group isn't being detected at startup — the initrd only activates VolGroup00.

I've tried rebuilding the boot image with
Code:

mkinitrd -v --with=raid1 /boot/initrd-2.6.17-1.2174_FC5.img 2.6.17-1.2174_FC5
as well as setting /dev/lvm-raid/lvm0 as / in /etc/fstab, but it doesn't seem to make any difference.
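One way to check whether the rebuilt image actually picked up the raid1 module and the lvm-raid activation (a diagnostic sketch — on FC5 the initrd is a gzipped cpio archive, though exact paths inside it may vary):

```shell
# Unpack the initrd into a scratch directory and look inside.
mkdir /tmp/initrd-check && cd /tmp/initrd-check
zcat /boot/initrd-2.6.17-1.2174_FC5.img | cpio -idmv

# The raid1 module should have been copied in by --with=raid1 ...
ls lib/raid1.ko

# ... and the init script should assemble the array and activate
# the lvm-raid VG (not just VolGroup00) before switching root.
grep -E 'raid|lvm|md' init
```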

Does anyone know how I can successfully boot /dev/lvm-raid/lvm0 ?

thanks

ramram29 08-28-2006 03:16 PM

Reboot with a rescue CD then see if you can mount it manually.

After reboot type:

modprobe raid1
vgchange -ay
mkdir /mnt/tmp
mount /dev/lvm-raid/lvm0 /mnt/tmp

Try mounting the other lv.

Do this with each drive.

Do it with the RAID synchronized.
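One note on the steps above: from a rescue CD the md array usually has to be assembled before LVM can see the lvm-raid VG at all. A sketch, using the device names from the original post and assuming the rescue kernel has md support:

```shell
# Load RAID-1 support and assemble the degraded array from its
# single member; --run starts it despite the missing mirror.
modprobe raid1
mdadm --assemble --run /dev/md0 /dev/hdb2

# Only now can LVM find the PV on /dev/md0.
vgscan
vgchange -ay lvm-raid
mkdir -p /mnt/tmp
mount /dev/lvm-raid/lvm0 /mnt/tmp
```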

aluchko 08-28-2006 11:46 PM

Thanks for the reply,
Quote:

Originally Posted by ramram29
Reboot with a rescue CD then see if you can mount it manually.

After reboot type:

modprobe raid1

could not parse modules.dep (this happened whenever I ran modprobe)
Quote:

vgchange -ay
Code:

# vgchange -ay
2 logical volume(s) in volume group "VolGroup00" now active

Quote:

mkdir /mnt/tmp
mount /dev/lvm-raid/lvm0 /mnt/tmp
/dev/lvm-raid didn't exist, though I was able to mount /dev/md1 (the boot partition) successfully. /dev/VolGroup00 existed and mounted fine as well (expected, since the old disk still boots fine).
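That symptom usually means /dev/md0 was never assembled, so LVM had no physical volume to scan. A quick way to confirm (a diagnostic sketch):

```shell
# If md0 doesn't appear here, the array was never assembled.
cat /proc/mdstat

# Check the RAID superblock on the member partition directly.
mdadm --examine /dev/hdb2

# Assemble it by hand (--run starts it degraded) and rescan:
mdadm --assemble --run /dev/md0 /dev/hdb2
vgscan && vgchange -ay
```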

aluchko 09-09-2006 10:26 PM

Just to let everyone know, I've reposted the question at FedoraForum.

I'll update this thread as well if I find a solution.

