LinuxQuestions.org
Old 03-07-2011, 10:23 AM   #1
cdriba
LQ Newbie
 
Registered: Mar 2011
Posts: 1

Rep: Reputation: 0
How to migrate an existing RHEL5 system to RAID1 (problem with '/dev/root')


I am trying to migrate my existing single-IDE-disk system (tools already installed) to software RAID1 with a second IDE disk, without losing data and without having to reinstall everything.

I followed instructions pieced together from several forums, but I always get a kernel panic at the end of boot.
What I did:

The system is going down for system halt NOW!
login as: root
root's password:
/usr/bin/xauth: creating new authority file /root/.Xauthority
[root ~]# df -k .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda1 15235040 6969416 7479244 49% /


[root ~]# sfdisk -l

Disk /dev/hda: 155061 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/255/63 (instead of 155061/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/hda1 * 0+ 1957 1958- 15727603+ 83 Linux
/dev/hda2 1958 2479 522 4192965 82 Linux swap / Solaris
/dev/hda3 2480 9728 7249 58227592+ 83 Linux
/dev/hda4 0 - 0 0 0 Empty

[root ~]# umount /local
[root ~]# swapoff -a
[root ~]# vi /etc/mdadm.conf

DEVICE /dev/hd[ab][123]
ARRAY /dev/md0 devices=/dev/hda1,/dev/hdb1
ARRAY /dev/md1 devices=/dev/hda2,/dev/hdb2
ARRAY /dev/md2 devices=/dev/hda3,/dev/hdb3
~

[root ~]# vi /boot/grub/device.map

# this device map was generated by anaconda
(hd0) /dev/hdb
(hd1) /dev/hda

[root ~]# vi /boot/grub/grub.conf
[root ~]#
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You do not have a /boot partition. This means that
# all kernel and initrd paths are relative to /, eg.
# root (hd0,0)
# kernel /boot/vmlinuz-version ro root=/dev/hdb1
# initrd /boot/initrd-version.img
#boot=/dev/hdb
default=1
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title RAID Scientific Linux SL (2.6.18-194.26.1.el5PAE)
root (hd1,0)
kernel /boot/vmlinuz-2.6.18-194.26.1.el5PAE ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
title RAID Scientific Linux SL (2.6.18-194.26.1.el5)
root (hd1,0)
kernel /boot/vmlinuz-2.6.18-194.26.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.26.1.el5.img
title RAID Scientific Linux (2.6.18-194.3.1.el5)
root (hd1,0)
kernel /boot/vmlinuz-2.6.18-194.3.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.3.1.el5.img
title NON-RAID Scientific Linux SL (2.6.18-194.26.1.el5PAE)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-194.26.1.el5PAE ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
title NON-RAID Scientific Linux SL (2.6.18-194.26.1.el5)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-194.26.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.26.1.el5.img
title NON-RAID Scientific Linux (2.6.18-194.3.1.el5)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-194.3.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.3.1.el5.img
~

[root ~]# df -k .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hdb1 15235040 7036288 7412372 49% /
[root ~]# mdadm -C /dev/md0 --level=raid1 --raid-devices=2 --force missing /dev/hda1
mdadm: /dev/hda1 appears to contain an ext2fs file system
size=15727600K mtime=Thu Mar 3 13:56:35 2011
Continue creating array? y
mdadm: array /dev/md0 started.
[root ~]# mdadm -C /dev/md2 --level=raid1 --raid-devices=2 --force missing /dev/hda3
mdadm: /dev/hda3 appears to contain an ext2fs file system
size=58227592K mtime=Thu Mar 3 13:56:35 2011
Continue creating array? y
mdadm: array /dev/md2 started.
[root ~]# mdadm -C /dev/md1 --level=raid1 --raid-devices=2 --force missing /dev/hda2
mdadm: /dev/hda2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Wed Mar 2 17:11:39 2011
Continue creating array? y
mdadm: array /dev/md1 started.

[root ~]# mkinitrd --preload=raid1 --with=raid1 --builtin=raid1 --force-scsi-probe --force-raid-probe /boot/initrd-`uname -r`-raid.img `uname -r`
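A side note on the backquotes in that command: `uname -r` has to be real command substitution. If plain single quotes are typed instead, mkinitrd receives the literal string "uname -r" and builds an image with that string in its name, which no grub.conf entry will ever match. A quick sketch of the difference (the initrd path is only illustrative):

```shell
# Single quotes are literal; backquotes / $(...) run the command.
literal='uname -r'            # the two words, verbatim
expanded=$(uname -r)          # the running kernel release, e.g. 2.6.18-194.26.1.el5PAE

echo "literal:  /boot/initrd-${literal}-raid.img"
echo "expanded: /boot/initrd-${expanded}-raid.img"
```

In scripts, `$(...)` is generally preferred over backquotes: it nests cleanly and is easier to read.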
[root ~]#

[root ~]# mkswap /dev/md1
Setting up swapspace version 1, size = 4293521 kB

I modified my fstab:
# cat /etc/fstab
# /etc/fstab: static file system information.
#
#
/dev/md0 / ext3 defaults 1 1
/dev/md2 /local ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/md1 swap swap defaults 0 0
test.iso /mnt/iso udf,iso9660 noauto,loop,owner,user,rw 0 0


[root ~]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1966080 inodes, 3931872 blocks
196593 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4026531840
120 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root ~]#
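One detail worth flagging in the mke2fs output above: `Filesystem label=` is empty, yet every kernel line in grub.conf mounts root with `root=LABEL=/`. If the freshly created array filesystem never receives that label, the kernel cannot resolve `LABEL=/` and can fail exactly like this with `/dev/root`. A minimal sketch of setting the label with e2label, demonstrated on a throwaway image file rather than the real array (on the real system the target would be /dev/md0):

```shell
# Scratch ext3 image as a stand-in for /dev/md0 (assumption: grub.conf
# keeps root=LABEL=/ rather than switching to root=/dev/md0).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
mke2fs -q -F -j "$img"        # -j makes it ext3, like the array in the thread
e2label "$img" /              # on the real system: e2label /dev/md0 /
label=$(e2label "$img")       # read the label back
echo "label is now: $label"
```

Alternatively, the label can be set at creation time with `mkfs.ext3 -L / /dev/md0`.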




[root ~]# mkfs.ext3 /dev/md2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
7290880 inodes, 14556880 blocks
727844 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
445 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root ~]#
[root ~]#
[root ~]# umount /local


[root ~]# mkdir /mnt/hdb3
[root ~]# mount /dev/hdb3 /mnt/hdb3
[root ~]# mkdir /mnt/md2
[root ~]# mount /dev/md2 /mnt/md2
[root ~]# cp -dpRx /mnt/hdb3 /mnt/md2

[root ~]# mkdir /mnt/md0
[root ~]# mkdir /mnt/hdb1
[root ~]# mount /dev/md0 /mnt/md0
[root ~]# mount /dev/hdb1 /mnt/hdb1
[root ~]# cp -dpRx /mnt/hdb1/* /mnt/md0



[root ~]# grub
Probing devices to guess BIOS drives. This may take a long time.


GNU GRUB version 0.97 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename.]

grub> device (hd0) /dev/hda

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.
conf"... succeeded
Done.

grub>


Booting the system then fails with a kernel panic:

Booting 'RAID ...'
root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
kernel /boot/vmlinuz...
...
initrd /boot/initrd...

Memory for crash kernel ...
Red Hat nash version ...
...
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!



Please Help...
 
Old 03-09-2011, 02:21 AM   #2
anotherlinuxuser
Member
 
Registered: Jan 2007
Location: Alberta Canada
Distribution: Fedora/Redhat/CentOS
Posts: 64

Rep: Reputation: 19
I see one issue. Your mkinitrd command:
mkinitrd --preload=raid1 --with=raid1 --builtin=raid1 --force-scsi-probe --force-raid-probe /boot/initrd-`uname -r`-raid.img `uname -r`

will create one of these .img files:

/boot/initrd-2.6.18-194.26.1.el5PAE-raid.img
or
/boot/initrd-2.6.18-194.26.1.el5-raid.img
(depending on which kernel you are running when the mkinitrd command is run)

But I don't see either of these names in your grub.conf file.
These are all the initrd lines from your grub.conf:

initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
initrd /boot/initrd-2.6.18-194.26.1.el5.img
initrd /boot/initrd-2.6.18-194.3.1.el5.img
initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
initrd /boot/initrd-2.6.18-194.26.1.el5.img
initrd /boot/initrd-2.6.18-194.3.1.el5.img
It appears you need to modify one of the boot selection entries' initrd line in grub.conf to include '-raid', like:
initrd /boot/initrd-2.6.18-194.26.1.el5PAE-raid.img
or
initrd /boot/initrd-2.6.18-194.26.1.el5-raid.img

And select that entry to boot at the grub boot loader screen.

Otherwise, the RAID parameters from your mkinitrd command will not be used, and your system will still try to mount root (/) from the single IDE disk, not the RAID array.
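One way to make that edit is a sed rewrite of the initrd lines. The fragment below works on a copied sample so nothing real is touched; the `-raid` suffix matches the image name produced by the mkinitrd command above:

```shell
# Sample fragment shaped like the grub.conf entries in the thread.
conf=$(mktemp)
cat > "$conf" <<'EOF'
title RAID Scientific Linux SL (2.6.18-194.26.1.el5PAE)
root (hd1,0)
kernel /boot/vmlinuz-2.6.18-194.26.1.el5PAE ro root=LABEL=/ selinux=0 rhgb quiet
initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
EOF

# Point the initrd line at the "-raid" image built by mkinitrd.
sed -i 's|^initrd \(/boot/initrd-.*\)\.img$|initrd \1-raid.img|' "$conf"
grep '^initrd' "$conf"   # -> initrd /boot/initrd-2.6.18-194.26.1.el5PAE-raid.img
```

Note that run against the full grub.conf this pattern would also rewrite the NON-RAID fallback entries, so in practice only the RAID titles' initrd lines should be edited (or just edit them by hand).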

Hope this helps

Last edited by anotherlinuxuser; 03-09-2011 at 02:34 AM.
 
  

