How to migrate an existing RHEL5 system to RAID1 (problem with '/dev/root')
I am trying to migrate my existing system (one IDE disk, tools already installed) to software RAID1 with a second IDE disk, without losing data and without having to reinstall everything.
I tried to do this using some information found on forums, but I always get a kernel panic at the end of boot.

What I did:

The system is going down for system halt NOW!
login as: root
root's password:
/usr/bin/xauth: creating new authority file /root/.Xauthority

[root ~]# df -k .
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda1             15235040   6969416   7479244  49% /

[root ~]# sfdisk -l
Disk /dev/hda: 155061 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 155061/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *     0+   1957    1958-  15727603+  83  Linux
/dev/hda2      1958    2479     522    4192965   82  Linux swap / Solaris
/dev/hda3      2480    9728    7249   58227592+  83  Linux
/dev/hda4         0       -       0          0    0  Empty

[root ~]# umount /local
[root ~]# swapoff -a

[root ~]# vi /etc/mdadm.conf
DEVICE /dev/hd[ab][123]
ARRAY /dev/md0 devices=/dev/hda1,/dev/hdb1
ARRAY /dev/md1 devices=/dev/hda2,/dev/hdb2
ARRAY /dev/md2 devices=/dev/hda3,/dev/hdb3

[root ~]# vi /boot/grub/device.map
# this device map was generated by anaconda
(hd0) /dev/hdb
(hd1) /dev/hda

[root ~]# vi /boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You do not have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /, eg.
#          root (hd0,0)
#          kernel /boot/vmlinuz-version ro root=/dev/hdb1
#          initrd /boot/initrd-version.img
#boot=/dev/hdb
default=1
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title RAID Scientific Linux SL (2.6.18-194.26.1.el5PAE)
        root (hd1,0)
        kernel /boot/vmlinuz-2.6.18-194.26.1.el5PAE ro root=LABEL=/ selinux=0 rhgb quiet
        initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
title RAID Scientific Linux SL (2.6.18-194.26.1.el5)
        root (hd1,0)
        kernel /boot/vmlinuz-2.6.18-194.26.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
        initrd /boot/initrd-2.6.18-194.26.1.el5.img
title RAID Scientific Linux (2.6.18-194.3.1.el5)
        root (hd1,0)
        kernel /boot/vmlinuz-2.6.18-194.3.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
        initrd /boot/initrd-2.6.18-194.3.1.el5.img
title NON-RAID Scientific Linux SL (2.6.18-194.26.1.el5PAE)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-194.26.1.el5PAE ro root=LABEL=/ selinux=0 rhgb quiet
        initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
title NON-RAID Scientific Linux SL (2.6.18-194.26.1.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-194.26.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
        initrd /boot/initrd-2.6.18-194.26.1.el5.img
title NON-RAID Scientific Linux (2.6.18-194.3.1.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-194.3.1.el5 ro root=LABEL=/ selinux=0 rhgb quiet
        initrd /boot/initrd-2.6.18-194.3.1.el5.img

[root ~]# df -k .
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hdb1             15235040   7036288   7412372  49% /

[root ~]# mdadm -C /dev/md0 --level=raid1 --raid-devices=2 --force missing /dev/hda1
mdadm: /dev/hda1 appears to contain an ext2fs file system
    size=15727600K  mtime=Thu Mar  3 13:56:35 2011
Continue creating array? y
mdadm: array /dev/md0 started.

[root ~]# mdadm -C /dev/md2 --level=raid1 --raid-devices=2 --force missing /dev/hda3
mdadm: /dev/hda3 appears to contain an ext2fs file system
    size=58227592K  mtime=Thu Mar  3 13:56:35 2011
Continue creating array? y
mdadm: array /dev/md2 started.
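Before copying any data onto the new arrays, it is worth confirming that they actually came up. On the live system that is just `cat /proc/mdstat` and `mdadm --detail /dev/md0`; a one-disk mirror built with `missing` should show as degraded, i.e. `[2/1]` with `[_U]` or `[U_]`. The sketch below (my assumption of what such an mdstat line typically looks like, not output from this system) shows how to read that status:

```shell
# A typical /proc/mdstat line for a RAID1 array created with one member
# 'missing' (sample text, not taken from the poster's machine):
mdstat_line='md0 : active raid1 hda1[1] 15727552 blocks [2/1] [_U]'

# [2/2] means both mirror halves are active; [2/1] means one is missing,
# which is the expected (and correct) state at this point in the migration.
state=unknown
if printf '%s' "$mdstat_line" | grep -q '\[2/2\]'; then
    state=clean
elif printf '%s' "$mdstat_line" | grep -q '\[2/1\]'; then
    state=degraded
fi
echo "$state"
```

If the array does not appear in /proc/mdstat at all, there is no point continuing to the mkfs/copy steps.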
[root ~]# mdadm -C /dev/md1 --level=raid1 --raid-devices=2 --force missing /dev/hda2
mdadm: /dev/hda2 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Wed Mar  2 17:11:39 2011
Continue creating array? y
mdadm: array /dev/md1 started.

[root ~]# mkinitrd --preload=raid1 --with=raid1 --builtin=raid1 --force-scsi-probe --force-raid-probe /boot/initrd-`uname -r`-raid.img `uname -r`

[root ~]# mkswap /dev/md1
Setting up swapspace version 1, size = 4293521 kB

I modified my fstab:

[root ~]# cat /etc/fstab
# /etc/fstab: static file system information.
#
/dev/md0        /               ext3    defaults                    1 1
/dev/md2        /local          ext3    defaults                    1 2
tmpfs           /dev/shm        tmpfs   defaults                    0 0
devpts          /dev/pts        devpts  gid=5,mode=620              0 0
sysfs           /sys            sysfs   defaults                    0 0
proc            /proc           proc    defaults                    0 0
/dev/md1        swap            swap    defaults                    0 0
test.iso        /mnt/iso        udf,iso9660 noauto,loop,owner,user,rw 0 0

[root ~]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1966080 inodes, 3931872 blocks
196593 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4026531840
120 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or 180 days, whichever comes first.  Use tune2fs -c or -i to override.
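One detail worth noticing in the transcript above (this is my observation, not something confirmed later in the thread): the grub.conf entries boot with root=LABEL=/, but the fresh mkfs.ext3 run left the new filesystem unlabeled ("Filesystem label=" is empty), so LABEL=/ cannot resolve to /dev/md0. On the real system the fix would be either `e2label /dev/md0 /` or changing the kernel line to name the device directly. The sketch below just demonstrates the kernel-line change against a sample line copied from the posted grub.conf:

```shell
# Sample kernel line from the poster's grub.conf:
line='kernel /boot/vmlinuz-2.6.18-194.26.1.el5PAE ro root=LABEL=/ selinux=0 rhgb quiet'

# Point root= at the md device instead of the (now ambiguous) label.
fixed=$(printf '%s\n' "$line" | sed 's|root=LABEL=/|root=/dev/md0|')
echo "$fixed"
```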
[root ~]# mkfs.ext3 /dev/md2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
7290880 inodes, 14556880 blocks
727844 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
445 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or 180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root ~]# umount /local
[root ~]# mkdir /mnt/hdb3
[root ~]# mount /dev/hdb3 /mnt/hdb3
[root ~]# mkdir /mnt/md2
[root ~]# mount /dev/md2 /mnt/md2
[root ~]# cp -dpRx /mnt/hdb3 /mnt/md2
[root ~]# mkdir /mnt/md0
[root ~]# mkdir /mnt/hdb1
[root ~]# mount /dev/md0 /mnt/md0
[root ~]# mount /dev/hdb1 /mnt/hdb1
[root ~]# cp -dpRx /mnt/hdb1/* /mnt/md0

[root ~]# grub
Probing devices to guess BIOS drives. This may take a long time.

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]

grub> device (hd0) /dev/hda

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.

grub>

I boot the system, and booting the 'RAID ...' entry fails with a kernel panic:
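As an aside on the grub step above: for a mirror that is meant to survive a disk failure, GRUB legacy is normally installed to the MBR of both disks, not just one, so the machine can still boot from whichever drive remains. The transcript only runs setup against /dev/hda. A sketch of the usual two-disk sequence (assembled here as text so the intent is visible; on the real system it would be fed to `grub --batch`):

```shell
# GRUB legacy commands to embed the boot loader on both halves of the
# mirror. Each disk is temporarily mapped as (hd0) so the embedded stage1
# finds its stage2 on that same disk at boot time.
batch='device (hd0) /dev/hda
root (hd0,0)
setup (hd0)
device (hd0) /dev/hdb
root (hd0,0)
setup (hd0)'
printf '%s\n' "$batch"
```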
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
kernel /boot/vmlinuz...
...
initrd /boot/initrd...
Memory for crash kernel ...
Red Hat nash version ...
...
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!

Please help...
I see one issue. Your mkinitrd command:
mkinitrd --preload=raid1 --with=raid1 --builtin=raid1 --force-scsi-probe --force-raid-probe /boot/initrd-`uname -r`-raid.img `uname -r`

will create one of these .img files:

/boot/initrd-2.6.18-194.26.1.el5PAE-raid.img
or
/boot/initrd-2.6.18-194.26.1.el5-raid.img

(depending on which kernel you are running when the mkinitrd command is run). But I don't see either of these names in your grub.conf file. These are all the initrd lines from your grub.conf:

initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
initrd /boot/initrd-2.6.18-194.26.1.el5.img
initrd /boot/initrd-2.6.18-194.3.1.el5.img
initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img
initrd /boot/initrd-2.6.18-194.26.1.el5.img
initrd /boot/initrd-2.6.18-194.3.1.el5.img

It appears you need to modify the initrd line of one of the boot entries in grub.conf to include '-raid', like:

initrd /boot/initrd-2.6.18-194.26.1.el5PAE-raid.img
or
initrd /boot/initrd-2.6.18-194.26.1.el5-raid.img

and then select that entry at the GRUB boot menu. Otherwise the RAID-aware initrd built by your mkinitrd command is never used, and the system will still try to mount root (/) from the single IDE disk rather than the RAID array. Hope this helps.
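The edit suggested above is a one-character-class change, so it can also be scripted. The sketch below demonstrates the rename against a sample initrd line taken from the posted grub.conf (on the real file you would run the same sed over /boot/grub/grub.conf after making a backup):

```shell
# One of the initrd lines from the poster's grub.conf:
line='initrd /boot/initrd-2.6.18-194.26.1.el5PAE.img'

# Append '-raid' before the .img suffix so the entry loads the
# RAID-aware initrd that mkinitrd actually built.
fixed=$(printf '%s\n' "$line" | sed 's/\.img$/-raid.img/')
echo "$fixed"
```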