[SOLVED] Migrating RHEL4 OS to EMC SAN with PowerPath
We are currently working on a boot-from-SAN project, and we're trying to find out whether we can migrate our current RHEL boxes to boot-from-SAN, or whether we have to do a fresh install of the OS and all the applications (Oracle, mainly).
Working with a test box, I can load RHEL on a host, dd it to the SAN, and it will boot from SAN just fine. However, if I have PowerPath installed and working and then dd it to the SAN, it will not boot.
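For reference, the raw copy itself is just a whole-disk dd. The device names below are hypothetical (local disk /dev/sda, SAN LUN /dev/sdc); the file-based demo underneath shows the same invocation safely:

```shell
# Hypothetical device names -- DESTRUCTIVE, double-check if= and of= first:
#   dd if=/dev/sda of=/dev/sdc bs=1M conv=noerror,sync
#
# Safe demonstration of the same invocation on ordinary files:
dd if=/dev/urandom of=local-disk.img bs=1M count=4 2>/dev/null
dd if=local-disk.img of=san-lun.img bs=1M conv=noerror,sync 2>/dev/null
cmp local-disk.img san-lun.img && echo "copy is identical"
```

(conv=noerror,sync keeps the copy going past read errors and pads short blocks, which is what you want when imaging an aging local disk.)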
And, of course, 1 hour after I posted this, we figured out what went wrong, and have our box booting to SAN.
Just as an FYI: you have to change the /boot line of /etc/fstab to point to the PowerPath pseudoname of the boot partition on the SAN.
So this line:
LABEL=/boot /boot ext3 defaults 1 2
becomes this:
/dev/emcpowerXX /boot ext3 defaults 1 2
where the letter(s) after emcpower are the PowerPath pseudoname for the LUN and the trailing digit is the partition number (ours was emcpowerf1, i.e. partition 1 of pseudo-device emcpowerf).
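If you're not sure which pseudo-device corresponds to your boot LUN, PowerPath's powermt tool will show the mapping. A sketch (the example output and LUN IDs here are made up for illustration):

```shell
# List every PowerPath pseudo-device and the native paths behind it;
# match your boot LUN's logical device ID to its emcpower name.
powermt display dev=all

# Hypothetical output excerpt:
#   Pseudo name=emcpowerf
#   Symmetrix ID=000187900087; Logical device ID=01A3
#   ...

# Then point the /boot entry in /etc/fstab at partition 1 of that device:
#   /dev/emcpowerf1  /boot  ext3  defaults  1 2
```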
I'm writing up the entire process now, if anyone wants a copy of it, please let me know.
Further adventures with bfs migration:
Since we're moving ALL of our Red Hat boxen to either bfs or P2V, I've been having a lot of fun with all sorts of problems. My definition of "a lot of fun", of course, includes the fact that I've only been doing Linux for about two months now.
Latest adventure: after using dd to copy the local hard drive to the LUN on the SAN, the system couldn't find the volume group, which caused a kernel panic. This has only happened on one box so far, but here is how I finally made it work:
I did a fresh install on a different box, upgraded that box to the same kernel as the box we were having trouble with, and then copied the initrd .img file to the bad box.
When the bad box booted up, it started looking for the volume group name from the fresh install. From linux rescue, I mounted all the partitions into the bad box, chrooted into /mnt/sysimage, and from there was able to run mkinitrd, which pointed the initrd at the correct volume groups, and the box booted successfully.
So, to sum up the fix:
1) dd the local hard drive to the SAN
2) edit the /boot line of the fstab to look to the boot partition on the SAN (/dev/sdc3 in this case) instead of LABEL=/boot
3) if the system has a kernel panic, get into linux rescue, mount the system's partitions to the corresponding place in the /mnt/sysimage (for example: mount /dev/sdc3 to /mnt/sysimage/boot, or /dev/VolGroup_ID_31829/usr to /mnt/sysimage/usr, etc)
4) chroot to /mnt/sysimage
5) copy the initrd .img from the fresh install into /boot on the SAN drive
6) run mkinitrd so the new initrd points to the correct volume groups
You may not need to copy the fresh-install initrd .img if mkinitrd works without that step, but in my case it didn't.
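The rescue-mode steps above can be sketched as a command sequence. This only makes sense inside the RHEL rescue environment, and every device, volume-group, and kernel-version name below is from my box, so substitute your own:

```shell
# Boot the install media and enter rescue mode with: linux rescue
# (anaconda mounts the installed system under /mnt/sysimage if it finds it)

# Mount any partitions the rescue environment didn't pick up
mount /dev/sdc3 /mnt/sysimage/boot
mount /dev/VolGroup_ID_31829/usr /mnt/sysimage/usr

# Work inside the installed system
chroot /mnt/sysimage

# Rebuild the initrd for the installed kernel so it scans the correct
# volume groups at boot (get the version from rpm -q kernel-smp)
mkinitrd -f /boot/initrd-2.6.9-67.0.15.ELsmp.img 2.6.9-67.0.15.ELsmp

exit    # leave the chroot, then reboot
```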
The only tricky part was finding the RPM for the kernel, since it was an older one (2.6.9-67.0.15.ELsmp).
My only remaining concern is about how this will affect the Oracle databases on this server.
I hope this is somewhat clearer than mud for folks reading it.