
jbilderb 03-22-2010 02:27 PM

Migrating RHEL4 OS to EMC SAN with PowerPath
We are currently working on a boot-to-san project, and we're trying to find out if we can take our current RHEL boxes and migrate them to boot-to-san, or if we have to do a fresh install of the OS and all the applications (mainly Oracle).

Working with a test box, I can load RHEL on a host, dd it to the san, and it will boot-from-san just fine. However, if I have PowerPath installed and working, and then dd it to the san, it will not work.
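For anyone curious what the dd copy itself looks like: the demo below runs the same technique on throwaway files in /tmp so it's safe to try. On the real box the arguments would be the raw devices (something like if=/dev/sda of=/dev/emcpowerf; those names are examples only, so verify yours with fdisk -l and powermt display dev=all before running anything destructive).

```shell
# Demonstration of the dd block-copy technique on throwaway files.
# On the real system the source would be the local disk and the
# target the PowerPath pseudo-device for the SAN LUN.
dd if=/dev/urandom of=/tmp/local_disk.img bs=1M count=4 2>/dev/null   # stand-in "local disk"
dd if=/tmp/local_disk.img of=/tmp/san_lun.img bs=1M conv=noerror,sync 2>/dev/null
# Verify the copy is bit-for-bit identical before trying to boot from it.
cmp /tmp/local_disk.img /tmp/san_lun.img && echo "copy verified"
```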

If there is anyone that can help me, thank you.

anomie 03-22-2010 08:00 PM

How does it "not work" exactly? How far along in the boot process do you get, and what error messages do you see?

jbilderb 03-24-2010 01:01 PM

And, of course, 1 hour after I posted this, we figured out what went wrong, and have our box booting to SAN.


Just as an FYI, you have to change the /boot line of fstab to point to the PowerPath pseudoname of the boot partition on the SAN.

So this line:
LABEL=/boot /boot ext3 defaults 1 2
will become this:
/dev/emcpowerXX /boot ext3 defaults 1 2
where the first X is the PowerPath pseudo-device letter for the LUN and the second X is the partition number (ours was emcpowerf1).
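If you want to script that fstab edit, a one-line sed will do it. The sketch below works on a sample line in /tmp so it's harmless to test; on the real box you'd point it at /etc/fstab, and emcpowerf1 is just the pseudoname from our box, so substitute your own.

```shell
# Build a sample fstab line to demo against (real target: /etc/fstab).
printf 'LABEL=/boot /boot ext3 defaults 1 2\n' > /tmp/fstab.demo
# Swap the label reference for the PowerPath pseudo-device.
# emcpowerf1 was the pseudoname on our box -- substitute yours.
sed -i 's|^LABEL=/boot|/dev/emcpowerf1|' /tmp/fstab.demo
cat /tmp/fstab.demo
```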

I'm writing up the entire process now, if anyone wants a copy of it, please let me know.

anomie 03-26-2010 08:01 PM

Makes sense. Thanks for following up on your thread.

(Note that you could also update your /boot label to point to the emcpower device. But your solution is fine.)
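For the label approach, e2label is the tool: you'd keep LABEL=/boot in fstab and relabel the SAN boot partition instead. The demo below runs against a small file-backed ext3 image so it can be tried safely; on a real box the argument would be the partition's pseudo-device (e.g. /dev/emcpowerf1, an example name).

```shell
# Demo on a file-backed ext3 image; on the real box the argument
# would be the SAN boot partition, e.g. /dev/emcpowerf1 (example name).
dd if=/dev/zero of=/tmp/bootfs.img bs=1M count=32 2>/dev/null
mke2fs -q -F -j -L /boot /tmp/bootfs.img   # ext3, labelled "/boot"
e2label /tmp/bootfs.img                    # prints the label back
```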

jbilderb 03-29-2010 09:24 AM

Sorry if I wasn't clear, but that is exactly what we did.

jbilderb 04-19-2010 10:05 AM

Further adventures with boot-from-SAN (bfs) migration:
Since we're moving ALL of our RedHat boxen to either bfs or P2V, I've been having a lot of fun with all sorts of problems. My definition of "a lot of fun", of course, includes the fact that I've only been doing Linux for about 2 months now.

Latest adventure: after using dd to copy the local hard drive to the LUN on the SAN, the system couldn't find the volume group, which caused a kernel panic. This has only happened on one box so far, but here is how I finally made it work:
I did a fresh install on a different box, upgraded that box to the same kernel as the box we were having trouble with, and then copied the initrd .img file to the bad box.

When the bad box booted up, it started looking for the volume group name from the fresh install. From linux rescue, I mounted all the partitions into the bad box, did a chroot to /mnt/sysimage, and from there was able to run mkinitrd, which pointed the initrd at the correct volume groups, and the box booted successfully.

So, to sum up the fix:
1) dd the local hard drive to the SAN
2) edit the /boot line of the fstab to look to the boot partition on the SAN (/dev/sdc3 in this case) instead of LABEL=/boot
3) if the system has a kernel panic, get into linux rescue, mount the system's partitions to the corresponding place in the /mnt/sysimage (for example: mount /dev/sdc3 to /mnt/sysimage/boot, or /dev/VolGroup_ID_31829/usr to /mnt/sysimage/usr, etc)
4) chroot to /mnt/sysimage
5) copy the initrd .img from the fresh install to /boot on the SAN drive
6) mkinitrd
7) reboot

You may not need to copy the initrd .img if you can do the mkinitrd without that step, but I wasn't even able to do that.
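The rescue-mode steps above, written out as a command transcript. This is a sketch, not something to paste blindly: it only makes sense inside the RHEL linux rescue environment, and the device, volume group, and kernel version (all taken from this one box) will differ on yours.

```shell
# From the RHEL `linux rescue` environment, mount the installed system
# under /mnt/sysimage (device and VG names are from this box -- use yours):
mount /dev/sdc3 /mnt/sysimage/boot                   # SAN boot partition
mount /dev/VolGroup_ID_31829/usr /mnt/sysimage/usr   # LVM volumes likewise
chroot /mnt/sysimage
# Inside the chroot, rebuild the initrd for the installed kernel so it
# activates the correct volume group at boot:
mkinitrd -f /boot/initrd-2.6.9-67.0.15.ELsmp.img 2.6.9-67.0.15.ELsmp
exit
reboot
```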

It was just a bit difficult finding the rpm for the kernel, because it was an older kernel (2.6.9-67.0.15.ELsmp).

My only remaining concern is about how this will affect the Oracle databases on this server.
I hope this is somewhat clearer than mud for folks reading it.
