Old 07-05-2012, 12:41 AM   #1
dezavu
LQ Newbie
 
Registered: Jun 2011
Posts: 28

Rep: Reputation: Disabled
How to migrate from Powerpath to Multipath (Boot LUN is on SAN)?


Hi,

In our environment we are implementing UCS blades for Red Hat Linux (RHEL5) operating systems. The boot LUN is on SAN (Hitachi). Our gold image uses PowerPath, but Hitachi wants us to go with multipath only. As I understand it, I would need to export the rootvg (which is not possible under normal circumstances) before uninstalling the PowerPath software and rebooting the server with multipath, so that the external devices come up as multipath devices.
I am worried about how to handle the rootvg (boot LUN), and how do I move forward with uninstalling PowerPath and going to multipath?
It would be a great help if anyone could come up with a defined process or idea!

Thanks & Regards,
Vijay

Last edited by dezavu; 07-05-2012 at 12:44 AM.
 
Old 07-06-2012, 09:53 AM   #2
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
I'm not sure there is a hard requirement to use multipath for Hitachi storage. PowerPath, although an EMC product, should work just as well for Hitachi-based storage as for EMC, since its point is to give you multiple paths to the disks just like Linux native multipath does.

That being said, however, you might want to go to multipath to get rid of PowerPath, because the latter is licensed software, and EMC, on finding out you're using Hitachi, might decide to audit you and try to claim you owe them tons of money for your PowerPath installs. That happened to us.

We did migrate from PowerPath to Hitachi's CCI (horcm/inqraid) software and Linux native multipath, though. In our setup we did not use the SAN for boot - we use internal storage on each host for the OS/boot and use the SAN storage for applications/DBs.

As a basis to get you started, I'm including what we did a couple of years back in our migration. Hopefully it will give you ideas. Note that I haven't gone through this today, so it is possible there are other steps or modified steps not listed, and of course you'll have to figure out how to deal with SAN boot. This is just a cut and paste of the document we developed for use back then:

Linux VSP Setups

Multipathing.

1. Ensure the CCI software is installed under /HORCM by running pairdisplay -v to see if the HORCM Pair Display command is responding. If not, install the CCI software from the /root/patch/hitachi/hitachi_cli.tar file on atubks01.
a. Copy the file to /root/Packages on the target system.
b. Create subdirectory /root/Packages/hitachi_cli.
c. cd to the above hitachi_cli directory.
d. Run "tar xvf ../hitachi_cli.tar" to extract into the directory.
e. Run "./RMinstsh.txt" to start the installer.
f. Will display:
******* Confirmation for New Introduction of the HORCM.*******
Please specify a directory(recommends except '/') for the installation.
For continue -> please enter a 'directory'.
For cancel -> please enter 'exit'
Type "/" and hit return.
g. Should display:
cpio -idmu < /root/Packages/hitachi_cli/LINUX/RMHORC
20915 blocks
The following model was installed to '/HORCM' under as '/HORCM'.
When you have to uninstall of the HORCM,please executes the following RMuninst command.
When you also have to be installing by floppy or tape,please executes the following RMinstsh command.
[ Model : RAID-Manager/Linux Ver&Rev: 01-24-03/13 ]
--------------------------------------------------------------------------------
etc horcminstall.sh horcmuninstall.sh log log0 log1 usr
--------------------------------------------------------------------------------
[NOTE]:
When you uses the following software for backup etc,please install RMLIB
by using RLinstsh of '../RL' directory under on the CD.
- Enterprise SnapShot software which is provided by BMC.
- Remote performance monitor which is provided by HP.
- Omniback software which is provided by HP.
- RAID management software which is provided by HP or other ISV.
At this point it should be done. Run "pairdisplay -v" to verify it works.
It should display:
pairdisplay: requires '-v jnl or jnlt ' as argument
pairdisplay: [EX_REQARG] Required Arg list
Refer to the command log(/HORCM/log/horcc_atutapd2.log) for details.
You can ignore the above messages - we're just verifying the binary at this point.
This can be done at any point prior to the migration window.

1. Run powermt display dev=all > /root/powermt.YYYYMMDD. This is to have information for the devices (including RAW devices, if any) as well as the previous mappings for reference.

2. From the output file /root/powermt.YYYYMMDD you just created, locate a single multi-pathed device from the DMX and note the I/O paths; these are needed at a later step. In the example below these are /dev/sdct and /dev/sdko:

Pseudo name=emcpowercq
Symmetrix ID=000190100402
Logical device ID=0E27
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 qla2xxx sdct FA 9dA active alive 0 0
1 qla2xxx sdko FA 8dA active alive 0 0
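
If the file is large, a quick way to pull out just the I/O path device names is a one-liner like the one below (a sketch only; it assumes the standard powermt layout shown above, where the sd device is the third column of each qla2xxx path line):

grep qla2xxx /root/powermt.YYYYMMDD | awk '{print $3}' | sort -u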


3. Stop databases and applications using the filesystems to be migrated.

4. Unmount the filesystems that are being migrated from the server.

5. Make sure PowerPath has been uninstalled before trying to do the multipath setup. The server will crash and go into continuous reboots if the PowerPath and dm-multipath modules both try to manage the disks. (Uninstalling also removes the PowerPath entry from /etc/modprobe.conf.)
a. Run "rpm -qa | grep -i emcpower" to verify PowerPath is installed. It should show something like: EMCpower.LINUX.x86_64 0:5.3.0.00.00-185.
b. Run "yum erase EMCpower.LINUX-5.3.0.00.00-185" (or whatever package name the previous command displayed). This should display something like:
Loaded plugins: downloadonly, rhnplugin, security
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package EMCpower.LINUX.x86_64 0:5.3.0.00.00-185 set to be erased
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================
Package Arch Version Repository
Size
==========================================================
Removing:
EMCpower.LINUX x86_64 5.3.0.00.00-185 installed 19M

Transaction Summary
==========================================================
Remove 1 Package(s)
Reinstall 0 Package(s)
Downgrade 0 Package(s)

Is this ok [y/N]:
c. Type "y" and hit return. This should display something like:
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Erasing : EMCpower.LINUX
1/1

Removed:
EMCpower.LINUX.x86_64 0:5.3.0.00.00-185
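
Before moving on it doesn't hurt to confirm PowerPath really is gone; a quick sanity check might look like this (a sketch - emcp is the prefix the PowerPath kernel modules use on our hosts):

rpm -qa | grep -i emcpower        # should now return nothing
grep -i emcp /etc/modprobe.conf   # no PowerPath module lines should remain
lsmod | grep -i emcp              # no emcp* modules should still be loaded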

6. Double check that all of the disks needed for the migration have been readied to be assigned to host group for this host.

7. Zone in the host to the VSP.

8. Next cd /sys/class/scsi_host.

9. Run the following loop and look for FC adapters, which usually use the qla2xxx driver (but not always):
for adapter in $(ls -d host*)
do driver=$(cat ${adapter}/proc_name)
echo $adapter is $driver
done
Output should look something like:
host0 is megaraid_sas
host1 is ata_piix
host2 is ata_piix
host3 is qla2xxx
host4 is qla2xxx

10. Run echo "- - -" > hostX/scan to force the card to rescan the devices presented, where hostX is the instance number of each host listed above that is an FC card. (Note: there are spaces between the dashes after the echo above.)
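
If you prefer, steps 9 and 10 can be combined into a single loop that only rescans the FC hosts (a sketch, assuming qla2xxx HBAs as in the example output above):

cd /sys/class/scsi_host
for adapter in $(ls -d host*)
do
   if [ "$(cat ${adapter}/proc_name)" = "qla2xxx" ]; then
      echo "- - -" > ${adapter}/scan
      echo "rescanned ${adapter}"
   fi
done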

11. Make sure the VSP is seeing the host now.

12. Have the LUNs being migrated mapped into the host group. (Done by HDS consultant on the VSP array originally – See separate document on how to do this.)

13. Ensure the device-mapper-multipath software is installed using yum list device-mapper-multipath. If it is not installed, run:
a. yum -y install device-mapper-multipath to install it.
b. modprobe dm-multipath
c. service multipathd start
d. multipath -l -v2
e. chkconfig multipathd on
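
A quick sanity check after the above (a sketch) is to confirm the module is loaded, the daemon is running, and it is set to start at boot:

lsmod | grep dm_multipath
service multipathd status
chkconfig --list multipathd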

14. Do cp -p /etc/multipath.conf /etc/multipath.conf.YYYYMMDD

15. Run the script below to find the names of the disk vendors to be blacklisted in the multipath file, to keep them from being seen or scanned by the multipath daemon. Look for the PERC controller, MegaRAID and CD/DVD-ROM devices.
for each in `cd /sys/block ; ls -d sd*`; do A=`scsi_id -g -u -s /block/$each`; B=` scsi_id -g -a -p0x80 -x -s /block/$each | awk '/VENDOR/ {printf "%s ",$0}; /MODEL/ {print $0}'`; echo $each $A $B; done

16. Under the blacklist section in multipath.conf do the following.
a. Comment out the line devnode "*", as this says to blacklist everything from the multipath daemon.
b. Between the brackets for the blacklist section, add device entries for the PERC controller(s) and CD/DVD-ROM. Wildcards are permitted. (Sometimes the CD/DVD-ROM is already excluded due to previous udev rules for the Symmetrix.)
See examples below:
blacklist {
#       devnode "*"
        device {
                vendor "DELL"
                product "PERC*"
        }
        device {
                vendor "TSSTcorp"
                product "*"
        }
} #blacklist section

17. Find the uncommented defaults section and make sure that it has user_friendly_names yes defined. This has the software create a /dev/mapper/mpathX device in addition to the World Wide ID (WWID) device name, which gives a much better name for configuration than the long WWID. It should look like the following:
## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}

18. Go to the bottom of the file and add a device section for the Hitachi matching the one below. These are the recommended settings for a USP and are also suggested for a VSP.
devices {
        device {
                vendor "HITACHI"
                product "OPEN-V.*"
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                features "0"
                hardware_handler "0"
                path_grouping_policy multibus
                failback immediate
                rr_weight uniform
                rr_min_io 1000
                path_checker tur
        }
}

19. Below the Hitachi device definition in the devices section, add one for the CLARiiON matching the one below. This one calls a specific module to handle the active/passive nature of the CLARiiON disks.

        device {
                vendor "DGC"
                product ".*"
                product_blacklist "LUNZ"
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout "/sbin/mpath_prio_emc /dev/%n"
                features "1 queue_if_no_path"
                hardware_handler "1 emc"
                path_grouping_policy group_by_prio
                failback immediate
                rr_weight uniform
                no_path_retry 60
                rr_min_io 1000
                path_checker emc_clariion
        }

20. If this host is also on the DMX you can choose to add the following multipath definition for the DMX as well.

        device {
                vendor "EMC"
                product "SYMMETRIX"
                getuid_callout "/sbin/scsi_id -g -u -ppre-spc3-83 -s /block/%n"
                features "0"
                hardware_handler "0"
                path_grouping_policy multibus
                no_path_retry 6
                rr_weight uniform
                rr_min_io 1000
                path_checker tur
        }

21. Manually load the dm-multipath kernel module with modprobe dm-multipath.

22. Start the multipathd daemon with service multipathd start. If PowerPath is still on the host, this will cause the server to panic, reboot, and then go into endless reboot loops until it is started in single user mode to uncomment the devnode "*" line in the blacklist.

23. Set up the multipath paths with multipath -l -v2. This will print out the paths that are multipathed now.

24. Make sure that sg3_utils is installed on the host by running sg_reset to see if you get a usage message. If not, run yum install sg3_utils.

25. Now force the server to scan the FC cards by making them do a reset.
Using the I/O paths from above, run sg_reset -h <i/o path> and wait for the command to return. The I/O paths will need to be /dev/sd<i/o path from PowerPath>. Each I/O path belongs to a specific FC card - you only need to reset one I/O path per port/card (that is, if there is only 1 port on each of 2 cards, you only need to reset one I/O path on each card, not all the I/O paths on each card).
sg_reset -h /dev/sdf
sg_reset -h /dev/sdbt

26. Next run /root/qlogic/ql-dynamic-tgt-lun-disc-2.5/ql-dynamic-tgt-lun-disc.sh to scan for new LUNs. These should show up and be created now. (You can ignore the warning about no active I/O as long as you have more than one HBA under some form of multipathing, as it will scan them sequentially.)
Please make sure there is no active I/O before running this script
Do you want to continue: (yes/no)? yes
Scanning HOST: host3
....
Scanning HOST: host4
....
Found
3:0:6:0
3:0:6:1
3:0:6:10
3:0:6:11
3:0:6:2
3:0:6:3
3:0:6:4
3:0:6:5
3:0:6:6
3:0:6:7
3:0:6:8
3:0:6:9
4:0:6:0
4:0:6:1
4:0:6:10
4:0:6:11
4:0:6:2
4:0:6:3
4:0:6:4
4:0:6:5
4:0:6:6
4:0:6:7
4:0:6:8
4:0:6:9
27. Scan for the new Hitachi disks using the script:
for each in `cd /sys/block ; ls -d sd*`; do A=`scsi_id -g -u -s /block/$each`; B=` scsi_id -g -a -p0x80 -x -s /block/$each | awk '/VENDOR/ {printf "%s ",$0}; /MODEL/ {print $0}'`; echo $each $A $B; done
28. Run multipath -l -v2 to rescan all devices and build any needed multipath nodes. These should be created when the ql-dynamic-tgt-lun-disc.sh script runs and triggers the kernel and udev, but this just ensures everything is done.
29. Run multipath -l | grep OPEN-V | wc -l to get a count of the number of disks coming from the VSP. Verify this count matches the numbers expected.
30. Run pvscan 2>&1 | grep not | wc -l to get a count of disks that pvscan sees but that are not in a volume group because they match those already imported from the DMX/CLARiiON.
31. Unmount all file systems being virtualized. The volume groups do not have to be exported or deactivated.
32. If this is a RAC host, move to the steps in the "For finding disks on a DB host with RAC" section below.
33. Remove the EMC Arrays being migrated from the zone files for this host.
34. Remove the host from all of the storage groups on the Clariion. When you have removed them from the Clariion you will get multipath messages in the system logs that the disks are already removed.
35. Reboot the host.



For finding disks on a DB host with RAC
1. Run the script
for each in `cd /sys/block ; ls -d sd*`; do A=`scsi_id -g -u -s /block/$each`; B=` scsi_id -g -a -p0x80 -x -s /block/$each | awk '/VENDOR/ {printf "%s ",$0}; /MODEL/ {print $0}'`; echo $each $A $B; done
2. Save off the output to a file, or open another terminal window for the next steps. The second column is the WWID for each drive. This will be needed to custom-name the devices using alias entries so the udev rules for the raw devices can be updated to match.
If PowerPath is still installed and running on the host, run powermt display dev=all. Make note of the Pseudo name= and I/O paths from the output; these are the names used for the raw ASM devices.

Pseudo name=emcpowera
CLARiiON ID=APM00061205377 [Quest DEV RAC]
Logical device ID=6006016032811800727148591F43DF11 [LUN 0]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 qla2xxx sdb SP A0 active alive 0 0
1 qla2xxx sdi SP B0 active alive 0 0

Pseudo name=emcpowerb
CLARiiON ID=APM00061205377 [Quest DEV RAC]
Logical device ID=6006016032811800C4B0A1A01F43DF11 [LUN 1]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 qla2xxx sdc SP A0 active alive 0 0
1 qla2xxx sdj SP B0 active alive 0 0

3. Get the copy of the workbook showing EMC to VSP device mappings from the consultant.
4. Find the EMCPOWER devices in the list that match the ones being migrated and match them to the VSP XX:YY device IDs. The LUNs in the spreadsheet are in hex, so they must be converted to decimal to match the CLARiiON IDs in the GUI for the host (see the conversion example after the table below).
5. With this data you should be able to create the table below, except for the matching VSP OS devs. Those will come from scans done after the disks have been presented to the host.
emcpower dev   iopaths   clar lun   vsp CU   VSP os devs
emcpowera      sdb,sdi   0          60:00    sdp
emcpowerb      sdc,sdj   1          60:7B    sdq
emcpowerc      sdd,sdk   2          60:7C    sdr
emcpowerd      sde,sdl   3          60:7D    sds
emcpowere      sdf,sdm   4          60:7E    sdt
emcpowerf      sdg,sdn   5          60:7F    sdu
emcpowerg      sdh,sdo   6          60:80    sdv
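
For the hex-to-decimal conversion mentioned in step 4, the shell can do the math for you; for example (a sketch - 0x7B is just taken from the table above):

printf "%d\n" 0x7B    # prints 123
echo $((16#7B))       # same result using bash arithmetic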

6. Provided the server sees the new disks, run ls /dev/sd* | /HORCM/usr/bin/inqraid -fx -CLI. This will show the /dev/sd* name as well as the VSP LDEV number, which will match the ones above in the table.
7. Fill in the /dev/sdX entries on the table to match those for the emcpower devices.
8. Run for each in `cd /sys/block ; ls -d sd*`; do A=`scsi_id -g -u -s /block/$each`; B=` scsi_id -g -a -p0x80 -x -s /block/$each | awk '/VENDOR/ {printf "%s ",$0}; /MODEL/ {print $0}'`; echo $each $A $B; done  This will give the data on the WWIDs for the VSP disks now as well. Note them as they will be needed in setting up multipath on the first RAC node after removing Power Path.
9. Create an updated table like the one above but with the VSP unique WWID as well

emcpower dev   iopaths   clar lun   vsp CU   VSP os devs   WWID
emcpowera      sdb,sdi   0          60:00    sdp           360060e8006d142000000d14200006000
emcpowerb      sdc,sdj   1          60:7B    sdq           360060e8006d142000000d1420000607b
emcpowerc      sdd,sdk   2          60:7C    sdr           360060e8006d142000000d1420000607c
emcpowerd      sde,sdl   3          60:7D    sds           360060e8006d142000000d1420000607d
emcpowere      sdf,sdm   4          60:7E    sdt           360060e8006d142000000d1420000607e
emcpowerf      sdg,sdn   5          60:7F    sdu           360060e8006d142000000d1420000607f
emcpowerg      sdh,sdo   6          60:80    sdv           360060e8006d142000000d14200006080
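
If you want friendlier names than mpathX for these devices, the alias entries mentioned in step 2 can be added to a multipaths section in /etc/multipath.conf, keyed on the WWIDs from the table above. A sketch (the alias names here are made up for illustration):

multipaths {
        multipath {
                wwid  360060e8006d142000000d14200006000
                alias ocrvote
        }
        multipath {
                wwid  360060e8006d142000000d1420000607b
                alias asmdata1
        }
}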

10. Using the bindings file do an ls /dev/mapper/mpath* to see the partitions on the various multipath devices.
11. To keep old copies of udev rules that are modified, run mkdir /etc/udev/backup
12. Run cp /etc/udev/rules.d/60-raw.rules /etc/udev/backup/60-raw-rules.YYYYMMDD.
13. The 60-raw.rules file does not work with device-mapper multipathed devices, but it does set the permissions once the raw devices are set up. It also gives a good reference to match up the new disks with the old EMC disks to make sure the raw devices are created properly.
14. Run yum -y install initscripts to get the raw devices startup scripts.
15. We still use the old information in the /etc/udev/rules.d/60-raw.rules but then need to put those entries in the /etc/sysconfig/rawdevices file.
Example
# raw1 = OCR for CRS - emcpowera
ACTION=="add", KERNEL=="emcpowera1", RUN+="/bin/raw /dev/raw/raw1 %N"
# raw2 = Voting Disk for CRS
ACTION=="add", KERNEL=="emcpowera2", RUN+="/bin/raw /dev/raw/raw2 %N"

The emcpowera from the table above matches HDS WWID 360060e8006d142000000d14200006000. From the bindings file I see that WWID matches mpath7:
mpath7 360060e8006d142000000d14200006000

The way EMC PowerPath presents the disks, emcpowera1 is the first partition and emcpowera2 the second. The multipath udev rules have kpartx create the partitions as p1 to pX on a disk. So mpath7, which in this example matches emcpowera, has partitions mpath7p1 and mpath7p2.

Doing an ls on /dev/mapper/mpath7* gives
/dev/mapper/mpath7 /dev/mapper/mpath7p1 /dev/mapper/mpath7p2

So the udev rule would be changed from emcpowera1 and emcpowera2 to /dev/mapper/mpath7p1 and /dev/mapper/mpath7p2.

Here are raw1 and raw2 defined in /etc/sysconfig/rawdevices now for the raw device creation:
/dev/raw/raw1 /dev/mapper/mpath7p1
/dev/raw/raw2 /dev/mapper/mpath7p2

The permission rules should remain in the 60-raw.rules as they will be applied properly.
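
For reference, the permission entries that stay in 60-raw.rules are plain udev assignments; they might look something like the lines below (a sketch only - the owner/group/mode values mirror the chown/chmod settings in the startup script further down, not necessarily what is already in your file):

ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw2", OWNER="oracle", GROUP="oinstall", MODE="0640"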

16. If all raw devs were created properly and have the correct permissions, login to Navisphere, go to the proper disk array and then remove the host from the associated storage group.
17. Reboot the host.


# Required for Oracle RAC
# Causes system to reset if kernel hangs for 210 seconds
insmod /lib/modules/2.6.18-8.1.14.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180

# Oracle RAW Device Setup
# Set up here because udev does not like RAW with multipath devices
/bin/raw /dev/raw/raw1 /dev/mapper/mpath7p1
/bin/raw /dev/raw/raw2 /dev/mapper/mpath7p2
/bin/raw /dev/raw/raw3 /dev/mapper/mpath8p1
/bin/raw /dev/raw/raw4 /dev/mapper/mpath9p1
/bin/raw /dev/raw/raw5 /dev/mapper/mpath10p1
/bin/raw /dev/raw/raw6 /dev/mapper/mpath11p1
/bin/raw /dev/raw/raw7 /dev/mapper/mpath12p1
/bin/raw /dev/raw/raw8 /dev/mapper/mpath13p1

# Set OCR permissions to root:oinstall and 640
chown root:oinstall /dev/raw/raw1*
chmod 640 /dev/raw/raw1*
# Set the voting disk to oracle:oinstall mode 640
chown oracle:oinstall /dev/raw/raw2*
chmod 640 /dev/raw/raw2*
# ASM DATA and FRA
chown mrdtora:dba /dev/raw/raw[3-8]*
chmod 660 /dev/raw/raw[3-8]*

Last edited by MensaWater; 07-06-2012 at 10:44 AM.
 
Old 07-06-2012, 05:53 PM   #3
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora
Posts: 3,935
Blog Entries: 5

Rep: Reputation: Disabled
Holy smoke, MensaWater. Now that is a thorough follow-up post.

Quote:
Originally Posted by MensaWater
That being said however, you might want to go to multipath to get rid of Powerpath because the latter is a licensed software and EMC on finding out you're using a Hitachi might decide to audit you and try to claim you owe them tons of money for your Powerpath installs.
My thoughts exactly. If you're not using EMC's storage devices any longer, then move to DM-Multipath. The process of doing so is not difficult if you understand how both work, although having your /boot filesystem on a LUN complicates things.

Time to get reading:
http://docs.redhat.com/docs/en-US/Re...ath/index.html

(And - if I were you - I would not touch your production system until you're satisfied the migration works in a similar test environment.)
 
  

