Migrating SAN storage from RHEL 5.9 to RHEL 6.8

Posted 07-25-2016 at 01:12 PM by MensaWater

I've had to migrate multiple Oracle instances from RHEL 5.9 to RHEL 6.8 because we're replacing an old server. This is Oracle 11gR2 using ext4 filesystems (not RAC/GRID, not using ASM).

Just wanted to share the steps. Presumably this would work for most Linux flavors (at least Red Hat derived ones such as CentOS and Fedora), so I only specify the source and target distro/versions for completeness.

ENVIRONMENT:
We use a Hitachi VSP disk array with a full fiber SAN. We have installed the Hitachi Horcm software on each of our SAN-attached Linux servers.

We also use Linux multipath for dual attachments from the SAN.

Our servers use Qlogic fiber HBAs.

In our multipath configuration we use specifically defined names based on the UUID of the Hitachi devices for most filesystems, but this also works for standard "friendly" mpath names. It wasn't tested with default (non-"friendly") names.

We set up everything in Logical Volume Manager (LVM). For each Oracle SID (instance) we have three VGs:
a) VG<SID>_SI = The VG with the filesystem(s) that has the database dbfs and indx files.
b) VG<SID>_REDO = The VG with filesystems that have the Oracle archive log files in them.
c) VG<SID>_ORA = The VG with the filesystem that has the Oracle binaries/application files.
(For example, a hypothetical SID named ORCL would have VGORCL_SI, VGORCL_REDO and VGORCL_ORA.) The below assumes separate VGs of this nature. Some of the steps would be valid even without VGs, or if you had multiple SIDs sharing VGs, but of course you'd have to plan the migration based on such a setup.

In our environment we were doing Test/Dev instances.

The migration preserves data but as always if you are doing Production or any other critical data migration you'd want to have a recent backup "just in case" something doesn't go as expected.

I. ON BOTH SOURCE RHEL5.9 & TARGET RHEL 6.8 SERVERS:
1) Save current state information so you'll know what it looked like before you started by running the following (a consolidated script sketch appears at the end of this section):
a) vgs >vgs.YYYYMMDD
b) lvs >lvs.YYYYMMDD
c) pvs >pvs.YYYYMMDD
d) df -hP >df_hP.YYYYMMDD
e) lsscsi >lsscsi.YYYYMMDD
f) ls /dev/sd* |inqraid -CLI -fxng >inqraid_CLI_fxng.YYYYMMDD
(Note: inqraid is a Hitachi Horcm command - skip this and/or use an alternative if desired.)
g) multipath -l -v2 >multipath_l_v2.YYYYMMDD
2) Save date stamped copies of /etc/fstab
3) Save date stamped copies of /etc/multipath.conf
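For convenience, the state capture in steps 1 through 3 can be wrapped in one small script so both servers record the same information the same way. A minimal sketch, assuming bash and that the optional Hitachi inqraid utility is installed - nothing here beyond the commands already listed above:
Code:
STAMP=$(date +%Y%m%d)      # date stamp, e.g. 20160725
vgs >vgs.$STAMP
lvs >lvs.$STAMP
pvs >pvs.$STAMP
df -hP >df_hP.$STAMP
lsscsi >lsscsi.$STAMP
ls /dev/sd* |inqraid -CLI -fxng >inqraid_CLI_fxng.$STAMP   # skip if no Horcm
multipath -l -v2 >multipath_l_v2.$STAMP
cp -p /etc/fstab fstab.$STAMP
cp -p /etc/multipath.conf multipath.conf.$STAMP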
---
II. ON SOURCE RHEL5.9 SERVER
1) Run "lsof" against each of the filesystems found in the saved df output for <SID> to see if any processes are using them.
None should be if you've already stopped the instance. You'd want to address any that are found (e.g. contact your DBA if <SID> is still up and/or any users that might be busying out the filesystems).
2) On source comment out the <SID> filesystems in /etc/fstab.
3) Run "umount" on each of the <SID> filesystems.
4) Run "vgchange -an" to deactivate each of the <SID> LVM Volume Groups (VGs).
5) Run "vgexport" on each of the <SID> VGs to export them.
6) Run loop to save a list of all the sd* devices associated with the multipath devices for <SID>:
Code:
for hld in $(grep VG<SID> pvs.YYYYMMDD |awk '{print $1}'|awk -F/ '{print $NF}')
do multipath -l $hld |egrep "[45]:0:0:" |awk '{print $3}'
done ><sid>_sd_devs.YYYYMMDD
NOTE: The above egrep is based on the Qlogic paths starting with 4:0:0 for first port and 5:0:0 for second port - this would be different on other systems - you can determine which hosts you have by looking at the lsscsi output and/or the /sys/class/scsi_host directories.
7) Run loop to remove each of the <SID> related multipath devices (e.g. hldev* or mpath*):
Code:
for hld in $(grep VG<SID> pvs.YYYYMMDD |awk '{print $1}'|awk -F/ '{print $NF}')
do multipath -f $hld
done
8) Run loop to flush buffers for each of the <SID> related sd* devices:
Code:
for dev in $(cat <sid>_sd_devs.YYYYMMDD)
do echo Flushing $dev
blockdev --flushbufs /dev/$dev
done
9) Run loop to delete each of the <SID> related sd* devices:
Code:
for dev in $(cat <sid>_sd_devs.YYYYMMDD)
do echo Removing $dev
echo 1 >/sys/block/${dev}/device/delete
done
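For reference, steps 1 through 5 above can also be consolidated into a short script. This is only a sketch: the $SID variable is hypothetical, the grep assumes your mount points contain the SID name, and the VG names assume the VG<SID>_SI/_REDO/_ORA scheme described earlier - adjust to your own naming.
Code:
SID=MYSID      # hypothetical SID; adjust to your instance name
# 1) check for processes still using the <SID> filesystems
for fs in $(grep -i "$SID" df_hP.YYYYMMDD |awk '{print $NF}')
do lsof $fs
done
# 3) unmount each <SID> filesystem (after commenting them out of /etc/fstab)
for fs in $(grep -i "$SID" df_hP.YYYYMMDD |awk '{print $NF}')
do umount $fs
done
# 4) and 5) deactivate then export each <SID> VG
for vg in VG${SID}_SI VG${SID}_REDO VG${SID}_ORA
do vgchange -an $vg
   vgexport $vg
done
# the host numbers used in the egrep of step 6 can be confirmed with
# "ls /sys/class/scsi_host" and/or the lsscsi output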
---
III. IN HITACHI STORAGE NAVIGATOR IN BROWSER
1) Select "Logical Devices" in left pane
2) In the right pane turn on the filter for the LDEVs shown and filter by:
LDEV Name contains CONV
This should give a list of 49 LDEVs.
NOTE: 49 is what we have for each of our instances including all 3 VGs. The count is likely different for your environment.
3) Select/highlight all 49 LDEVs.
4) At bottom of page click "Add LUN Paths" button.
Add LUN Paths Window will appear - do the following:
a) In Select LDEVs step verify the 49 LDEVs are already in the right pane. If so click "Next" button as there is no need to select additional LDEVs.
b) In Select Host Groups step select/highlight the ones for the target host.
NOTE: In our environment we have two fiber connections to the SAN for each server so we have 2 host groups (one for each connection). This might be more or less depending on your configuration.
c) Click on "Add" button to move the selected host groups from left pane to right pane.
d) Click on "Next" button.
e) In View/Change LUN Paths step verify it has 49 rows for the LDEVs and columns for each of the selected (target server) host groups. If so click "Finish".
f) In Confirm step it should show same detail. Click "Apply" to start the task.
5) Once the above task has completed, return to Logical Devices and set the same filter as before. Verify all 49 LDEVs now show paths for both the source server and the target server (4 paths each in our environment).
6) Reselect the 49 LDEVs
7) Click the "More Actions" button below right pane and select "Delete LUN Paths" from the menu that appears.
8) Delete LUN Paths window will appear showing 196 rows (4 paths x 49 LDEVs). Do the following:
NOTE: The below is counter-intuitive. The window lists every existing path as a candidate for deletion, and you "remove" from that delete list the paths you want to keep (a double negative - here we remove the TARGET server's paths from the delete operation so that only the SOURCE server's paths get deleted).
a) Turn on filter and filter by:
Host Group name contains <TARGET server host groups>
b) That will change the display to 98 rows (2 paths x 49 LDEVs).
c) Select all 98 rows.
d) Click the "Remove from Delete process" button.
e) On Pop Up Warning Box that asks:
"Are you sure you want to remove the selected rows(s)?" click the "OK" button.
f) Display will no longer show the 98 rows for <TARGET server host groups>. Turn off the filter and it will show 98 rows (2 paths x 49 LDEVs) for <SOURCE server host groups>.
g) Click "Finish" button to start the delete task.
9) Once the task is complete, return to Logical Devices and verify all 49 LDEVs now show they have only 2 paths (for the TARGET server host groups).
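As an aside, sites that prefer the command line over Storage Navigator can make the same LUN path changes with Hitachi's raidcom utility (part of the CCI/Horcm package), assuming a horcm instance and command device are already configured. The port, host group and LDEV ID values below are purely hypothetical placeholders - verify the exact syntax against the CCI reference guide for your array before relying on it:
Code:
raidcom get lun -port CL1-A-0                    # review the existing paths on a host group
raidcom add lun -port CL1-A-0 -ldev_id 4660      # add a path to the target server's host group
raidcom delete lun -port CL1-B-0 -ldev_id 4660   # remove the path from the source server's host group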
---
IV. ON SOURCE RHEL5.9
1) Run "rescan-scsi-bus.sh 2>&1 |tee -a rescan_scsi_bus_after.20160725" so the system will update to recognize it no longer has access to the 49 LDEVs.
2) Save new state information:
a) vgs >vgs_after.YYYYMMDD
b) lvs >lvs_after.YYYYMMDD
c) pvs >pvs_after.YYYYMMDD
d) df -hP >df_hP_after.YYYYMMDD
e) lsscsi >lsscsi_after.YYYYMMDD
f) ls /dev/sd* |inqraid -CLI -fxng >inqraid_CLI_fxng_after.YYYYMMDD
g) multipath -l -v2 >multipath_l_v2_after.YYYYMMDD
3) Compare the new state information to that saved previously, verifying the <SID> information from the original state save no longer appears in the new output (see the sketch below).
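One quick way to do the comparison in step 3, assuming the file names used in the earlier steps:
Code:
# diff each original state file against its _after counterpart; the <SID>
# VGs, LVs, filesystems and sd*/hldev* devices should only appear on the left
for f in vgs lvs pvs df_hP lsscsi inqraid_CLI_fxng multipath_l_v2
do echo "==== $f ===="
   diff ${f}.YYYYMMDD ${f}_after.YYYYMMDD
done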
---
V. ON TARGET RHEL 6.8
1) Modify /etc/multipath.conf to include entries for the <SID> LDEVs so they use names "hldev<ID>" where ID is the CU and number component of the LDEV ID from Hitachi.
NOTE: This is our customization to avoid "friendly" mpath## names and instead use names with "hldev" in them. Skip this if you don't have similar customizations. (An example entry appears after this list.)
2) Run "service multipathd restart" to read in the revised configuration.
3) Run "rescan-scsi-bus.sh 2>&1 |tee -a rescan_scsi_bus_after.YYYYMMDD" so the system will update to recognize it now ha 49 LDEVs. (This should add 98 entries 2 Qlogic paths x 49 LDEVs).
4) Review /var/log/messages - it should show new sd* devices and new multipath (hldev* or mpath*) devices were created.
5) Save new state information:
a) lsscsi >lsscsi_after.YYYYMMDD
b) ls /dev/sd* |inqraid -CLI -fxng >inqraid_CLI_fxng_after.YYYYMMDD
c) multipath -l -v2 >multipath_l_v2_after.YYYYMMDD
6) Compare new state information with original:
a) lsscsi should now include 98 new sd* devices from Hitachi.
b) inqraid should now include the same 98 devices as the lsscsi step and include the LDEV IDs and LDEV Names (if any - we assign LDEV names based on the VG we put the LDEVs in) previously seen on the source server.
c) multipath should now show 49 new hldev* or mpath* names each with two component sd* devices.
7) Run "pvscan" so the new devices are scanned to find the LVM information previously stored on them.
8) Run "vgimport" on target server on each of the VGs previously vgexported from source server.
9) Run "vgchange -ay" on each of the VGs just imported to activate them on atltst02.
10) Verify the /dev/mapper/VG<SID>* filesystem devices now exist on target.
11) Copy the entries for <SID> filesystems from source system into /etc/fstab on target system. Uncomment them on target.
12) Run "mkdir" for each of the <SID> mount point directories for filesystems being migrated.
13) Run "mount -a" to mount the new devices.
14) Save additional new state information by running:
a) vgs >vgs_after_CONV.YYYYMMDD
b) lvs >lvs_after_CONV.YYYYMMDD
c) pvs >pvs_after_CONV.YYYYMMDD
d) df -hP >df_hP_after_CONV.YYYYMMDD
15) Verify <SID> mounts shown in the new df output on target server match what was shown previously in source server's original state for the mounts.
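For step 1, this is the general shape of the /etc/multipath.conf entries we mean. The WWID and alias below are made-up examples only; use the WWIDs reported by "multipath -l" or inqraid for your own LDEVs and whatever alias convention suits your site:
Code:
multipaths {
        multipath {
                # hypothetical WWID - substitute the real WWID of the LDEV
                wwid    360060e8012345678901234567890abcd
                # alias follows our hldev<CU+LDEV number> convention
                alias   hldev0a12
        }
}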
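Steps 7 through 13 above can likewise be consolidated into a sketch. Again $SID is a hypothetical variable, the VG names assume the VG<SID>_SI/_REDO/_ORA scheme, and the fstab grep assumes the entries use /dev/mapper/VG<SID>* device paths:
Code:
SID=MYSID                      # hypothetical SID; adjust to your instance name
pvscan                         # 7) rediscover the LVM metadata on the new devices
for vg in VG${SID}_SI VG${SID}_REDO VG${SID}_ORA
do vgimport $vg                # 8) import the VGs exported on the source server
   vgchange -ay $vg            # 9) activate them
done
ls /dev/mapper/VG${SID}*       # 10) verify the LV device nodes now exist
# 11) copy the <SID> entries from the source server's /etc/fstab, then:
for mp in $(grep "VG${SID}" /etc/fstab |awk '{print $2}')
do mkdir -p $mp                # 12) create the mount point directories
done
mount -a                       # 13) mount the new filesystems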