Old 06-25-2010, 07:36 AM   #1
unahb1
LQ Newbie
 
Registered: Aug 2004
Posts: 6

Rep: Reputation: 0
Migrating SAN LUNs using LVM2 on Red Hat Linux


Hi, we are trying to migrate data from one SAN array to another. The LUNs are mounted on a RHEL 5 server using LVM2; they are just linear logical volumes with file systems mounted. Using host-based mirroring, we are trying to migrate the data to the other SAN array without any downtime for the existing applications/database. Could someone guide me through the exact procedure involved?

I guess pvmove cannot be used on live, mounted file systems.
Also, we would like to mirror the data first and then break the mirrors off the old SAN array.

Steps:
1) Configure the new LUNs/physical volumes onto the existing volume groups.
2) lvconvert the existing linear LVs to mirrored LVs.
3) Extend the VG with the new PVs and create the mirror legs on them.
4) After the data is synced properly, break the mirrors and remove the legs backed by the old SAN array (rough commands below).
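
In command terms, I think the above would be roughly the following (vg01/lv01 and the mpath names are placeholders for our real ones):

pvcreate /dev/mapper/mpath_new                             # new SAN LUN becomes a PV
vgextend vg01 /dev/mapper/mpath_new                        # add it to the existing VG
lvconvert -m 1 --corelog vg01/lv01 /dev/mapper/mpath_new   # add the mirror leg on the new PV
# ...wait for the mirror to reach 100% sync...
lvconvert -m 0 vg01/lv01 /dev/mapper/mpath_old             # break off the old leg
vgreduce vg01 /dev/mapper/mpath_old                        # remove the old PV from the VG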

Could someone confirm / correct me on how to achieve this?
Thanks in advance.
 
Old 06-25-2010, 11:57 AM   #2
mpapet
Member
 
Registered: Nov 2003
Location: Los Angeles
Distribution: debian
Posts: 548

Rep: Reputation: 72
Good luck!

Quote:
Originally Posted by unahb1 View Post
using host-based mirroring, we are trying to migrate the data to the other SAN array without any downtime for the existing applications/database.


You didn't provide enough information. Is this a real SAN (i.e. a storage fabric), or just one or two external RAID arrays? Is this a single SAN with a new and an old virtual disk? Two distinct SANs? How is the server connected to the undefined SAN(s)?

I have serious doubts you can do this without downtime. I like dd for tasks like this: it's simple and powerful, like a hammer. And just like a hammer, it can cause a lot of damage if you don't use it right. rsync can do the job in many cases with less risk.
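
For illustration, a dd copy would be something like this (device names made up; the file system has to be unmounted, which is exactly where your downtime comes from):

umount /mnt/data                                         # quiesce: no writes during the copy
dd if=/dev/mapper/old_lun of=/dev/mapper/new_lun bs=1M   # raw block copy; new LUN must be at least as big
# then remount from the new device and verify before touching the old one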

Finally, I don't see any way to roll back to the old configuration in your plan. Migrations can go horribly wrong, so you need to be able to back out all of your changes.

Last edited by mpapet; 06-25-2010 at 12:08 PM.
 
Old 06-28-2010, 04:12 AM   #3
unahb1
LQ Newbie
 
Registered: Aug 2004
Posts: 6

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by mpapet View Post
You didn't provide enough information. Is this a real SAN (i.e. a storage fabric), or just one or two external RAID arrays? Is this a single SAN with a new and an old virtual disk? Two distinct SANs? How is the server connected to the undefined SAN(s)?

I have serious doubts you can do this without downtime. I like dd for tasks like this: it's simple and powerful, like a hammer. And just like a hammer, it can cause a lot of damage if you don't use it right. rsync can do the job in many cases with less risk.

Finally, I don't see any way to roll back to the old configuration in your plan. Migrations can go horribly wrong, so you need to be able to back out all of your changes.
Hi, thanks for the quick response. Yes, it is a real SAN: HP EVA/XP arrays. The server is connected to the SANs using Fibre Channel HBA cards.
If the disks are being used continuously, how can dd do the job? And isn't rsync only used for syncing remote data? How does that fit in with LVM?
 
Old 06-28-2010, 08:44 AM   #4
mesiol
Member
 
Registered: Nov 2008
Location: Lower Saxony, Germany
Distribution: CentOS, RHEL, Solaris 10, AIX, HP-UX
Posts: 731

Rep: Reputation: 137
Hi,

rsync can also sync data between local paths; many people, myself included, use it for backups. Disks continuously in use will be the biggest problem, since something changes all the time. What you are looking for is some kind of block-level mirroring between your SANs. For this we use Symantec Storage Foundation (Veritas VxVM), but I think the cost will be a problem.
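
For example, a local-to-local rsync looks like this (paths made up):

rsync -aH --delete /mnt/oldfs/ /mnt/newfs/   # first pass while the application runs
rsync -aH --delete /mnt/oldfs/ /mnt/newfs/   # final pass with the application stopped, so nothing changes mid-copy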

Does anyone know of block-based mirroring software that can do the same as VxVM?
 
Old 06-28-2010, 12:18 PM   #5
mpapet
Member
 
Registered: Nov 2003
Location: Los Angeles
Distribution: debian
Posts: 548

Rep: Reputation: 72
Quote:
Originally Posted by unahb1 View Post
Hi, thanks for the quick response. Yes, it is a real SAN: HP EVA/XP arrays. The server is connected to the SANs using Fibre Channel HBA cards.
If the disks are being used continuously, how can dd do the job? And isn't rsync only used for syncing remote data? How does that fit in with LVM?
1. You are ignoring the fact that this probably can't be done without downtime. Maybe 30 minutes, if you can practice first.
2. How will you handle a migration failure?
3. Our version of the HP EVA has a 'create snapclone' feature to move the disk images.
4. Our version of the HP EVA has a data replication feature.
5. Why are you mirroring AND using logical volumes at the host level? Both are needless complexity. The EVA is the better tool for the job!

It's unclear to me whether you are moving from an old EVA to a new EVA, or whether you want new virtual disks on the old EVA. Please explain.

It sounds like you are in way over your head. Both 3 and 4 will make the job easier, but downtime looks inevitable. Someone please prove me wrong, because I'd like to know how to eliminate downtime.

Last edited by mpapet; 06-28-2010 at 12:24 PM.
 
Old 06-29-2010, 07:43 AM   #6
unahb1
LQ Newbie
 
Registered: Aug 2004
Posts: 6

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by mpapet View Post
1. You are ignoring the fact that this probably can't be done without downtime. Maybe 30 minutes, if you can practice first.
2. How will you handle a migration failure?
3. Our version of the HP EVA has a 'create snapclone' feature to move the disk images.
4. Our version of the HP EVA has a data replication feature.
5. Why are you mirroring AND using logical volumes at the host level? Both are needless complexity. The EVA is the better tool for the job!

It's unclear to me whether you are moving from an old EVA to a new EVA, or whether you want new virtual disks on the old EVA. Please explain.

It sounds like you are in way over your head. Both 3 and 4 will make the job easier, but downtime looks inevitable. Someone please prove me wrong, because I'd like to know how to eliminate downtime.
Thanks for the response. Oh, it's getting tough now. Instead of using SAN-based migration tools, our company decided to go with host-based mirroring, e.g. using LVM; the reason given is to avoid downtime. Yes, we are moving data from the old EVA to a new one. I am aware that LVM mirroring is complex compared to the HP SAN tools, but I have no choice here. Can a similar scenario be achieved on Solaris using either SVM or VxVM?
 
Old 06-29-2010, 11:45 AM   #7
mpapet
Member
 
Registered: Nov 2003
Location: Los Angeles
Distribution: debian
Posts: 548

Rep: Reputation: 72
There be Monsters!!

Quote:
Originally Posted by unahb1 View Post
Instead of using SAN-based migration tools, our company decided to go with host-based mirroring, e.g. using LVM; the reason given is to avoid downtime. Yes, we are moving data from the old EVA to a new one.
I don't see how you can make it work without downtime. IMHO, you are on the wrong end of a migration that will go badly. There's going to be way more downtime when the applications and database blow up due to FUBAR'd disk writes. If you can't make that message stick, then I'd start looking for another job.


That said, how did the servers get connected to the new SAN without downtime to change the Fibre Channel hardware?
 
Old 07-02-2010, 05:34 AM   #8
tristanz
LQ Newbie
 
Registered: Jul 2010
Posts: 3

Rep: Reputation: 0
unahb1,

The scenario you describe should work. I'm researching this topic because I need to do a similar migration: existing LVM volumes, each with a single LUN in them and a file system on top. My plan is also to add a PV with the new LUN, convert to a mirror, break the mirror, and remove the old PVs. It should take no downtime, provided the new LUNs are working at the OS level.
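
To confirm the new LUNs are visible at the OS level, something like this works on RHEL 5 (host numbers depend on your HBAs):

echo "- - -" > /sys/class/scsi_host/host0/scan   # rescan the first HBA; repeat per host
cat /proc/partitions                             # the new sdX devices should show up
multipath -ll                                    # and the multipath maps on top of them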
 
Old 07-02-2010, 05:35 AM   #9
tristanz
LQ Newbie
 
Registered: Jul 2010
Posts: 3

Rep: Reputation: 0
Oh, and you can find some helpful details in this thread:

http://forums11.itrc.hp.com/service/...readId=1333563
 
Old 07-16-2010, 11:13 AM   #10
unahb1
LQ Newbie
 
Registered: Aug 2004
Posts: 6

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by tristanz View Post
unahb1,

The scenario you describe should work. I'm researching this topic because I need to do a similar migration: existing LVM volumes, each with a single LUN in them and a file system on top. My plan is also to add a PV with the new LUN, convert to a mirror, break the mirror, and remove the old PVs. It should take no downtime, provided the new LUNs are working at the OS level.
Thanks tristanz. Yes, the LUNs are working/controlled at the OS level, using software RAID, i.e. LVM2.
 
Old 08-06-2010, 08:57 AM   #11
tristanz
LQ Newbie
 
Registered: Jul 2010
Posts: 3

Rep: Reputation: 0
I've written down the commands I used to do my LUN migration, for your information. Note that the link in my previous post is for HP-UX, not Linux.

1. Assume one mounted file system on a plain LVM volume with a single LUN:
/dev/mapper/ghost-ghost 92G 74G 14G 85% /mnt/ghost

2. Get the new LUN working at the Linux level, for example using dm-multipath:
mpath13 (3600c0ff000d8230d16de5b4c01000000) dm-24 HP,MSA2312sa
[size=93G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:17 sdu 65:64 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:0:17 sdab 65:176 [active][undef]

3. Create new physical volume:
[root@node11 ~]# pvcreate /dev/mapper/mpath13
Physical volume "/dev/mapper/mpath13" successfully created

4. Extend original volume group:
[root@node11 ~]# vgextend ghost /dev/mapper/mpath13
Volume group "ghost" successfully extended

5. Convert logical volume to a mirror with 2 legs:
[root@node11 ~]# lvconvert -m 1 ghost/ghost --corelog
ghost/ghost: Converted: 12.2%
ghost/ghost: Converted: 24.4%
ghost/ghost: Converted: 36.2%
ghost/ghost: Converted: 48.3%
ghost/ghost: Converted: 60.3%
ghost/ghost: Converted: 72.4%
ghost/ghost: Converted: 84.6%
ghost/ghost: Converted: 96.7%
ghost/ghost: Converted: 100.0%
Logical volume ghost converted.
How long it takes depends on disk speed; this 92 GB took about 5 minutes on my system. Remember that it will take some performance from the disks and the server, so don't do it during peak load hours.
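
By the way, --corelog keeps the mirror log in memory instead of on a third log device, which is fine for a short-lived migration mirror like this (the trade-off is a full resync if the LV is deactivated). You can watch the sync from another terminal:

lvs -a -o +devices,copy_percent ghost   # copy_percent shows the mirror sync progress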

[root@node11 ~]# vgdisplay ghost -v
Using volume group(s) on command line
Finding volume group "ghost"
--- Volume group ---
VG Name ghost
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 21
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 186.25 GB
PE Size 4.00 MB
Total PE 47681
Alloc PE / Size 47616 / 186.00 GB
Free PE / Size 65 / 260.00 MB
VG UUID VLzsFf-vlqq-TmSR-2P53-izm4-dVH1-pcNeii

--- Logical volume ---
LV Name /dev/ghost/ghost
VG Name ghost
LV UUID 9Q9PrO-TBmP-1prT-8PSV-tFxT-3GR0-FE94Q9
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:16

--- Logical volume ---
LV Name /dev/ghost/ghost_mimage_0
VG Name ghost
LV UUID w80F3L-JbHv-A5Dt-50dK-8N0k-h3IE-P0GzJL
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:25

--- Logical volume ---
LV Name /dev/ghost/ghost_mimage_1
VG Name ghost
LV UUID 5gOfn4-bCNB-tpG3-0gue-hhr9-k63p-E0Jb9U
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:26

--- Physical volumes ---
PV Name /dev/dm-13
PV UUID R5henD-M3pW-dNaH-P4Xs-WBjW-fTWo-52nvTz
PV Status allocatable
Total PE / Free PE 23840 / 32

PV Name /dev/dm-24
PV UUID 3pPxpD-DX4U-ay6t-Gf2B-RoKP-Wfoz-rN2Usj
PV Status allocatable
Total PE / Free PE 23841 / 33

Voila.

6. Convert the LV back to unmirrored, dropping the leg on the old PV:
[root@node11 ~]# lvconvert -m 0 ghost/ghost /dev/dm-13
Logical volume ghost converted.

7. Remove the old PV from the VG:
[root@node11 ~]# vgreduce ghost /dev/dm-13
Removed "/dev/dm-13" from volume group "ghost"
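
Optionally (I didn't include it in the transcript above), wipe the LVM label off the old PV before unpresenting the LUN, so nothing mistakes it for an LVM disk later:

pvremove /dev/dm-13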

Done!
If you check the VG now with vgdisplay, you'll see it runs nicely on the new PV.
Tested on RHEL 5.4 on HP SAS hardware.
 
Old 10-01-2010, 07:49 AM   #12
unahb1
LQ Newbie
 
Registered: Aug 2004
Posts: 6

Original Poster
Rep: Reputation: 0
All, just thought I'd give you an update. This worked like a charm on all servers, except those using an Oracle ASM/RAC cluster, where Oracle takes control of the hard disks/LUNs and only the Oracle volume manager controls them. In that scenario Linux LVM is of no use, and any attempt will corrupt those disks.

Thanks everyone for your support.
 
  

