LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   SAN LUN's migrate using LVM2 on redhat linux (https://www.linuxquestions.org/questions/linux-server-73/san-luns-migrate-using-lvm2-on-redhat-linux-816326/)

unahb1 06-25-2010 07:36 AM

SAN LUN migration using LVM2 on Red Hat Linux
 
Hi, we are trying to migrate data from one SAN array to another. The LUNs are mounted on an RHEL 5 server using LVM2; they are just linear logical volumes with file systems mounted on them. Using host-based mirroring, we want to migrate the data to the new SAN array without any downtime for the existing applications/database. Could someone guide me through the exact procedure involved?

I guess pvmove cannot be used on live, mounted file systems.
Also, we would like to mirror the data first and then break the mirror legs on the old SAN array.

Steps:
1) Configure the new LUNs as physical volumes for the existing volume groups.
2) lvconvert the existing linear LVs to mirrored LVs.
3) Extend the VGs with the new PVs and place the new mirror legs on them.
4) After the data is synced properly, break the mirrors and remove the legs backed by the old SAN array (rough command sketch below).
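
In LVM2 command terms, I think the plan amounts to roughly the following. This is only a sketch; the VG, LV and device names are placeholders:

pvcreate /dev/mapper/new_lun                                 # label the new LUN as a PV
vgextend datavg /dev/mapper/new_lun                          # add it to the existing VG
lvconvert -m 1 --corelog datavg/datalv /dev/mapper/new_lun   # add a mirror leg on the new PV (--corelog keeps the mirror log in memory)
# ... wait for the mirror to finish syncing ...
lvconvert -m 0 datavg/datalv /dev/mapper/old_lun             # drop the leg that lives on the old PV
vgreduce datavg /dev/mapper/old_lun                          # remove the old PV from the VG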

Could someone confirm or correct me on how to achieve this?
Thanks in advance.

mpapet 06-25-2010 11:57 AM

Good luck!
 
Quote:

Originally Posted by unahb1 (Post 4014670)
Using host-based mirroring, we want to migrate the data to the new SAN array without any downtime for the existing applications/database.



You didn't provide enough information. Is this a real SAN (e.g., a storage fabric), or just one or two external RAID arrays? Is this a single SAN with a new and an old virtual disk? Two distinct SANs? How is the server connected to the undefined SAN(s)?

I have serious doubts you can do this without downtime. I like dd for tasks like this. It's simple and powerful, like a hammer. Just like a hammer, it can cause lots of damage if you don't use it right. Rsync can do the job in many cases with less risk.
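
For example, a raw block copy with dd would look something like this (the device names are placeholders, and the file system has to be unmounted or the application stopped to get a consistent copy):

umount /mnt/data                                          # source must be quiesced for a consistent image
dd if=/dev/mapper/old_lun of=/dev/mapper/new_lun bs=4M    # raw copy of the whole LUN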

Finally, I don't see any way in your plan to roll back to the old configuration. Migrations can go horribly wrong, so you need to be able to back out all of your changes.

unahb1 06-28-2010 04:12 AM

Quote:

Originally Posted by mpapet (Post 4014917)
You didn't provide enough information. Is this a real SAN (e.g., a storage fabric), or just one or two external RAID arrays? Is this a single SAN with a new and an old virtual disk? Two distinct SANs? How is the server connected to the undefined SAN(s)?

I have serious doubts you can do this without downtime. I like dd for tasks like this. It's simple and powerful, like a hammer. Just like a hammer, it can cause lots of damage if you don't use it right. Rsync can do the job in many cases with less risk.

Finally, I don't see any way in your plan to roll back to the old configuration. Migrations can go horribly wrong, so you need to be able to back out all of your changes.

Hi, thanks for the quick response. Yes, it is a real SAN: HP EVA/XP arrays. The server is connected to the SANs with Fibre Channel HBA cards.
If the disks are in continuous use, how can dd do the job? And isn't rsync only used for syncing remote data? How would that fit in with LVM?

mesiol 06-28-2010 08:44 AM

Hi,

rsync can sync data on local systems too; many people, myself included, use it for backups. Disks that are continuously in use will be the biggest problem, because something is changing all the time. What you are looking for is some kind of block-level mirroring between your SANs. For this we use Symantec Storage Foundation (Veritas VxVM), but I think the cost will be a problem.
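
For the record, a local rsync pass is just this (the mount points are examples):

rsync -aH --delete /mnt/old_fs/ /mnt/new_fs/   # first pass while the application is running
rsync -aH --delete /mnt/old_fs/ /mnt/new_fs/   # repeat during a short application stop to catch the last changes

But as said above, on a file system that is continuously written to you still need that short stop for the final pass.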

Does anyone know of block-based mirroring software that can do the same as VxVM?

mpapet 06-28-2010 12:18 PM

Quote:

Originally Posted by unahb1 (Post 4016985)
Hi, thanks for the quick response. Yes, it is a real SAN: HP EVA/XP arrays. The server is connected to the SANs with Fibre Channel HBA cards.
If the disks are in continuous use, how can dd do the job? And isn't rsync only used for syncing remote data? How would that fit in with LVM?

1. You are ignoring the fact that this probably can't be done without downtime. Maybe 30 minutes if you can rehearse it first.
2. How will you handle a migration failure?
3. Our version of an HP EVA has the 'create snapclone' feature to move the disk images.
4. Our version of the HP EVA has a data replication feature.
5. Why are you mirroring AND using logical volumes at the host level? Both are needless complexity. The EVA is the better tool for the job!

It's unclear to me if you are moving from an old EVA to a new EVA or if you want new virtual disks on the old EVA. Please explain.

It sounds like you are in way over your head. Both 3 and 4 will make the job easier, but downtime looks to be inevitable. Someone please prove me wrong because I'd like to know how to eliminate downtime.

unahb1 06-29-2010 07:43 AM

Quote:

Originally Posted by mpapet (Post 4017352)
1. You are ignoring the fact that this probably can't be done without downtime. Maybe 30 minutes if you can rehearse it first.
2. How will you handle a migration failure?
3. Our version of an HP EVA has the 'create snapclone' feature to move the disk images.
4. Our version of the HP EVA has a data replication feature.
5. Why are you mirroring AND using logical volumes at the host level? Both are needless complexity. The EVA is the better tool for the job!

It's unclear to me if you are moving from an old EVA to a new EVA or if you want new virtual disks on the old EVA. Please explain.

It sounds like you are in way over your head. Both 3 and 4 will make the job easier, but downtime looks to be inevitable. Someone please prove me wrong because I'd like to know how to eliminate downtime.

Thanks for the response. It's getting tough now. Instead of using SAN-based migration tools, our company decided to go with host-based mirroring, e.g. using LVM; the reason given is to avoid downtime. Yes, we are moving data from the old EVA to a new one. I am aware that LVM mirroring is complex compared to using the HP SAN tools, but I have no choice. Can a similar scenario be achieved on Solaris using either SVM or VxVM?

mpapet 06-29-2010 11:45 AM

There be Monsters!!
 
Quote:

Originally Posted by unahb1 (Post 4018151)
Instead of using SAN-based migration tools, our company decided to go with host-based mirroring, e.g. using LVM; the reason given is to avoid downtime. Yes, we are moving data from the old EVA to a new one.

I don't see how you can make it work without downtime. IMHO, you are on the wrong end of a migration that will go badly. There's going to be way more downtime when the applications and database blow up due to FUBAR'd disk writes. If you can't make that message stick, then I'd start looking for another job.


That said, how did the servers get connected to the new SAN without downtime for changing the Fibre Channel hardware?

tristanz 07-02-2010 05:34 AM

unahb1,

The scenario you describe should work. I'm researching this topic because I need to do a similar migration: existing LVM volumes, each with a single LUN in it and a file system on top. My plan is also to add a PV on the new LUN, convert to a mirror, break the mirror, and remove the old PVs. It should take no downtime, provided the new LUNs are working at the OS level.

tristanz 07-02-2010 05:35 AM

Oh, and you can find some helpful details in this thread:

http://forums11.itrc.hp.com/service/...readId=1333563

unahb1 07-16-2010 11:13 AM

Quote:

Originally Posted by tristanz (Post 4021533)
unahb1,

The scenario you describe should work. I'm researching this topic because I need to do a similar migration: existing LVM volumes, each with a single LUN in it and a file system on top. My plan is also to add a PV on the new LUN, convert to a mirror, break the mirror, and remove the old PVs. It should take no downtime, provided the new LUNs are working at the OS level.

Thanks tristanz. Yes, the LUNs are working/controlled at the OS level, using software mirroring, i.e. LVM2.

tristanz 08-06-2010 08:57 AM

For your information, I've written down the commands I used to do my LUN migration. Note that the link in the previous post is for HP-UX, not Linux.

1. Assume one mounted file system on a plain (linear) LVM volume backed by a single LUN:
/dev/mapper/ghost-ghost 92G 74G 14G 85% /mnt/ghost

2. Get the new LUN visible at the Linux level, for example via dm-multipath:
mpath13 (3600c0ff000d8230d16de5b4c01000000) dm-24 HP,MSA2312sa
[size=93G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:17 sdu  65:64  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:0:17 sdab 65:176 [active][undef]
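
If the new LUN is not visible yet, the FC HBAs usually need a rescan first. The host numbers here are just examples:

echo "- - -" > /sys/class/scsi_host/host0/scan   # repeat for each HBA (host1, ...)
multipath                                        # rebuild the multipath maps, then check with multipath -ll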

3. Create new physical volume:
[root@node11 ~]# pvcreate /dev/mapper/mpath13
Physical volume "/dev/mapper/mpath13" successfully created

4. Extend original volume group:
[root@node11 ~]# vgextend ghost /dev/mapper/mpath13
Volume group "ghost" successfully extended

5. Convert logical volume to a mirror with 2 legs:
[root@node11 ~]# lvconvert -m 1 ghost/ghost --corelog
ghost/ghost: Converted: 12.2%
ghost/ghost: Converted: 24.4%
ghost/ghost: Converted: 36.2%
ghost/ghost: Converted: 48.3%
ghost/ghost: Converted: 60.3%
ghost/ghost: Converted: 72.4%
ghost/ghost: Converted: 84.6%
ghost/ghost: Converted: 96.7%
ghost/ghost: Converted: 100.0%
Logical volume ghost converted.
The time it takes depends on disk speed; this 92 GB took about 5 minutes on my system. Remember that the sync takes some performance away from the disks and the server, so don't do it during peak load hours.
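
If you lose the progress output, the sync state can also be checked before breaking the mirror; the copy_percent field has to reach 100 first:

lvs -a -o name,copy_percent,devices ghost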

[root@node11 ~]# vgdisplay ghost -v
Using volume group(s) on command line
Finding volume group "ghost"
--- Volume group ---
VG Name ghost
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 21
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 186.25 GB
PE Size 4.00 MB
Total PE 47681
Alloc PE / Size 47616 / 186.00 GB
Free PE / Size 65 / 260.00 MB
VG UUID VLzsFf-vlqq-TmSR-2P53-izm4-dVH1-pcNeii

--- Logical volume ---
LV Name /dev/ghost/ghost
VG Name ghost
LV UUID 9Q9PrO-TBmP-1prT-8PSV-tFxT-3GR0-FE94Q9
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:16

--- Logical volume ---
LV Name /dev/ghost/ghost_mimage_0
VG Name ghost
LV UUID w80F3L-JbHv-A5Dt-50dK-8N0k-h3IE-P0GzJL
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:25

--- Logical volume ---
LV Name /dev/ghost/ghost_mimage_1
VG Name ghost
LV UUID 5gOfn4-bCNB-tpG3-0gue-hhr9-k63p-E0Jb9U
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:26

--- Physical volumes ---
PV Name /dev/dm-13
PV UUID R5henD-M3pW-dNaH-P4Xs-WBjW-fTWo-52nvTz
PV Status allocatable
Total PE / Free PE 23840 / 32

PV Name /dev/dm-24
PV UUID 3pPxpD-DX4U-ay6t-Gf2B-RoKP-Wfoz-rN2Usj
PV Status allocatable
Total PE / Free PE 23841 / 33

Voila.

6. Convert the LV back to linear (unmirrored), dropping the leg on the old PV:
[root@node11 ~]# lvconvert -m 0 ghost/ghost /dev/dm-13
Logical volume ghost converted.

7. Remove the old PV from the VG:
[root@node11 ~]# vgreduce ghost /dev/dm-13
Removed "/dev/dm-13" from volume group "ghost"

Done!
If you check the VG now with vgdisplay, you'll see it runs entirely on the new PV.
Tested on RHEL 5.4 using HP SAS hardware.
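
One optional cleanup step before unpresenting the old LUN on the array, using the device names from the example above (the multipath map name depends on your setup):

pvremove /dev/dm-13     # wipe the LVM label from the old PV
multipath -f mpathNN    # flush the old multipath map (replace mpathNN with the real name)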

unahb1 10-01-2010 07:49 AM

All, just thought of giving you an update. This worked like a charm on all servers, except the ones running Oracle ASM/RAC clusters, where ASM takes control of the disks/LUNs and only Oracle's own volume manager manages them. In that scenario Linux LVM is of no use, and any attempt to use it will corrupt those disks.

Thanks everyone for your support.

