SAN LUN migration using LVM2 on Red Hat Linux
Hi, we are trying to migrate data from one SAN array to another. The LUNs are mounted on a RHEL 5 server using LVM2. They are just linear logical volumes with file systems mounted. Now, using host-based mirroring, we are trying to migrate the data to the other SAN array without any downtime for the existing applications/databases. Could someone guide me through the exact procedure involved to achieve this?
I guess pvmove cannot be used on a live, mounted file system. Also, we would like to mirror the data first and then break the mirrors off the old SAN array. Steps:
1) Configure the new LUNs/physical volumes into the existing volume groups.
2) lvconvert the existing linear LVs to mirrored LVs.
3) Extend the VG with the new PVs and create the mirrored LVs on them.
4) After the data is synced properly, break the mirrors and remove the mirror leg whose storage comes from the old SAN array.
Could someone confirm/correct me on how we achieve this? Thanks in advance. |
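The steps above can be sketched roughly as follows. All the device, VG, and LV names here (OLD_PV, NEW_PV, datavg/datalv) are hypothetical placeholders for your environment, and a dry-run guard keeps anything from actually executing until you set RUN=1:

```shell
#!/bin/sh
# Hypothetical names -- substitute your own multipath devices, VG, and LV.
OLD_PV=/dev/mapper/mpath_old
NEW_PV=/dev/mapper/mpath_new
VG=datavg
LV=datalv

# Dry-run guard: print each command unless RUN=1 is set in the environment.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run pvcreate "$NEW_PV"                    # label the new LUN as an LVM PV
run vgextend "$VG" "$NEW_PV"              # add it to the existing VG
run lvconvert -m 1 --corelog "$VG/$LV"    # add a mirror leg; syncs online
# ...wait for the mirror to reach 100% sync, then:
run lvconvert -m 0 "$VG/$LV" "$OLD_PV"    # drop the leg on the old array
run vgreduce "$VG" "$OLD_PV"              # remove the old PV from the VG
```

This is only a sketch of the plan, not a tested procedure; the real command sequence that worked is posted later in this thread.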
Good luck!
Quote:
You didn't provide enough information. Is this a real SAN (i.e. a storage fabric), or just one or two external RAID arrays? Is this a single SAN with a new/old virtual disk? Two distinct SANs? How is the server connected to the undefined SAN(s)? I have serious doubts you can do this without downtime. I like dd for tasks like this. It's simple and powerful, like a hammer. Just like a hammer, it can cause lots of damage if you don't use it right. Rsync can do the job in many cases with less risk. Finally, I don't see any way you can roll back to the old configuration in your plan. Migrations can go horribly wrong, so you need to be able to back out all of your changes. |
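For context, dd copies raw blocks (something like dd if=/dev/mapper/old_lun of=/dev/mapper/new_lun bs=4M, with made-up device names), which is exactly why it is unsafe while the file system is mounted and changing. A harmless file-based illustration of the copy-then-compare pattern:

```shell
# Plain files standing in for block devices, purely for illustration.
printf 'some block data' > /tmp/old.img
dd if=/tmp/old.img of=/tmp/new.img bs=1M 2>/dev/null   # byte-for-byte copy
cmp -s /tmp/old.img /tmp/new.img && echo "copies match"
```

On a quiesced (unmounted) device the same pattern applies; on a live one, blocks written after dd has passed them are silently missed.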
Quote:
If the disks are being continuously used, how can dd do the job? And isn't rsync only used for syncing remote data? How would that fit into LVM? |
Hi,
rsync by itself can sync data on local systems; various people, myself included, use it for backup. Disks continuously in use will be the biggest problem: it means there is always some change in flight. What you are looking for is some kind of block-level mirroring between your SANs. For this we use Symantec Storage Foundation (Veritas VxVM), but the cost will be the problem, I think. Does anyone know of block-based mirroring software that can do the same as VxVM? |
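For what it's worth, rsync between two local paths is a one-liner. A trivial sketch using temporary directories (assuming rsync is installed; note the trailing slash on the source, which means "contents of"):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
rsync -a "$src/" "$dst/"   # archive mode: recursive, preserves metadata
cat "$dst/file.txt"        # prints: hello
```

Like dd, though, this only gives a consistent copy if the source stops changing, so it doesn't solve the live-migration problem either.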
Quote:
2. How will you handle a migration failure?
3. Our version of the HP EVA has a 'create snapclone' feature to move the disk images.
4. Our version of the HP EVA has a data replication feature.
5. Why are you mirroring AND using logical volumes at the host level? Both are needless complexity. The EVA is the better tool for the job!
It's unclear to me if you are moving from an old EVA to a new EVA or if you want new virtual disks on the old EVA. Please explain. It sounds like you are in way over your head. Both 3 and 4 will make the job easier, but downtime looks to be inevitable. Someone please prove me wrong, because I'd like to know how to eliminate downtime. |
There be Monsters!!
Quote:
That said, how did the servers get connected to the new SAN without downtime for changing the fibre channel hardware? |
unahb1,
The scenario you describe should work. I'm researching this topic because I need to do a similar migration: existing LVM volumes, each with a single LUN in them and a file system on top. My plan is also to add a PV with the new LUN, convert to a mirror, break the mirror, and remove the old PVs. It should take no downtime, provided the new LUNs are working at the OS level. |
Oh, and you can find some helpful details in this thread:
http://forums11.itrc.hp.com/service/...readId=1333563 |
I've written down the commands I used to do my LUN migration, for your information. Note that the link in the previous post is for HP-UX, not Linux.
1. Assume one mounted file system on a plain LVM volume with a single LUN:

/dev/mapper/ghost-ghost   92G   74G   14G  85%  /mnt/ghost

2. Get the new LUN working at the Linux level, for example using dm-multipath:

mpath13 (3600c0ff000d8230d16de5b4c01000000) dm-24 HP,MSA2312sa
[size=93G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:17 sdu  65:64  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:0:17 sdab 65:176 [active][undef]

3. Create the new physical volume:

[root@node11 ~]# pvcreate /dev/mapper/mpath13
  Physical volume "/dev/mapper/mpath13" successfully created

4. Extend the original volume group:

[root@node11 ~]# vgextend ghost /dev/mapper/mpath13
  Volume group "ghost" successfully extended

5. Convert the logical volume to a mirror with 2 legs:

[root@node11 ~]# lvconvert -m 1 ghost/ghost --corelog
  ghost/ghost: Converted: 12.2%
  ghost/ghost: Converted: 24.4%
  ghost/ghost: Converted: 36.2%
  ghost/ghost: Converted: 48.3%
  ghost/ghost: Converted: 60.3%
  ghost/ghost: Converted: 72.4%
  ghost/ghost: Converted: 84.6%
  ghost/ghost: Converted: 96.7%
  ghost/ghost: Converted: 100.0%
  Logical volume ghost converted.

The time this takes depends on disk speed; these 92 GB took about 5 minutes on my system. Remember that it'll take some performance away from the disks and the server, so don't do it during peak load hours.
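As a side note, the sync progress can also be polled with lvs (the copy_percent field) rather than watching lvconvert's own output. A small sketch; the VG/LV name is hypothetical, and the lvs invocation is kept overridable so the logic can be tried on a box without a real mirror:

```shell
#!/bin/sh
VG_LV=${VG_LV:-ghost/ghost}   # hypothetical VG/LV; adjust to your own
# Overridable for experimentation; in real use leave the default in place.
LVS=${LVS:-"lvs --noheadings -o copy_percent $VG_LV"}

pct=$($LVS 2>/dev/null | tr -d ' %')
if [ "$pct" = "100.00" ]; then
    echo "mirror in sync"
else
    echo "sync at ${pct:-unknown}%"
fi
```

Once this reports 100%, it should be safe to proceed with breaking the mirror as in steps 6 and 7.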
[root@node11 ~]# vgdisplay ghost -v
    Using volume group(s) on command line
    Finding volume group "ghost"
  --- Volume group ---
  VG Name               ghost
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  21
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               186.25 GB
  PE Size               4.00 MB
  Total PE              47681
  Alloc PE / Size       47616 / 186.00 GB
  Free  PE / Size       65 / 260.00 MB
  VG UUID               VLzsFf-vlqq-TmSR-2P53-izm4-dVH1-pcNeii

  --- Logical volume ---
  LV Name                /dev/ghost/ghost
  VG Name                ghost
  LV UUID                9Q9PrO-TBmP-1prT-8PSV-tFxT-3GR0-FE94Q9
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                93.00 GB
  Current LE             23808
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
    - currently set to   256
  Block device           253:16

  --- Logical volume ---
  LV Name                /dev/ghost/ghost_mimage_0
  VG Name                ghost
  LV UUID                w80F3L-JbHv-A5Dt-50dK-8N0k-h3IE-P0GzJL
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                93.00 GB
  Current LE             23808
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
    - currently set to   256
  Block device           253:25

  --- Logical volume ---
  LV Name                /dev/ghost/ghost_mimage_1
  VG Name                ghost
  LV UUID                5gOfn4-bCNB-tpG3-0gue-hhr9-k63p-E0Jb9U
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                93.00 GB
  Current LE             23808
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
    - currently set to   256
  Block device           253:26

  --- Physical volumes ---
  PV Name               /dev/dm-13
  PV UUID               R5henD-M3pW-dNaH-P4Xs-WBjW-fTWo-52nvTz
  PV Status             allocatable
  Total PE / Free PE    23840 / 32

  PV Name               /dev/dm-24
  PV UUID               3pPxpD-DX4U-ay6t-Gf2B-RoKP-Wfoz-rN2Usj
  PV Status             allocatable
  Total PE / Free PE    23841 / 33

Voila.

6. Convert the LV back to unmirrored, removing the old PV:

[root@node11 ~]# lvconvert -m 0 ghost/ghost /dev/dm-13
  Logical volume ghost converted.

7. Remove the old PV from the VG:

[root@node11 ~]# vgreduce ghost /dev/dm-13
  Removed "/dev/dm-13" from volume group "ghost"

Done! If you check the VG now with vgdisplay, you'll see it runs nicely on the new PV.
Tested on RHEL5.4 using HP SAS hardware. |
All, just thought I'd give you an update. This worked like a charm on all servers, except those using an Oracle ASM/RAC cluster, where ASM takes control of the hard disks/LUNs and only the Oracle volume manager manages them. In that scenario Linux LVM is of no use, and any attempt will corrupt those disks.
Thanks everyone for your support. |