How can I move a mounted LV from one VG to another?
I have several mounted filesystems, each residing on a LV within the same VG.
/dev/PC1_VG/FS1_LV
/dev/PC1_VG/FS2_LV
/dev/PC1_VG/FS3_LV
I need to move one of those filesystems to a different machine.
I have isolated a set of 4 PVs which fully contain the LV and have no other LVs on them. 100% of the PEs on each of the 4 PVs are allocated to that single LV.
--- Logical volume ---
LV Name /dev/PC1_VG/FS1_LV
VG Name PC1_VG
--- Segments ---
Logical extent 0 to 131067:
Type striped
Stripes 4
Stripe size 64 KB
Stripe 0:
Physical volume /dev/emcpowerd
Physical extents 0 to 32766
Stripe 1:
Physical volume /dev/emcpowerc
Physical extents 0 to 32766
Stripe 2:
Physical volume /dev/emcpowerb
Physical extents 0 to 32766
Stripe 3:
Physical volume /dev/emcpowera
Physical extents 0 to 32766
In other words, the 4 PVs contain only the LV which I wish to move.
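For anyone wanting to double-check the same condition on their own system, this is a rough sketch of how the "only this LV lives on these PVs" claim can be verified with standard LVM reporting commands (device names are the ones from this thread):

```shell
# Show the extent map of each candidate PV; every segment should
# belong to FS1_LV and there should be no free extents.
pvdisplay -m /dev/emcpowera /dev/emcpowerb /dev/emcpowerc /dev/emcpowerd

# Quick summary: pv_used should equal pv_size on each PV.
pvs -o pv_name,vg_name,pv_size,pv_used \
    /dev/emcpowera /dev/emcpowerb /dev/emcpowerc /dev/emcpowerd

# Per-segment view: only FS1_LV should appear in the lv column.
pvs --segments -o pv_name,lv_name \
    /dev/emcpowera /dev/emcpowerb /dev/emcpowerc /dev/emcpowerd
```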
It's my understanding that I can use the vgsplit command to move the LV to a new VG, which will then allow me to unpresent the LUNs for those 4 PVs and present them to the new host. The new host should properly see the definitions for the PVs, VG, and LV, which now contain just the single FS.
Prior experimentation has shown that if I present all the PVs for a given VG to a new host, that host will recognize the VG and its associated LVs correctly.
My question is: Do I have to umount the FS prior to running the vgsplit command?
I'm assuming the answer is yes, because the path to the LV will reference the new VG name instead of the old VG name.
The command I'm thinking of using is:
vgsplit -n /dev/PC1_VG/FS1_LV PC1_VG PC2_VG
From what I recall you don't need to umount the filesystem.
What you need to do is make sure the PVs you split off contain only the LV you want to move.
You can do that with pvmove.
After that you can use vgsplit.
Furthermore, I believe umounting shouldn't be a problem here (you want to disconnect the volume anyway); just do the pvmove beforehand.
When you want to actually move it, run vgsplit. If that is not possible with an active volume, then umount it first, then do the vgsplit.
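The sequence described above might look roughly like this (VG and device names are the ones from this thread; /dev/some_shared_pv is a hypothetical PV that still holds foreign extents):

```shell
# 1. Only needed if the LV's extents are still mixed with other LVs:
#    pvmove can relocate extents while the LV stays online.
pvmove /dev/some_shared_pv /dev/emcpowera

# 2. Split the dedicated PVs into a new VG. vgsplit moves every LV
#    whose extents live entirely on the listed PVs.
vgsplit PC1_VG PC2_VG /dev/emcpowera /dev/emcpowerb \
                      /dev/emcpowerc /dev/emcpowerd
```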
Disconnect from SAN
Connect on other node.
pvscan
vgscan
lvscan
vgchange -ay
I have read that in the past you had to have ALL LVs in the VG inactive. This would require you to boot with the rescue CD.
However, that dates from 2007, so it might be possible now.
From what I can tell from reading, it's only necessary that the PVs I'm attempting to move contain only the single LV that I'm trying to move to the new VG. Based on the manpage, I think I can move all 4 PVs together to the new VG.
I did some testing using a VM as a dummy test server, and found that I cannot use the vgsplit if/when the LV is active.
vgsplit -n /dev/PC1_VG/FS1_LV PC1_VG PC1_FS1_VG
Logical volume "FS1_LV" must be inactive
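Given that error, the offline sequence would presumably look something like the following sketch (the mount point /mnt/FS1 is hypothetical; VG and LV names are from this thread):

```shell
umount /mnt/FS1                       # free the filesystem
lvchange -an /dev/PC1_VG/FS1_LV       # deactivate the LV
vgsplit -n FS1_LV PC1_VG PC1_FS1_VG   # now the LV is inactive, vgsplit can run
lvchange -ay /dev/PC1_FS1_VG/FS1_LV   # reactivate under the new VG name
mount /dev/PC1_FS1_VG/FS1_LV /mnt/FS1 # note the new device path
```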
If the operation being attempted does not involve the physical movement of data from one drive to another, it is a simple configuration change that can be performed on-the-fly. If it does involve moving gigabytes, it cannot. It's that simple.
Maybe I'm missing something, then. I think I have met all the requirements I read about in the manual.
The goal is to move the LV to a separate VG so that I can present that VG to a new host and have it pick-up the pre-defined LV on the new machine.
I've tested (using a VM) moving a VG to a new host, and that works fine. The new host sees the LVM configuration on the PV LUNs when they are presented, and I could mount the FS.
The issue only comes up when the VG has multiple LVs associated with it: when I present just the PVs that are dedicated to the one LV, the new host complains about the other missing LVs that were in the VG. Thus, I am forced to conclude that the LV must be isolated in its own dedicated VG so the new machine won't have an issue.
I have isolated the LV to a set of PVs.
Those PVs don't have any other LVs on them, so they are 100% allocated to this LV and nothing else.
It's the vgsplit command that is telling me the LV must be inactive (lvchange -a n /dev/PC1_VG/FS1_LV).
The problem is, I don't think I can change the state of the LV to inactive unless it's umounted.
I would really like to do this online if possible, to minimize the time the LV is not available on one host or the other host.
Yes, as long as your FS is mounted the LV stays active, LVM will detect that, and vgsplit won't continue.
I think it is just not possible online.
If you have the command lines ready, the downtime should be very small.
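To keep the outage short, the whole move could be pre-staged as one script, roughly like this (names are the ones used in this thread; the mount point /mnt/FS1 is hypothetical, and the LUN unpresent/present step depends on your SAN tooling):

```shell
# --- On the old host ---
umount /mnt/FS1
lvchange -an /dev/PC1_VG/FS1_LV
vgsplit -n FS1_LV PC1_VG PC1_FS1_VG
vgchange -an PC1_FS1_VG   # leave the new VG inactive before unmapping the LUNs

# ...unpresent the four LUNs from this host, present them to the new host...

# --- On the new host ---
pvscan
vgscan
lvscan
vgchange -ay PC1_FS1_VG
mount /dev/PC1_FS1_VG/FS1_LV /mnt/FS1
```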