[SOLVED] How to Rescue a VM from LVM storage pool?
Linux - Virtualization and Cloud
I'm looking to see if I can rescue a guest VM from a LVM Storage Pool.
I have a server with 2 disks. The CentOS 6.4 HOST OS was on /dev/sda and I had /dev/sdb dedicated to a LVM based storage pool where I had only one CentOS guest VM.
The HOST OS disk /dev/sda is broken. It doesn't appear to be recoverable, and I have no backups (I know, that will be my next project after I get this all back up and running). All rescue tools I've tried produce many I/O errors on sda.
The 2nd disk, sdb, is intact, but I don't know how to retrieve anything from it.
When I set this up, I followed the RedHat Virtualization Admin manual "12.1.4. LVM-based storage pools". So sdb had one physical volume and a volume group.
I inserted the good sdb disk into another/different healthy CentOS 6.5 server (where it shows up as /dev/sdc). lsblk shows this:
sdc 8:32 0 223.6G 0 disk
\_sdc1 8:33 0 223G 0 part
pvs and vgs don't show anything for the disk I'm trying to recover (sdc on the healthy server).
Is there a way to rescue the healthy disk to retrieve files from it or even boot the guest VM?
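Before anything else, it is worth checking whether the transplanted disk still carries LVM signatures at all. A minimal sketch (device names follow this thread; run as root, all commands are read-only):

```shell
# Hedged sketch: probe the rescued disk for LVM signatures.
# /dev/sdc is the disk as it appears in the healthy server; adjust to taste.
blkid /dev/sdc /dev/sdc1   # an intact PV partition shows TYPE="LVM2_member"
file -s /dev/sdc1          # alternative signature check on the raw partition
pvs -a                     # list every physical volume LVM can currently see
pvck /dev/sdc1             # validate on-disk LVM metadata, if any is present
```

If blkid reports no LVM2_member signature on any partition, the problem is the partition table or the PV label, not VG activation.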
Quote: pvs and vgs don't show anything for the disk I'm trying to recover (sdc on the healthy server).
Should we infer from this that you have a similar setup to the other machine (sda/sdb)? Same-named VGs and LVs, maybe?
If so, you can't see the added disk entities until you rename them. Used to be a right PITA, but now I think vgrename, lvrename might help - haven't had a need to try them.
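If name collisions are the issue, a duplicate VG can be renamed by its UUID rather than its (ambiguous) name. A sketch, assuming a collision exists; the UUID shown is a placeholder you would copy from the vgs output (run as root):

```shell
# Hedged sketch: rename a colliding volume group by UUID.
vgs -o vg_name,vg_uuid           # list VG names alongside their UUIDs
# Substitute the real UUID reported above; "rescued_vg" is a made-up new name.
vgrename Zvlifi-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx rescued_vg
vgchange -a y rescued_vg         # activate it under the new name
```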
You may need to activate LVM on the disk you're trying to retrieve data from. If so, the command is:
vgchange -a y
There will be a problem if the testing system has any logical volumes with the same name as the recover disk had, if so you could try using a system booted off a live linux CD (fedora live cd works for me).
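The scan-then-activate sequence suggested above can be sketched as follows (run as root; these are standard lvm2 commands, but the exact output depends on what metadata survives):

```shell
# Hedged sketch: make LVM re-discover and activate volumes on the added disk.
pvscan            # rescan all block devices for physical-volume labels
vgscan --mknodes  # find volume groups and recreate their /dev nodes
vgchange -a y     # activate all logical volumes in the discovered VGs
lvs               # confirm the LVs are now visible and active
```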
Quote: Should we infer from this that you have a similar setup to the other machine (sda/sdb)? Same-named VGs and LVs, maybe?
If so, you can't see the added disk entities until you rename them. Used to be a right PITA, but now I think vgrename, lvrename might help - haven't had a need to try them.
The old machine had sda+sdb, where sda held the HOST OS and is the broken/non-bootable drive; sdb is the drive I want to rescue the virtual machine from. The new machine has sda+sdb+sdc, where sda is a USB key with ESX on it (not using it, should have removed it), sdb is a fresh install of CentOS, and sdc is the drive I'm trying to rescue (the sdb I pulled from the other machine).
The VGs are named differently, so there shouldn't be a name collision. Each VG name contains its machine's name, and the machines have different names.
pvscan and vgscan didn't show me any other physical volumes or volume groups from sdc.
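When pvscan sees nothing, one can still look for the raw LVM2 PV label, which lives in one of the first four 512-byte sectors of the PV (normally sector 1) and starts with the magic string "LABELONE". A sketch demonstrated on a scratch image file so it is safe to run anywhere; on the real disk you would point the final dd at /dev/sdc or /dev/sdc1, read-only:

```shell
# Hedged sketch: search the first sectors of a device for the LVM2 PV label.
# Build a scratch image with a fake label at sector 1, then scan for it.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=8 status=none
printf 'LABELONE' | dd of="$img" bs=512 seek=1 conv=notrunc status=none
# The scan itself: read the first few sectors and grep for the magic string.
# On real hardware: dd if=/dev/sdc1 bs=512 count=8 status=none | grep -a -o 'LABELONE'
found=$(dd if="$img" bs=512 count=8 status=none | grep -a -o 'LABELONE')
echo "$found"
rm -f "$img"
```

If the label is present but pvscan still finds nothing, the partition table (rather than the PV) is the likely casualty, which matches what testdisk found below.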
Am I to presume that when you create PVs/VGs/LVs, that info lives on the hard drive the original OS was booted from? That would prevent the drives from being portable to other machines. I don't know much about how PVs/VGs/LVs work, but it would also mean that if you lose the boot OS, you lose ALL the disks in the machine. I can't believe Linux would be that crippling - I must be doing something wrong here.
I used a program called testdisk to search for partitions; it found an LVM partition and updated the partition table. After powering the machine down and booting back up, pvscan and vgscan found a VG+LV, and I'm able to mount it and read all the data.
I doubt I'll be able to boot it though as the /boot partition isn't there. I wish I knew how this disappeared to begin with, but I'm glad I was able to rescue the filesystem to make a backup of it.
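For anyone following the same path, the final mount-and-backup step can be sketched as below (run as root). "vg_guest" and "lv_root" are placeholder names; substitute whatever lvs actually reports after the testdisk repair:

```shell
# Hedged sketch: mount the recovered logical volume read-only and copy it off.
vgchange -a y                       # activate the VG that pvscan/vgscan found
lvs -o vg_name,lv_name,lv_size      # identify the recovered LV and its VG
mkdir -p /mnt/rescue
mount -o ro /dev/vg_guest/lv_root /mnt/rescue   # read-only: protect the evidence
rsync -aHAX /mnt/rescue/ /backup/guest-vm/      # copy the data to safe storage
```

Mounting read-only matters here: the goal is a backup first, and any repair experiments second.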
fdisk -l /dev/sdc output:
Code:
[root@cent2 mnt]# fdisk -l /dev/sdc
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdc: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 29186 234431063+ ee GPT
from "parted -l"
Code:
Model: ATA MKNSSDCR240GB-DX (scsi)
Disk /dev/sdc: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1066kB 525MB 524MB ext4
2 525MB 239GB 239GB lvm