LVM Mount Physical Volume/Logical Volume without a working Volume Group
I am attempting to access data on an LVM2 partition that was corrupted by a "pvcreate -ff".
When running pvs, the results look healthy: a physical volume of 238 GB, but with no VG (volume group) that it belongs to.
When the "pvcreate -ff" was performed, lvdisplay still showed a valid logical volume.
I halted further attempts to mount the LVM for fear of damaging the partition further, though the damage was already done out of negligence.
The LVM contains a single unencrypted ext2/3 filesystem on one hard drive running Fedora Core 8. Worst case, I imagine an ext2/3 drive scan and recovery is possible with GetDataBack or similar.
The drive contained the OS and root folders, so the metadata backup of the LVM is inaccessible at the current time.
Initially, I am looking for a way to place the physical/logical volume into a new volume group so I can access the files. I can connect the drive to a working Fedora Core 12 machine that I am looking to copy the drive contents onto.
Can anyone help or point me in the right direction?
I can get printouts of 'pvs', but am unsure what other commands can be run without losing the data.
Without the metadata backup in /etc/lvm, you should pull it from the disk. While some assumptions can be made to manually access the extents using dmsetup and bring the filesystem up, it could cause more headache if we guess wrong. The on-disk LVM2 metadata could be inaccurate too, depending on what all was done. You will need a generally accurate idea of what the structure looked like in order to decide whether or not to trust the information.
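One way to pull it, as a sketch assuming the default LVM2 on-disk layout (label in the first few sectors, text-format metadata area within roughly the first 1 MiB of the PV):

```shell
# Dump the start of the PV partition and pull out the readable text; the
# LVM2 metadata is stored as plain text, so the old VG/LV structure (if it
# survived the pvcreate -ff) should appear in the output. /dev/sda2 and the
# 1 MiB (2048 x 512-byte sectors) read size are assumptions.
dd if=/dev/sda2 bs=512 count=2048 2>/dev/null | strings
```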
Here /dev/sda2 is the partition of the PV. The above should run a little beyond the metadata area, but hopefully gives us the accurate (what we expect) details of what was on there. Unfortunately, without knowing all of the details of what exactly was done with the 'pvcreate -ff', it may have already wiped the previous information. If the metadata resides on another PV on another disk in the system, we might be able to pull it from there for reference. As for what other commands can be run without losing data: any of the read-only commands (pvs, vgs, lvs, {pv,vg,lv}display, ...) should be safe.
Code:
[root]# pvdisplay
  "/dev/sda2" is a new physical volume of "232.69 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda2
  VG Name
  PV Size               232.69 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               q22ACD-UP43-s1Vl-tDEn-9bGR-6sGH-haEPJq
[root]# vgdisplay
  No volume groups found
[root]# lvdisplay
  No volume groups found
OK, from what I can tell it looks like you had LogVol00 which was ~230GB, right?
If you are not so interested in recovering the LV as you are in copying the data to your F12 system, you can try bringing up the LV directly via dmsetup, just to mount it and copy the data off. That way you don't risk further messing with the on-disk LVM metadata.
If my math is correct, based on the info provided this should do it:
Code:
# echo 0 483852288 linear /dev/sda2 384 | dmsetup create LQ00
# mount /dev/mapper/LQ00 /mnt/recoverydir/
where /dev/sda2 is the partition of the PV and /mnt/recoverydir/ is a location of your choosing from which to copy the data off, if it mounts without error.
To remove the device-mapper device after you are done:
Code:
# dmsetup remove LQ00
Hi,
Sorry for digging out this old thread, but I have a similar problem and want to know how you calculated the 483852288 in:
If you have only a single segment for the LV, this is fairly easy. If you have multiple segments, it gets a bit trickier because you'll need to ensure you're stacking them in the correct order and using the right offsets. If you have (or can get) a vgcfgbackup file, feel free to post it in CODE blocks and we can take a look at your particular config. If not, you should be able to dump the same info (minus the nice formatting) as noted above, which could then be pasted in CODE blocks for review.
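As a hedged reconstruction of that 483852288 figure (assuming the default 4 MiB extent size and an LV of 59064 extents; these are inferences from the ~230 GB size mentioned earlier, not values read from the poster's metadata), the sector count is just extents times sectors-per-extent:

```shell
# 483852288 reconstructed under assumed defaults: 4 MiB physical extents
# (8192 x 512-byte sectors each) and an LV spanning 59064 extents (~230.7 GiB).
extent_sectors=$((4 * 1024 * 1024 / 512))  # 8192 sectors per 4 MiB extent
extents=59064                              # assumed Total PE of the old LV
echo $((extents * extent_sectors))         # prints 483852288
```

The 384 in the dmsetup table is the data-area offset (pe_start) in sectors, i.e. where extent 0 begins on the PV; 384 sectors (192 KiB) was a common LVM2 default at the time, though newer tools default to 2048 sectors (1 MiB).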
Code:
$ diff /tmp/old.vg /tmp/new.vg
1c1
< # Generated by LVM2 version 2.02.67(2) (2010-06-04): Wed Dec 11 22:10:21 2013
---
> # Generated by LVM2 version 2.02.67(2) (2010-06-04): Sun Dec 22 20:51:55 2013
6c6
< description = "vgcfgbackup -v -f vgcfg_backup.backup"
---
> description = "Created *after* executing 'vgcfgbackup data'"
9c9
< creation_time = 1386796221 # Wed Dec 11 22:10:21 2013
---
> creation_time = 1387741915 # Sun Dec 22 20:51:55 2013
13c13
< seqno = 9
---
> seqno = 16
68c68
< device = "/dev/sdb" # Hint only
---
> device = "/dev/sdd" # Hint only
Looking at a diff of the two, it looks like it's using the "new" disk /dev/sdd.
Are you having problems assembling the LV? If you've rescued the data from sdb to another disk (sdd); are now using that disk (sdd) as a member of the VG; and your GrandCentralStation LV is assembling correctly, I'm not sure why you'd need to manually build the device using dmsetup directly.
Yes, you are right; sdd was added correctly to the assembly after I did ddrescue and replaced sdb, which was dying. My problem is:
The filesystem stopped working (with sdb), so I thought it was time for an fsck, and I ran
Code:
fsck -yv /dev/data/GrandCentralStation
with the filesystem unmounted.
This was a big, big mistake. It resulted in files that contain only zeros (checked with a hex editor) but have the right sizes, names, permissions, etc.
So I need either the image file or the old sdb with the correct filesystem (ext4) mounted alone, so I can get at the filesystem and try to repair it using the journal and a backup superblock (I'm not sure how to do this correctly right now, either with fsck options I haven't read about yet, or with ext4magic or testdisk).
My system is openSUSE 11.3 with kernel 2.6.34.
If you have any suggestions or questions, you're welcome!
sdb (or its replacement, sdd) isn't a whole filesystem. It is but one of six segments in the data VG that comprise the GrandCentralStation LV which is/was/should-be a complete filesystem. The files of concern may reside entirely, in-part, or not at all on extents that resided on sdb (now sdd).
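For reference, the segment layout is spelled out in the vgcfgbackup/metadata text; a hypothetical single-segment LV entry (names and numbers invented for illustration, not taken from this thread's config) looks roughly like:

```
GrandCentralStation {
        status = ["READ", "WRITE", "VISIBLE"]
        segment_count = 1

        segment1 {
                start_extent = 0       # first extent of the LV
                extent_count = 59064   # hypothetical size

                type = "striped"
                stripe_count = 1       # "striped" with one stripe = linear
                stripes = [
                        "pv0", 0       # PV label and starting extent on that PV
                ]
        }
}
```

A multi-segment LV simply lists segment2, segment3, ... with their own stripes entries; that ordering is what has to be preserved when assembling the device by hand.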
Any utilities you'd use to carve out whatever data may reside on the dm device you'd create using just sdb should work equally effectively directly on the block device (sdb).
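For the backup-superblock route mentioned above, a hedged sketch (the device name and the 32768 location are assumptions, not values read from this filesystem; -n keeps both commands read-only):

```shell
# List where a default mkfs.ext4 would have placed backup superblocks;
# -n prints the layout without writing anything to the device.
mke2fs -n /dev/data/GrandCentralStation

# Read-only e2fsck pass using an alternate superblock (32768 is a common
# backup location for 4 KiB-block filesystems; pick one from the list above).
fsck.ext4 -n -b 32768 /dev/data/GrandCentralStation
```

Only once a read-only pass looks sane would it make sense to repeat without -n and let e2fsck actually write repairs.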
Off-hand, I suggest starting a new thread specific to your problem and leverage the LQ community at-large. Be sure to detail what happened, what steps you've taken, and the current situation (much of which can be copied directly from your post above). Lots of very experienced members willing to help and a thread tailored to your situation will likely gain visibility and assistance greater than continuing in this thread.