Problem solved with no data loss....
Thanks tommylovell for helping me out in those unknown territories....
Well, I'll describe the steps I took so that they may be of some help to people with a similar problem.
It seems that the problem was created by a corrupted metadata file (as tommylovell pointed out) and also by a corrupted partition table....
Well, after experimenting with the LVM backup and archive files...... it turned out they were completely screwed.
That meant I had no metadata file with my current valid LVM settings (partly because my root partition was part of the LVs that crashed); I couldn't even get the initial error I discussed here (missing physical volume).
So I needed a metadata file to get me started....
After spending a little time on google, I found that if you don't have a backup, you can re-create the equivalent of an LVM2 backup file by examining the LVM2 header on the disk and editing out the binary stuff.
LVM2 typically keeps copies of the metadata configuration at the beginning of the disk, in the first 255 sectors following the partition table in sector 1 of the disk. See /etc/lvm/lvm.conf and man lvm.conf for more details.
Because each disk sector is typically 512 bytes, reading this area yields a file of roughly 128KB. LVM2 may have stored several different text representations of its configuration in that first 128KB of the partition.
Extract these to an ordinary file as follows:
Code:
dd if=/dev/sdb3 bs=512 count=255 skip=1 of=/etc/lvm/backup/system
then edit the file:
Code:
vi /etc/lvm/backup/system
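Before editing, a quick way to preview just the plain-text entries buried in the binary (an optional shortcut using the standard strings utility, not something the recipe requires):
Code:
strings -n 8 /etc/lvm/backup/system | less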
You will see some binary gibberish, but look for the bits of plain text. LVM treats this metadata area as a ring buffer, so there may be multiple configuration entries on the disk. On my disk, the first entry had only the details for the physical volume and volume group, and the next entry had the logical volume information.

Look for the block of text with the most recent timestamp, and edit out everything except the block of plain text that contains the LVM declarations. This block holds the volume group declarations, including the logical volume information. Fix up the physical device declarations if needed. If in doubt, look at an existing backup file such as /etc/lvm/backup/VolGroup00 to see what should be there. On disk, the text entries are not as nicely formatted, and are in a different order, than in a normal backup file, but they will do.

Save the trimmed configuration (here that is /etc/lvm/backup/system, the file we are already editing). It should then look something like this:
Code:
system {
	id = "xQZqTG-V4wn-DLeQ-bJ0J-GEHB-4teF-A4PPBv"
	seqno = 1
	status = ["RESIZEABLE", "READ", "WRITE"]
	extent_size = 65536
	max_lv = 0
	max_pv = 0
	physical_volumes {
		pv0 {
			id = "tRACEy-cstP-kk18-zQFZ-ErG5-QAIV-YqHItA"
			device = "/dev/md2"
			status = ["ALLOCATABLE"]
			pe_start = 384
			pe_count = 2365
		}
	}
	# your logical_volumes { ... } section goes here
}
# Generated by LVM2: Sun Feb 5 22:57:19 2006
Once you have a volume group configuration file, restore the volume group configuration with vgcfgrestore, as shown:
Code:
[root@recoverybox ~]# vgcfgrestore -f /etc/lvm/backup/system system
[root@recoverybox ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "system" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
[root@recoverybox ~]# pvscan
PV /dev/sdb3 VG system lvm2 [73.91 GB / 32.00 MB free]
Total: 2 [92.81 GB] / in use: 2 [92.81 GB] / in no VG: 0 [0 ]
[root@recoverybox ~]# vgchange -a y system
1 logical volume(s) in volume group "system" now active
[root@recoverybox ~]# lvscan
ACTIVE '/dev/system/SUSE11.0root' [73.88 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol00' [18.38 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [512.00 MB] inherit
Then back up the recovered volume group configuration:
Code:
[root@recoverybox ~]# vgcfgbackup
Volume group "system" successfully backed up.
[root@recoverybox ~]# ls -l /etc/lvm/backup/
total 24
-rw------- 1 root root 1350 Feb 10 09:09 system
At this point, you can now mount the volumes and try to get your data back....
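For example, something like this (mounting read-only to be safe until you trust the volume; /mnt/recovery is just an arbitrary mount point of mine, and the LV path is the one from my setup):
Code:
mkdir -p /mnt/recovery
mount -o ro /dev/system/SUSE11.0root /mnt/recovery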
But this was not the end of my problems.... all this just brought me back to my initial problem of the missing physical volume.....
something like this:
Code:
Reading all physical volumes. This may take a while....
Couldn't find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldn't find all physical volumes for volume group system
Couldn't find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldn't find all physical volumes for volume group system
Volume group "system" not found
Could not find /dev/system/SUSE11.0root.
Want me to fall back to /dev/system/SUSE11.0root? (Y/n)
Now, on checking the partition tables, I found that in fact one of my physical volumes was shown as free space..... that is, /dev/sda7 (in my case) was missing.
Since I don't like fdisk too much, I just used the Windows disk manager to create an NTFS partition, with the plan to convert it to ext3 later in Linux.....
Having created the NTFS partition, I now had a /dev/sda7 partition again.
On booting back into Linux, I was surprised to see that the NTFS partition I had just created was taken as an LVM physical volume.
(I have no idea why that happened.... it was not even a Linux partition.... hopefully somebody can explain that. My own guess is that LVM scans for its label in the first sectors of the partition rather than looking at the partition type, and the quick NTFS format didn't overwrite that old label, but I'm not sure.)
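One way to check whether the old LVM label actually survived on such a partition (just a diagnostic idea, not a step from my recovery; LABELONE is the magic string LVM2 writes into its label sector, normally the second sector of the PV):
Code:
# dump the first 4 sectors, where LVM2 keeps its label, and look for the magic
dd if=/dev/sda7 bs=512 count=4 2>/dev/null | strings | grep LABELONE
# or let LVM itself check the physical volume metadata
pvck /dev/sda7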
Anyway, all my data is fine and I am the happiest man in the world.
Well, if the odd acceptance of the NTFS partition had not taken place, the way to go would have been to use pvcreate to restore the metadata:
Code:
pvcreate --uuid "<UUID of missing partition>" --restorefile /etc/lvm/backup/system <PhysicalVolume>
pvcreate only overwrites the LVM metadata areas on disk and doesn't touch the data areas (the logical volumes).
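If you do have to go that route, the LVM HOWTO then has you restore the volume group metadata onto the re-created PV and reactivate it, i.e. repeat the steps from above (I didn't need to run this myself, since the partition trick worked out for me):
Code:
vgcfgrestore -f /etc/lvm/backup/system system
vgchange -a y system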
References:
http://www.linuxjournal.com/article/8874
http://tldp.org/HOWTO/LVM-HOWTO/recovermetadata.html
Can somebody tell me why the NTFS partition I created was accepted by LVM??? Should I change it back to a Linux partition at a later stage?? The data is intact as far as I can see......
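(In case I do change the type back later: as far as I know, changing just the partition type ID doesn't touch the data inside the partition, so something like this with fdisk should be safe, though I haven't tried it yet:)
Code:
fdisk /dev/sda
# inside fdisk: press t, select partition 7, enter type 8e (Linux LVM), then w to write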