Linux - Desktop
Problem with LVM partitions

firewiz87 04-30-2009 04:38 AM

Problem with LVM partitions
I have been using LVM partitions on my openSUSE 11.0 machine for a while now.... But one fine day, when booting the system, all I get is the following error:


Reading all physical volumes. This may take a while...
Couldn't find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldn't find all physical volumes for volume group system
Couldn't find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldn't find all physical volumes for volume group system
Volume group "system" not found
Could not find /dev/system/SUSE11.0root.
Want me to fall back to /dev/system/SUSE11.0root? (Y/n)

Answering Y to the above question gives the same error and exits to a shell.

My knowledge is limited to setting up LVM....
Any idea why the error occurs? I am pretty sure it's not an HDD error...
There is more than 200 GB of data....
Is there any way to recover without losing data?

Thanks in advance

Dudydoo 04-30-2009 06:24 AM

Looks to me like the drive isn't being seen on boot up. Does the BIOS see it?

tommylovell 04-30-2009 07:46 AM

I don't know SuSE, but in Red Hat and Fedora all of the LVM information is kept in /etc/lvm/.

In mine, I have:


[root@athlonz lvm]# ll
total 28
drwx------ 2 root root  4096 2009-04-07 23:04 archive
drwx------ 2 root root  4096 2009-04-07 23:04 backup
drwx------ 2 root root  4096 2009-04-07 23:04 cache
-rw-r--r-- 1 root root 15911 2008-04-02 23:24 lvm.conf

My lvm.conf (default contents) has:

devices {
    dir = "/dev"
    scan = [ "/dev" ]
    preferred_names = [ ]
    filter = [ "a/.*/" ]
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""
    write_cache_state = 1
    sysfs_scan = 1
    md_component_detection = 1
    ignore_suspended_devices = 0
}

log {
    verbose = 0
    syslog = 1
    overwrite = 0
    level = 0
    indent = 1
    command_names = 0
    prefix = "  "
}

backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"
    retain_min = 10
    retain_days = 30
}
(I've removed other sections and all of the comments.)

This file tells LVM which devices to scan for physical volumes, tells it to create backup files, etc.

My system has archive and backup files:

[root@athlonz lvm]# ll archive/
total 24
-rw------- 1 root root 1442 2009-03-10 07:21
-rw------- 1 root root 1122 2009-04-07 00:49
-rw------- 1 root root 1151 2009-04-07 00:50
-rw------- 1 root root 1450 2009-04-07 22:23
-rw------- 1 root root 1446 2009-04-07 23:02
-rw------- 1 root root 1444 2009-04-07 23:04
[root@athlonz lvm]# ll backup/
total 8
-rw------- 1 root root 1441 2009-03-10 07:21 vgz00
-rw------- 1 root root 1743 2009-04-07 23:04 vgz01
[root@athlonz lvm]# ll cache/
total 0

I would suggest looking to see if you have /etc/lvm/backup/system or /etc/lvm/archive/system_nnnnn.vg
files, and if so, look within those files to see if the UUID is listed in one of the entries.
That'll give you an idea of the device (partition) in question.
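A recursive grep over both directories will name any file that mentions the UUID in one go. Here's a minimal sketch, demonstrated on a throwaway directory standing in for /etc/lvm (the directory layout and file contents below are synthetic stand-ins, not taken from the poster's system):

```shell
#!/bin/sh
# Demo: find which LVM backup/archive file mentions a given UUID.
# The temp directory is a synthetic stand-in for /etc/lvm; on a real
# system you would grep /etc/lvm/backup and /etc/lvm/archive directly.
uuid='iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
tmp=$(mktemp -d)
mkdir -p "$tmp/backup" "$tmp/archive"
printf 'pv0 {\nid = "%s"\ndevice = "/dev/sda7"\n}\n' "$uuid" > "$tmp/backup/system"
# -r: recurse into both directories, -l: print only matching file names
hit=$(grep -rl "$uuid" "$tmp/backup" "$tmp/archive")
echo "$hit"
rm -r "$tmp"
```

On the real system the whole check collapses to one command: grep -rl '<UUID>' /etc/lvm/backup /etc/lvm/archive.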

You could try:

'pvscan -u -vvv | less' to show you what devices it is scanning, etc.

'vgscan -vvv | less' to show you what it is doing to assemble the volume groups.

Have you tried a 'vgchange -ay' to see what it does?

'vgchange --test --partial -ay' is safe to do, too.

(If you are in a shell, you may need to precede each command with lvm, e.g. 'lvm pvscan -u -vvv'.)

tommylovell 04-30-2009 11:34 AM

Also, it didn't occur to me to ask. Did you have any hardware changes? I assume no, since you didn't mention any.

firewiz87 05-01-2009 12:32 AM


Originally Posted by tommylovell (Post 3525961)
Also, it didn't occur to me to ask. Did you have any hardware changes?

There were no hardware changes... the problem occurred without any apparent reason, which puzzles me....

I have two hard disks, one on which Windows is installed (a few partitions from that disk are part of LVM) and the other on which Linux is installed....

Both hard disks can be booted, which suggests the HDDs themselves are OK....

firewiz87 05-01-2009 01:39 AM

Now it seems that two of my logical volumes, root and data, are missing...
How could that happen?
This is bad.... all my data is missing now...

None of the files in archive or backup has any entry corresponding to the missing LVs root and data.... this is a nightmare!

Is there any way to recover the data?

tommylovell 05-01-2009 07:43 AM

If the drive seems healthy physically, then there is some corruption of LVM's metadata.

You should do a "vgscan -vvv 2>&1 | less"
(sorry, my earlier advice was missing the redirect of stderr, 2>&1).

Make sure that all of your devices and partitions that are part of LVM show up there.

If you can't find what is missing and correct it, as a last resort you can activate the volume groups with
missing physical volumes by doing "vgchange --partial -ay", and then, of course, mount them with "mount -a".

You may get lucky and if all the extents of your logical volumes are available, you'll be able to access the
data and move it elsewhere, back it up, or whatever.

But correcting the problem would be preferable...

tommylovell 05-01-2009 07:57 AM

Another idea. This may be easier. Try:

vgcfgbackup system -f system_vm_info

grep iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh system_vm_info

If grep finds the UUID, then

less system_vm_info

and search for the UUID using less's '/' search command.


firewiz87 05-01-2009 01:26 PM

Well, the root filesystem was also part of the LVM that crashed..... so I have nothing to work on.... no config files or anything.... had it been a normal partition and not LVM, things would have been a lot better.
Are there any LVM recovery tools?

Since I don't have the root file system, is reinstallation the only way? But that would make me lose all data.... I am hopeful that the data is physically intact on the HDD... so I would like to reinstall only if there is no other way.

tommylovell 05-01-2009 02:10 PM

You may be able to come up in rescue mode. I'm not sure how it's done in SuSE.

I've done it in Red Hat and Fedora and been able to fix LVM problems (like removing a volume
physically before removing it from LVM...). In Fedora, you boot the installation CD and do a
rescue instead of an install. Since rescue uses busybox instead of the normal commands, you
do an 'lvm pvscan', 'lvm vgscan', 'lvm lvdisplay', 'lvm vgchange -ay --partial' rather than
a 'pvscan', 'vgscan', etc.

If you are unfamiliar with rescue, put in another post to get some advice from SuSE-aware folks.

firewiz87 05-02-2009 02:54 AM

Problem solved with no data loss.... :D
Thanks tommylovell for helping me out in those unknown territories.... :)

Well, I'll describe the steps I took so that they may be of some help to people with a similar problem.

It seems that the problem was created by a corrupted metadata file (as tommylovell pointed out) and also by a corrupted partition table....

Well, after experimenting with the LVM backup and archive files...... it was completely screwed :(
It meant that I had no metadata file with my current valid LVM settings (partly because my root partition was part of the LVs that crashed); I couldn't even get back to the initial error I discussed here (missing physical volume).

So I needed a metadata file to get me started....

After spending a little time on Google, I found that if you don't have a backup, you can re-create the equivalent of an LVM2 backup file by examining the LVM2 header on the disk and editing out the binary stuff.

LVM2 typically keeps copies of the metadata configuration at the beginning of the disk, in the first 255 sectors following the partition table in sector 1 of the disk. See /etc/lvm/lvm.conf and man lvm.conf for more details.

Because each disk sector is typically 512 bytes, reading this area will yield a file of roughly 128KB. LVM2 may have stored several different text representations of the configuration in this first 128KB of the partition.

Extract these to an ordinary file as follows:

dd if=/dev/sdb3 bs=512 count=255 skip=1 of=/etc/lvm/backup/system
then edit the file:

vi /etc/lvm/backup/system
You will see some binary gibberish, but look for the bits of plain text. LVM treats this metadata area as a ring buffer, so there may be multiple configuration entries on the disk. On my disk, the first entry had only the details for the physical volume and volume group, and the next entry had the logical volume information.

Look for the block of text with the most recent timestamp, and edit out everything except the block of plain text that contains the LVM declarations. This holds the volume group declarations, including the logical volume information. Fix up the physical device declarations if needed. If in doubt, look at an existing file under /etc/lvm/backup/ to see what should be there. On disk, the text entries are not as nicely formatted and are in a different order than in the normal backup file, but they will do.

Save the trimmed configuration. The file should then look as follows:


system {
    id = "xQZqTG-V4wn-DLeQ-bJ0J-GEHB-4teF-A4PPBv"
    seqno = 1
    status = ["RESIZEABLE", "READ", "WRITE"]
    extent_size = 65536
    max_lv = 0
    max_pv = 0

    physical_volumes {

        pv0 {
            id = "tRACEy-cstP-kk18-zQFZ-ErG5-QAIV-YqHItA"
            device = "/dev/md2"

            status = ["ALLOCATABLE"]
            pe_start = 384
            pe_count = 2365
        }
    }
}

# Generated by LVM2: Sun Feb  5 22:57:19 2006
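Before touching a real disk, the dd geometry and the text-fishing step above can be dry-run on a scratch file. Here's a minimal sketch; the "disk" image, the planted metadata text, and the seqno values are all synthetic stand-ins (on a real system the dd input would be the LVM partition, e.g. /dev/sdb3):

```shell
#!/bin/sh
# Dry run of the header extraction on a synthetic disk image.
tmp=$(mktemp -d)
# Fake 1MB disk, zero-filled.
head -c $((512 * 2048)) /dev/zero > "$tmp/disk"
# Plant two text metadata generations inside the metadata area, the
# way LVM's ring buffer can hold several old copies of the config.
printf 'system {\nseqno = 1\n}\n' | dd of="$tmp/disk" bs=512 seek=3 conv=notrunc 2>/dev/null
printf 'system {\nseqno = 2\n}\n' | dd of="$tmp/disk" bs=512 seek=9 conv=notrunc 2>/dev/null
# Same geometry as in the post: skip the partition table sector,
# then read the 255-sector metadata area (255 * 512 = 130560 bytes).
dd if="$tmp/disk" bs=512 skip=1 count=255 of="$tmp/hdr" 2>/dev/null
size=$(wc -c < "$tmp/hdr")
# grep -a treats the binary dump as text; in practice you would open
# the dump in vi and delete the binary junk around the entries by hand,
# keeping the entry with the highest seqno (the most recent one).
entries=$(grep -a -c 'seqno = ' "$tmp/hdr")
rm -r "$tmp"
echo "header: $size bytes, $entries text entries"
```

This confirms the skip/count arithmetic (the extracted header is 255 sectors, just under 128KB) and that plain-text entries survive inside the binary area, which is exactly what the vi editing step relies on.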

Once you have a volume group configuration file, migrate the volume group to this system with vgcfgrestore, as shown:


[root@recoverybox ~]# vgcfgrestore -f system system
[root@recoverybox ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "system" using metadata type lvm2
  Found volume group "system" using metadata type lvm2
[root@recoverybox ~]# pvscan
  PV /dev/sdb3    VG VolGroup01  lvm2 [73.91 GB / 32.00 MB free]
  Total: 2 [92.81 GB] / in use: 2 [92.81 GB] / in no VG: 0 [0  ]
[root@recoverybox ~]# vgchange system -a y
  1 logical volume(s) in volume group "system" now active
[root@recoverybox ~]# lvscan
  ACTIVE            '/dev/VolGroup01/LogVol00' [73.88 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [18.38 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [512.00 MB] inherit

Backing Up Recovered Volume Group Configuration


[root@teapot-new ~]# vgcfgbackup
Volume group "system" successfully backed up.
[root@teapot-new ~]# ls -l /etc/lvm/backup/
total 24
-rw-------  1 root root 1350 Feb 10 09:09 system

At this point, you can mount the volumes and try to get your data back....

But this was not the end of my problems.... all this brought me back to my initial problem of a missing physical volume.....
something like this:


Reading all physical volumes. This may take a while...
Couldn't find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldn't find all physical volumes for volume group system
Couldn't find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldn't find all physical volumes for volume group system
Volume group "system" not found
Could not find /dev/system/SUSE11.0root.
Want me to fall back to /dev/system/SUSE11.0root? (Y/n)

Now, on checking the partition tables, I found that in fact one of my physical volumes was shown as free space..... that is, /dev/sda7 (in my case) was missing.

Since I don't like fdisk too much, I just used the Windows disk manager to create an NTFS partition, with the plan to convert it to ext3 later in Linux.....
Having created the NTFS partition, I now had a /dev/sda7 partition again.

On booting back into Linux, I was surprised to see that the NTFS partition I had just created was taken as an LVM physical volume
(I have no idea why that happened.... it was not even a Linux partition.... hopefully somebody could explain that).

Anyway, all my data is fine and I am the happiest man in the world ;)

Well, if the odd acceptance of the NTFS partition hadn't taken place, the way to restore the metadata would have been pvcreate:


pvcreate --uuid "<UUID of missing partition>" --restorefile /etc/lvm/backup/<VolumeGroupName> <PhysicalVolume>
pvcreate only overwrites the LVM metadata areas on disk and doesn't touch the data areas (the logical volumes).


Can somebody tell me why the NTFS partition I created was accepted by LVM? Should I change it back to a Linux partition type at a later stage? The data is intact as far as I can see......
