LinuxQuestions.org
Old 04-30-2009, 05:38 AM   #1
firewiz87
Member
 
Registered: Jan 2006
Distribution: OpenSUSE 11.2, OpenSUSE 11.3,Arch
Posts: 240

Rep: Reputation: 37
Problem with LVM partitions


I have been using LVM partitions on my OpenSUSE 11.0 machine for a while now. But one fine day, while booting the system, all I got was the following error:

Code:
Reading all physical volumes, This may take a while....
Couldnt find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldnt find all physical volumes for volume group system
Couldnt find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldnt find all physical volumes for volume group system
Volume group "system" not found
Could not find /dev/system/SUSE11.0root.
want me to fall back to /dev/system/SUSE11.0root?(Y/n)
Answering Y to the above question gives the same error and exits to a shell.

My knowledge is limited to setting up LVM.
Any idea why this error occurs? I am pretty sure it's not an HDD error.
There is more than 200 GB of data on the volumes.
Is there any way to recover without losing data?

Thanks in advance.
 
Old 04-30-2009, 07:24 AM   #2
Dudydoo
Member
 
Registered: Sep 2003
Location: UK
Distribution: I use 'em all ;-)
Posts: 275

Rep: Reputation: 38
Looks to me like the drive isn't being seen on boot up. Does the BIOS see it?
 
Old 04-30-2009, 08:46 AM   #3
tommylovell
Member
 
Registered: Nov 2005
Distribution: Fedora, Redhat
Posts: 372

Rep: Reputation: 101
I don't know SuSE, but in Redhat and Fedora all of the LVM information is kept in /etc/lvm/.

In mine, I have:

Code:
[root@athlonz lvm]# ll
total 28
drwx------ 2 root root  4096 2009-04-07 23:04 archive
drwx------ 2 root root  4096 2009-04-07 23:04 backup
drwx------ 2 root root  4096 2009-04-07 23:04 cache
-rw-r--r-- 1 root root 15911 2008-04-02 23:24 lvm.conf
My lvm.conf (default contents) has:
Code:
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    preferred_names = [ ]
    filter = [ "a/.*/" ]
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""
    write_cache_state = 1
    sysfs_scan = 1
    md_component_detection = 1
    ignore_suspended_devices = 0
}

log {
    verbose = 0
    syslog = 1
    overwrite = 0
    level = 0
    indent = 1
    command_names = 0
    prefix = "  "
}

backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"    
    retain_min = 10
    retain_days = 30
}

(I've removed other sections and all of the comments.)
This file tells LVM what volumes to scan for physical volumes, tells it to create backup files, etc.

My system has archive and backup files:
Code:
[root@athlonz lvm]# ll archive/
total 24
-rw------- 1 root root 1442 2009-03-10 07:21 vgz00_00000.vg
-rw------- 1 root root 1122 2009-04-07 00:49 vgz01_00000.vg
-rw------- 1 root root 1151 2009-04-07 00:50 vgz01_00001.vg
-rw------- 1 root root 1450 2009-04-07 22:23 vgz01_00002.vg
-rw------- 1 root root 1446 2009-04-07 23:02 vgz01_00003.vg
-rw------- 1 root root 1444 2009-04-07 23:04 vgz01_00004.vg
[root@athlonz lvm]# ll backup/
total 8
-rw------- 1 root root 1441 2009-03-10 07:21 vgz00
-rw------- 1 root root 1743 2009-04-07 23:04 vgz01
[root@athlonz lvm]# ll cache/
total 0
I would suggest looking to see if you have /etc/lvm/backup/system or /etc/lvm/archive/system_nnnnn.vg
files, and if so, looking within those files to see whether the UUID is listed in one of the entries.
That will give you an idea of which device (partition) is in question.
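That search can be scripted; a minimal sketch, demonstrated here on a scratch copy of the directory layout rather than the real /etc/lvm (the planted pv0 entry and device name are fabricated for illustration; the UUID is the one from the error above):

```shell
# Look for the missing PV's UUID in LVM's metadata backups.
# Demonstrated on a scratch copy; on the real system you would grep
# /etc/lvm/backup and /etc/lvm/archive directly.
LVM_DIR=$(mktemp -d)
mkdir -p "$LVM_DIR/backup" "$LVM_DIR/archive"
printf 'pv0 {\nid = "iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh"\ndevice = "/dev/sda7"\n}\n' \
    > "$LVM_DIR/backup/system"

UUID="iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh"
# -r: recurse into both directories; -l: print only the names of files
# that mention the UUID.
grep -rl "$UUID" "$LVM_DIR/backup" "$LVM_DIR/archive"
```

Each file grep names is a metadata backup that records the missing device; its pvN entry shows which partition carried that UUID.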

You could try:

'pvscan -u -vvv | less' to show you what devices it is scanning, etc.

'vgscan -vvv | less' to show you what it is doing to assemble the volume groups.

Have you tried a 'vgchange -ay' to see what it does?

'vgchange --test --partial -ay' is safe to do, too.

(If you are in a shell, you may need to precede each command with lvm, e.g. 'lvm pvscan -u -vvv'.)
 
Old 04-30-2009, 12:34 PM   #4
tommylovell
Also, it didn't occur to me to ask. Did you have any hardware changes? I assume no, since you didn't mention any.
 
Old 05-01-2009, 01:32 AM   #5
firewiz87
Original Poster
Quote:
Originally Posted by tommylovell View Post
Also, it didn't occur to me to ask. Did you have any hardware changes?
I have made no hardware changes... the problem occurred without any apparent reason, which puzzles me.

I have two hard disks: one on which Windows is installed (a few partitions from that disk are part of the LVM), and another on which Linux is installed.

Both hard disks still boot, which suggests the drives themselves are OK.
 
Old 05-01-2009, 02:39 AM   #6
firewiz87
Original Poster
Now it seems that two of my logical volumes, root and data, are missing.
How could that happen?
This is bad... all my data is missing now too.

None of the files in archive or backup has any entry corresponding to the missing LVs root and data... this is a nightmare!

Is there any way to recover the data?

Last edited by firewiz87; 05-01-2009 at 02:48 AM.
 
Old 05-01-2009, 08:43 AM   #7
tommylovell
If the drive seems healthy physically, then there is some corruption of LVM's metadata.

You should do a "vgscan -vvv 2>&1 | less"
(sorry, my earlier advice was missing the redirect of stderr, 2>&1).

Make sure that all of your devices and partitions that are part of LVM show up there.

If you can't find what is missing and correct it, as a last resort you can activate the volume groups with
missing physical volumes by doing "vgchange --partial -ay", and then, of course, mount them with "mount -a".

You may get lucky and if all the extents of your logical volumes are available, you'll be able to access the
data and move it elsewhere, back it up, or whatever.

But correcting the problem would be preferable...
 
Old 05-01-2009, 08:57 AM   #8
tommylovell
Another idea. This may be easier. Try:

vgcfgbackup system -f system_vm_info

grep iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh system_vm_info

If grep finds the UUID, then

less system_vm_info

and search for the UUID with

/iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh
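The manual '/UUID' search in less can also be done with a few lines of awk that map the UUID straight to its device; a sketch on a fabricated dump file (the pv0 entry and the /dev/sda7 device name are illustrative, not from the thread's real metadata):

```shell
# Fabricated stand-in for the 'vgcfgbackup system -f system_vm_info' output.
cat > system_vm_info <<'EOF'
system {
    physical_volumes {
        pv0 {
            id = "iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh"
            device = "/dev/sda7"
        }
    }
}
EOF

# Remember whether the most recent 'id = ...' line carried the wanted UUID;
# when it did, the next 'device = ...' line names the missing partition.
awk -v uuid="iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh" '
    /id =/              { found = index($0, uuid) > 0 }
    /device =/ && found { print $3 }
' system_vm_info
```

On a real dump the same two-pattern awk works unchanged, since vgcfgbackup always writes the id line of a pvN stanza before its device line.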
 
Old 05-01-2009, 02:26 PM   #9
firewiz87
Original Poster
Well, the root filesystem was also part of the LVM that crashed, so I have nothing to work with: no config files or anything. Had it been a normal partition and not LVM, things would have been a lot better.
Are there any LVM recovery tools?

Since I don't have the root file system, is reinstallation the only way? But that would make me lose all my data. I am hopeful that the data is still physically intact on the HDD, so I would like to reinstall only if there is no other way.
 
Old 05-01-2009, 03:10 PM   #10
tommylovell
You may be able to come up in rescue mode. I'm not sure how it's done in SuSE.

I've done it in Redhat and Fedora and been able to fix LVM problems (like removing a volume
physically before removing it from LVM...). In Fedora, you boot the installation CD and do a
rescue instead of an install. Since rescue uses busybox instead of the normal commands, you
do an 'lvm pvscan', 'lvm vgscan', 'lvm lvdisplay', 'lvm vgchange -ay --partial' rather than
a 'pvscan', 'vgscan', etc.

If you are unfamiliar with rescue mode, put in another post to get some advice from SuSE-aware folks.
 
Old 05-02-2009, 03:54 AM   #11
firewiz87
Original Poster
Problem solved, with no data loss!
Thanks, tommylovell, for helping me out in those unknown territories.

I'll describe the steps I took, so they may be of some help to people with a similar problem.

It seems the problem was caused by corrupted metadata (as tommylovell pointed out) and also by a corrupted partition table.

After experimenting with the LVM backup and archive files... they were completely screwed. That meant I had no metadata file with my current valid LVM settings (partly because my root partition was part of the LVs that crashed); I couldn't even get back to the initial error I described here (missing physical volume).

So I needed a metadata file to get me started.

After spending a little time on google, i found that if you don't have a backup, you can re-create the equivalent of an LVM2 backup file by examining the LVM2 header on the disk and editing out the binary stuff.

LVM2 typically keeps copies of the metadata configuration at the beginning of the disk, in the first 255 sectors following the partition table in sector 1 of the disk. See /etc/lvm/lvm.conf and man lvm.conf for more details.

Because each disk sector is typically 512 bytes, reading this area yields a file of roughly 128KB (255 × 512 bytes). LVM2 may have stored several different text representations of its configuration in this first 128KB of the partition.

Extract these to an ordinary file as follows:
Code:
dd if=/dev/sdb3 bs=512 count=255 skip=1 of=/etc/lvm/backup/system
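As an aside, strings(1) can strip most of the binary noise before you start editing; a minimal sketch, demonstrated on a synthetic 255-sector image rather than a real partition (the planted metadata text is fabricated for illustration; a real run would read the dd output above instead):

```shell
# Sketch: pull the readable LVM metadata out of the binary header area.
# A synthetic 255-sector (~128 KB) image stands in for the real dd output.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=512 count=255 2>/dev/null
# Plant a fake metadata entry amid the zeroes, roughly as LVM's on-disk
# ring buffer would hold it (fabricated content for illustration only).
printf 'system {\nid = "xQZqTG-V4wn-DLeQ-bJ0J-GEHB-4teF-A4PPBv"\nseqno = 1\n}\n' |
    dd of="$IMG" bs=512 seek=8 conv=notrunc 2>/dev/null

# strings keeps runs of printable text and drops the binary noise,
# leaving only the candidate configuration entries to sift through.
strings "$IMG" > recovered_metadata.txt
```

You still have to pick the entry with the newest timestamp by hand, but the file you edit is plain text instead of binary gibberish.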
then edit the file:
Code:
vi /etc/lvm/backup/system
You will see some binary gibberish, but look for the bits of plain text. LVM treats this metadata area as a ring buffer, so there may be multiple configuration entries on the disk. On my disk, the first entry had only the details for the physical volume and volume group, and the next entry had the logical volume information.

Look for the block of text with the most recent timestamp, and edit out everything except the plain-text block containing the LVM declarations: the volume group declaration, including the logical volume information. Fix up the physical device declarations if needed. If in doubt, look at an existing file under /etc/lvm/backup/ to see what should be there. On disk, the text entries are not as nicely formatted, and are in a different order, than in a normal backup file, but they will do. Save the trimmed configuration under the volume group's name (in my case, "system"). The file should then look as follows:

Code:
system {
id = "xQZqTG-V4wn-DLeQ-bJ0J-GEHB-4teF-A4PPBv"
seqno = 1
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0

physical_volumes {

pv0 {
id = "tRACEy-cstP-kk18-zQFZ-ErG5-QAIV-YqHItA"
device = "/dev/md2"

status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 2365
}
}
}

# Generated by LVM2: Sun Feb  5 22:57:19 2006
Once you have a volume group configuration file, restore the volume group metadata from it with vgcfgrestore, as shown:

Code:
[root@recoverybox ~]# vgcfgrestore -f system system
[root@recoverybox ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "system" using metadata type lvm2
  Found volume group "system" using metadata type lvm2
[root@recoverybox ~]# pvscan
  PV /dev/sdb3    VG VolGroup01   lvm2 [73.91 GB / 32.00 MB free]
  Total: 2 [92.81 GB] / in use: 2 [92.81 GB] / in no VG: 0 [0   ]
[root@recoverybox ~]# vgchange system -a y
  1 logical volume(s) in volume group "system" now active
[root@recoverybox ~]# lvscan
  ACTIVE            '/dev/VolGroup01/LogVol00' [73.88 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [18.38 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [512.00 MB] inherit

Backing Up Recovered Volume Group Configuration

Code:
[root@teapot-new ~]# vgcfgbackup
Volume group "system" successfully backed up.
[root@teapot-new ~]# ls -l /etc/lvm/backup/
total 24
-rw-------  1 root root 1350 Feb 10 09:09 system
At this point, you can mount the volumes and try to get your data back.

But this was not the end of my problems... all of this brought me back to my initial problem of the missing physical volume,
something like this:

Code:
Reading all physical volumes, This may take a while....
Couldnt find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldnt find all physical volumes for volume group system
Couldnt find device with uuid 'iUDlO-VLkW-4fYN-u3cX-C0zR-lw4f-c7b8Eh'
Couldnt find all physical volumes for volume group system
Volume group "system" not found
Could not find /dev/system/SUSE11.0root.
want me to fall back to /dev/system/SUSE11.0root?(Y/n)
Now, on checking the partition tables, I found that one of my physical volumes was in fact shown as free space: /dev/sda7 (in my case) was missing.

Since I don't like fdisk much, I just used the Windows disk manager to create an NTFS partition, with the plan to convert it to ext3 later in Linux.
Having created the NTFS partition, I now had a /dev/sda7 partition again.

On booting back into Linux, I was surprised to see that the NTFS partition I had just created was picked up as an LVM physical volume.
(I have no idea why that happened... it was not even a Linux partition type... hopefully somebody can explain that.)

Anyway, all my data is fine and I am the happiest man in the world.

Had the odd acceptance of the NTFS partition not taken place, the way to restore the metadata would have been pvcreate:

Code:
pvcreate --uuid "<UUID of missing partition>" --restorefile /etc/lvm/backup/backup <PhysicalVolume>
pvcreate only overwrites the LVM metadata areas on disk and doesn't touch the data areas (the logical volumes).

References:
http://www.linuxjournal.com/article/8874
http://tldp.org/HOWTO/LVM-HOWTO/recovermetadata.html

Can somebody tell me why the NTFS partition I created was accepted by LVM? Should I change it back to a Linux partition type at a later stage? The data is intact, as far as I can see.

Last edited by firewiz87; 05-02-2009 at 04:06 AM.
 
  



