LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   Broken LVM Ubuntu Feisty. Recovery Possible? (https://www.linuxquestions.org/questions/linux-general-1/broken-lvm-ubuntu-feisty-recovery-possible-610838/)

neuralnet 01-03-2008 09:46 AM

Broken LVM Ubuntu Feisty. Recovery Possible?
 
Hi There,

I previously posted my question
here but gave up too easily. I've done some reading and want to tackle this again; there are 10 years of data there that I have to get back :/ All help gratefully received!

Basically, I had 2 mirrored LVM volume groups, one called 'ivor' and one called 'engine' (two 3 x 500 GB SATA arrays), which backed up to each other overnight. I had 2 disks fail simultaneously (drive 3 of each array). After much poking about and futile attempts to get at the data in the meantime, I find the following:

Code:

root@bernards-server:~# vgdisplay
  Couldn't find device with uuid 'McsMZm-0flm-g7Fx-0hBU-8hl8-K6GK-cyJDs5'.
  Couldn't find all physical volumes for volume group engine.
  Couldn't find device with uuid 'McsMZm-0flm-g7Fx-0hBU-8hl8-K6GK-cyJDs5'.
  Couldn't find all physical volumes for volume group engine.
  Volume group "engine" not found
  --- Volume group ---
  VG Name              ivor
  System ID           
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  8
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                0
  Open LV              0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size              931.52 GB
  PE Size              4.00 MB
  Total PE              238468
  Alloc PE / Size      0 / 0 
  Free  PE / Size      238468 / 931.52 GB
  VG UUID              TRGfWh-u5AE-xH7B-n8ko-gij5-1DEQ-QrfauN

but if I run lvdisplay I get:

Code:

root@bernards-server:~# lvdisplay
  Couldn't find device with uuid 'McsMZm-0flm-g7Fx-0hBU-8hl8-K6GK-cyJDs5'.
  Couldn't find all physical volumes for volume group engine.
  Couldn't find device with uuid 'McsMZm-0flm-g7Fx-0hBU-8hl8-K6GK-cyJDs5'.
  Couldn't find all physical volumes for volume group engine.
  Volume group "engine" not found

I didn't set up the arrays, but I REALLY need to get the data back, as the guy that did is liable to lose a bunch of data. I'm an LVM noob, so please be gentle ;) I'm scared that I'm going to lose the data at the next keystroke, so I'd be grateful for some good advice.

The ivor volume group is the right size for an array that has lost a disk (i.e. 1 TB instead of 1.5 TB), so do I just need to recreate the ivor logical volume, after which I should be able to get at the data? If so, how do I do that?

Recovering either 'engine' or 'ivor' should be feasible, as long as I can copy the data off either LVM minus one disk.
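For reference, the state described above can be surveyed read-only with the stock lvm2 reporting tools before anything is touched — a sketch only (pass "echo" to preview the commands, or an empty string as root to actually run them):

```shell
# Read-only survey of the LVM state -- nothing here writes metadata.
# Pass "echo" to preview the commands, or "" (as root) to run them.
lvm_survey() {
  run=$1
  $run pvscan                # scan all disks for PVs; flags missing ones
  $run pvs -o +pv_uuid       # PV table with UUIDs, to match the error message
  $run vgs                   # per-VG summary: ivor should list, engine will not
  $run ls /etc/lvm/backup    # automatic metadata backups, one file per VG
}
lvm_survey echo              # dry preview
```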

Caballero del norte 01-05-2008 10:19 PM

LVM messed up
 
First... condolences.

Second, since you describe yourself as a noobie you may have one door open to you that you haven't tried: fsck

fsck must be run on unmounted media.

Get your Feisty 7.04 "Live CD" disk, insert it into your CD drive and boot up. At some point you will be given an opportunity to get to a command line. On the command line, type

fsck -N

[That -N is very important: it keeps it all a dry run!]

Answer Y to everything... this will take some time, so be patient.

This will "dry run" fsck on your file systems. You will see stuff jumping up on your screen and at the very end there ought to be a message telling you what fsck "would have" done.

If you approve of what it would have done (and this can be dicey, do nothing unless you are sure) type on the command line

fsck

and reanswer all those questions with Y.

The file systems will be repaired (we hope). If you have ANY doubts, ask another Linux user (as in a local Linux User Group) to guide you through.

I have repaired file systems using fsck and I have been pleased with the results. Of course, at the end of all this you will possibly find a corrupt file in the midst of your file system - the culprit that started this whole descent into file system hell. See if you can match it up with what it should be... maybe you can repair it (or delete it).
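The dry-run-then-repair sequence above can be sketched as follows (the device name is only an example — substitute your own unmounted partition; `-y` stands in for answering Y to every prompt by hand):

```shell
# Sketch of the sequence described above. Pass "echo" to preview the
# commands, or "" (as root, on an UNMOUNTED filesystem) to run them.
check_then_repair() {
  run=$1; dev=$2
  $run fsck -N "$dev"    # -N: dry run, show what would be done
  $run fsck -y "$dev"    # -y: answer yes to every repair prompt
}
check_then_repair echo /dev/sda1   # /dev/sda1 is only an example device
```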

I hope this helps you.

neuralnet 01-07-2008 01:30 PM

Thanks !
 
Thanks for the reply :) I was starting to lose hope!

I'll give your suggestion a go and report back. As I said in my first post, I have 2 broken LVMs, so I do at least get 2 goes at it ;)

Cheers.

neuralnet 01-07-2008 05:07 PM

Hmmmm
 
Well, I gave it a go. Not much happening, to be honest. I tried:
Code:

fsck -N
but all I get back is:

Code:

root@bernards-server:~# fsck -N
fsck 1.40-WIP (14-Nov-2006)
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 /dev/hda2

If I specify the device in the path, e.g.

Code:

fsck -N /dev/sda1
I get:

Code:

root@bernards-server:~# fsck -N /dev/sda
fsck 1.40-WIP (14-Nov-2006)
[/sbin/fsck.ext2 (1) -- /dev/sda] fsck.ext2 /dev/sda
root@bernards-server:~# fsck -N /dev/sda1
fsck 1.40-WIP (14-Nov-2006)
[/sbin/fsck.ext2 (1) -- /dev/sda1] fsck.ext2 /dev/sda1

The same goes for all the drives; there are no entries for them in /etc/fstab and they're not mounted.

All the drives come up in gparted as /dev/sda, /dev/sdb, /dev/sdc, etc., but show an unknown filesystem with the 'flags' field set to lvm. Any ideas why fsck can't see them?

Caballero del norte 01-08-2008 08:15 AM

Look into this?
 
Here is some of the documentation for reiserfsck (you can find it by running a search for "fsck" after clicking "System > Help > System documentation"). You may wish to do steps 1 and 2, changing "/dev/hda1" to "/dev/sda" or "/dev/sda1", to see if you get some usable information.

EXAMPLE OF USING
1. You think something may be wrong with a reiserfs partition on /dev/hda1 or you would just like to perform a periodic disk check.
2. Run reiserfsck --check --logfile check.log /dev/hda1. If reiserfsck --check exits with status 0 it means no errors were discovered.
3. If reiserfsck --check exits with status 1 (and reports about fixable corruptions) it means that you should run reiserfsck --fix-fixable --logfile fixable.log /dev/hda1.
4. If reiserfsck --check exits with status 2 (and reports about fatal corruptions) it means that you need to run reiserfsck --rebuild-tree. If reiserfsck --check fails in some way you should also run reiserfsck --rebuild-tree, but we also encourage you to submit this as a bug report.
5. Before running reiserfsck --rebuild-tree, make a backup of the whole partition. Then run reiserfsck --rebuild-tree --logfile rebuild.log /dev/hda1.
6. If the --rebuild-tree step fails or does not recover what you expected, please submit this as a bug report. Try to provide as much information as possible and we will try to help solve the problem.

EXIT CODES
reiserfsck uses the following exit codes:
0 - No errors.
1 - Errors found; reiserfsck --fix-fixable needs to be launched.
2 - Errors found; reiserfsck --rebuild-tree needs to be launched.
8 - Operational error.
16 - Usage or syntax error.
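That exit-code table maps directly onto a small decision helper — a sketch only; reiserfsck itself must run as root on an unmounted partition, and /dev/sda1 below is just the example device from the steps above:

```shell
# Encodes the reiserfsck exit-code table above as a decision helper.
next_step() {
  case "$1" in
    0)  echo "no errors found" ;;
    1)  echo "run: reiserfsck --fix-fixable --logfile fixable.log /dev/sda1" ;;
    2)  echo "back up the partition, then: reiserfsck --rebuild-tree --logfile rebuild.log /dev/sda1" ;;
    8)  echo "operational error -- check the log" ;;
    16) echo "usage or syntax error" ;;
    *)  echo "unexpected exit status: $1" ;;
  esac
}

# Real usage would be:
#   reiserfsck --check --logfile check.log /dev/sda1
#   next_step $?
next_step 2
```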

Good luck.

varj 08-20-2008 07:21 AM

Run fsck against the filesystems, not the raw disk devices.

With LVM, that means /dev/vgname/lvname.

First, though, you need to fix the LVM errors: replace the faulty disk and give the new one the same UUID with pvcreate, or just force-remove the missing disk from the volume group.
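Sketched out, that suggestion looks roughly like the following. The UUID is the one from the vgdisplay errors above; the replacement-disk name is hypothetical, and every command here rewrites LVM metadata, so image the surviving disks first and keep the echo runner until you are sure:

```shell
# DANGER: these commands rewrite LVM metadata. Pass "echo" to preview,
# or "" (as root, AFTER imaging every surviving disk) to actually run.
MISSING_UUID='McsMZm-0flm-g7Fx-0hBU-8hl8-K6GK-cyJDs5'  # from the vgdisplay error
NEW_DISK=/dev/sdX                                      # hypothetical replacement disk
recover_engine() {
  run=$1
  # Recreate the missing PV under its old UUID, using the automatic
  # metadata backup lvm2 keeps in /etc/lvm/backup:
  $run pvcreate --uuid "$MISSING_UUID" --restorefile /etc/lvm/backup/engine "$NEW_DISK"
  $run vgcfgrestore engine      # restore the volume-group metadata
  $run vgchange -ay engine      # activate the logical volumes
  # Alternative (the "force remove" route; loses the missing disk's extents):
  # $run vgreduce --removemissing engine
}
recover_engine echo             # dry preview
```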

