Broken LVM Ubuntu Feisty. Recovery Possible?
Hi There,
I previously posted my question here but gave up too easily. I've done some reading and want to tackle this again; there are 10 years of data there that I have to get back :/ All help gratefully received! Basically, I had two mirrored LVM volume groups, one called 'ivor' and one called 'engine' (two 3 x 500GB SATA arrays), which backed up to each other overnight. I had two disks fail simultaneously (drive 3 of each array). After much poking about and futile attempts to get at the data in the meantime, I find the following:
Code:
root@bernards-server:~# vgdisplay
Code:
root@bernards-server:~# lvdisplay
The ivor volume group is the right size for an array that lost a disk (i.e. 1TB instead of 1.5TB), so do I just need to recreate the ivor logical volume, and then I should be able to get at the data? If so, how do I do that? Recovering either 'engine' or 'ivor' should be feasible as long as I can copy the data off either LVM minus one disk. |
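Before recreating anything, it may help to see what LVM itself thinks survived, and to try activating the volume group in degraded mode. This is only a sketch: 'ivor' is the volume group name from the post above, the logical volume names are unknown, and the commands assume the LVM2 tools.

```shell
#!/bin/sh
# Sketch: inspect a degraded volume group before trying to repair it.
# 'ivor' is the VG name from the post; adjust to match your system.
VG=ivor

# Only run the LVM commands where the tools are actually installed.
if command -v vgchange >/dev/null 2>&1; then
    pvscan || true              # list the physical volumes LVM can still see
    vgscan || true              # rescan for volume groups
    vgdisplay -v "$VG" || true  # show the VG, its LVs, and any missing PVs

    # Try to activate the logical volumes even though a PV is missing:
    vgchange -ay --partial "$VG" || true
    lvscan || true              # surviving LVs should appear under /dev/ivor/
fi
```

With `--partial`, LVM activates whatever it can reach; if the data on the failed disk was mirrored elsewhere, the logical volume may then be mountable (read-only at first is safest).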
LVM messed up
First... condolences.
Second, since you describe yourself as a newbie, you may have one door open to you that you haven't tried: fsck. fsck must be run on unmounted media. Get your Feisty 7.04 "Live CD", insert it into your CD drive, and boot up. At some point you will be given an opportunity to get to a command line. On the command line, type
Code:
fsck -N
[That -N is very important: it keeps it all a dry run!] Answer Y to everything... this will take some time, so be patient. This will "dry run" fsck on your file systems. You will see output scrolling up your screen, and at the very end there ought to be a message telling you what fsck "would have" done.
If you approve of what it would have done (and this can be dicey; do nothing unless you are sure), type fsck on the command line and re-answer all those questions with Y. The file systems will be repaired (we hope). If you have ANY doubts, ask another Linux user (as in a local Linux User Group) to guide you through.
I have repaired file systems using fsck and I have been pleased with the results. Of course, at the end of all this you may find a corrupt file in the midst of your file system - the culprit that started this whole descent into file system hell. See if you can match it up with what it should be... maybe you can repair it (or delete it). I hope this helps you. |
Thanks !
Thanks for the reply :) I was starting to lose hope !
I'll give your suggestion a go & report back. As I said in my 1st post I have 2 broken LVMs, so I do at least get 2 goes at it ;) Cheers. |
Hmmmm
Well, I gave it a go. Not much happening, to be honest. I tried:
Code:
fsck -N
Code:
root@bernards-server:~# fsck -N
Code:
fsck -N /dev/sda1
Code:
root@bernards-server:~# fsck -N /dev/sda
All drives come up in gparted as /dev/sda, /dev/sdb, /dev/sdc etc. but show as an unknown filesystem, with 'flags' set to lvm. Any ideas why fsck can't see them? |
Look into this?
Here is some of the documentation for reiserfsck (you can find it by running a search for "fsck" after clicking "System > Help > System documentation"). You may wish to do steps 1 and 2, changing "/dev/hda1" to "/dev/sda" or "/dev/sda1", to see if you get some usable information.
EXAMPLE OF USING
1. You think something may be wrong with a reiserfs partition on /dev/hda1, or you would just like to perform a periodic disk check.
2. Run reiserfsck --check --logfile check.log /dev/hda1. If reiserfsck --check exits with status 0, it means no errors were discovered.
3. If reiserfsck --check exits with status 1 (and reports fixable corruptions), it means that you should run reiserfsck --fix-fixable --logfile fixable.log /dev/hda1.
4. If reiserfsck --check exits with status 2 (and reports fatal corruptions), it means that you need to run reiserfsck --rebuild-tree. If reiserfsck --check fails in some way, you should also run reiserfsck --rebuild-tree, but we also encourage you to submit this as a bug report.
5. Before running reiserfsck --rebuild-tree, please make a backup of the whole partition. Then run reiserfsck --rebuild-tree --logfile rebuild.log /dev/hda1.
6. If the --rebuild-tree step fails or does not recover what you expected, please submit this as a bug report. Try to provide as much information as possible and we will try to help solve the problem.
EXIT CODES
reiserfsck uses the following exit codes:
0 - No errors.
1 - Errors found; reiserfsck --fix-fixable needs to be launched.
2 - Errors found; reiserfsck --rebuild-tree needs to be launched.
8 - Operational error.
16 - Usage or syntax error.
Good luck. |
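The escalation above (check, then fix-fixable, then rebuild-tree only after a backup) can be sketched as one script. This assumes the filesystem really is ReiserFS and that the logical volume appears at a path like /dev/ivor/backup (hypothetical; substitute your actual LV, not a raw /dev/sdX disk).

```shell
#!/bin/sh
# Sketch of the documented reiserfsck escalation. DEV is hypothetical.
DEV=/dev/ivor/backup

if [ -e "$DEV" ] && command -v reiserfsck >/dev/null 2>&1; then
    # --check is read-only; the exit status tells us what to do next.
    reiserfsck --check --logfile check.log "$DEV" && status=0 || status=$?
    case $status in
        0) echo "no errors found" ;;
        1) reiserfsck --fix-fixable --logfile fixable.log "$DEV" ;;
        2) echo "fatal corruption: back up the whole partition, then run:"
           echo "  reiserfsck --rebuild-tree --logfile rebuild.log $DEV" ;;
        *) echo "operational or usage error; see check.log" ;;
    esac
fi
```

Note that --rebuild-tree is deliberately not run automatically here: it rewrites the filesystem tree and should only follow a full backup of the partition, as the documentation says.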
Run fsck against the filesystems, not the raw disk devices.
With LVM, this means /dev/vgname/lvname. First, though, you need to fix the LVM errors: replace the faulty disk and give the new one the same UUID with pvcreate, or just force-remove the missing disk from the volume group. |
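A sketch of that sequence: restore the missing PV's identity onto a replacement disk (or drop the missing PV), reactivate the VG, then fsck the logical volume rather than /dev/sdX. The UUID, replacement device, and LV name below are placeholders; the real PV UUID is recorded in the metadata backup under /etc/lvm/backup/ivor. The destructive commands are left commented out on purpose.

```shell
#!/bin/sh
# Sketch: make a VG with a failed PV consistent again, then fsck the LV.
# <MISSING-PV-UUID>, /dev/sdd and <lvname> are placeholders -- take the
# real UUID from the physical_volumes section of /etc/lvm/backup/ivor.
VG=ivor

if command -v pvcreate >/dev/null 2>&1; then
    # Option A: stamp the replacement disk with the old PV's UUID,
    # then restore the saved VG metadata:
    # pvcreate --uuid "<MISSING-PV-UUID>" \
    #          --restorefile /etc/lvm/backup/$VG /dev/sdd
    # vgcfgrestore "$VG"

    # Option B: force the missing PV out of the VG (its data is lost):
    # vgreduce --removemissing "$VG"

    # Either way, reactivate and fsck the *logical volume*, dry run first:
    # vgchange -ay "$VG"
    # fsck -N /dev/$VG/<lvname>
    :
fi
```

This explains the gparted symptom earlier in the thread, too: /dev/sda etc. show as "unknown filesystem" with the lvm flag because they hold LVM physical volumes, not filesystems, so fsck has nothing to check until the logical volumes are activated.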