Replacing a dead drive in an LVM that consisted of 3 drives
Hi, I have an old computer here running Fedora Core 2 that had 3 hard drives combined into one LVM2 volume group.
This is the output from parted for the first two disks - Code:
Using /dev/sda
Code:
Using /dev/sdb
This is what I get for pvscan, lvscan, and vgscan - Code:
[root@computer ~]# pvscan
Code:
[root@computer ~]# lvscan
Code:
[root@computer ~]# vgscan
I followed the directions titled "Disk Permanently Removed" in the hope that I could at least get access to the data on the two good disks. Following them, I got a new 500GB HD (the previous drive was 400GB; both spin at 7200 RPM). It wasn't formatted or anything, brand spankin' new right out of the package. Here is its information via parted - Code:
Disk geometry for /dev/sdc: 0.000-476940.023 megabytes
So, I continued following the directions - Code:
[root@computer ~]# pvcreate --uuid W0EUm5-wP50-qZNu-r81K-VNec-vivn-DDjXAx /dev/sdc
Code:
[root@computer ~]# vgcfgrestore vhe8_disks
Code:
[root@computer ~]# e2fsck -y /dev/vhe8_disks/data
So I tried many different things, including formatting the disk beforehand. I formatted it as ext2 and as ReiserFS, and the outcome was the same. I then went into fdisk, did a few things, and this is the output - Code:
Command (m for help): p
I can't get the computer to boot without commenting out the volume in fstab, and even then the volume won't mount. Is there any way to view the data on the two working disks? I'm stumped. |
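For reference, the standard LVM2 procedure for a permanently removed disk also hands pvcreate the archived metadata via --restorefile, so the recreated PV gets a matching extent layout before vgcfgrestore runs. This is only a sketch, reusing the UUID and VG name from the post above; the archive filename vhe8_disks_00000.vg is an assumption - list /etc/lvm/archive to find the real one.

```shell
# Recreate the missing PV with the old disk's UUID, pointing pvcreate at
# the archived metadata (filename below is a guess - check /etc/lvm/archive).
pvcreate --uuid "W0EUm5-wP50-qZNu-r81K-VNec-vivn-DDjXAx" \
         --restorefile /etc/lvm/archive/vhe8_disks_00000.vg /dev/sdc

# Restore the VG metadata from the same archive file, then activate.
vgcfgrestore -f /etc/lvm/archive/vhe8_disks_00000.vg vhe8_disks
vgchange -ay vhe8_disks

# Check the filesystem read-only first; only rerun with -y once -n looks sane.
e2fsck -n /dev/vhe8_disks/data
```

Running e2fsck read-only first matters here: with a disk's worth of data genuinely missing, -y can "fix" the filesystem into something far less recoverable.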
You have to resize the filesystem after the pv has been created. The volume group descriptors are probably still referring to the metadata on the old disk, and without the correct metadata you have little hope of finding any files. LVM doesn't need the disks to be partitioned.
You could complete the process of adding the new disk by allocating the physical extents to the new pv and then resizing the filesystem to include that space, and see if that helps. You should extend the volume group to include the new disk. At this stage you might also try mounting again and see if the volume comes up. You might get access to some files on the original 2 disks, but you may find files listed that don't actually exist, because they were on the dead drive. |
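The steps suggested above, as a concrete sketch only - it assumes the VG/LV names from the original post (vhe8_disks, data), that /dev/sdc has already been set up as a PV, and that it is not yet part of the VG. The +100%FREE syntax needs a reasonably recent LVM2; older tools take an explicit extent count after -l.

```shell
vgextend vhe8_disks /dev/sdc                  # add the new PV to the volume group
lvextend -l +100%FREE /dev/vhe8_disks/data    # allocate the free extents to the LV
resize2fs /dev/vhe8_disks/data                # grow the ext2/ext3 filesystem to match
mount /dev/vhe8_disks/data /mnt               # then see whether it mounts
```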
@smoker you've lost me. The OP did a vgcfgrestore and vgscan worked - so the metadata appears valid. It depends whether things have changed since the backup was taken.
The error appears to be with the filesystem itself. What filesystem was there before the failure? Which was the first disk in the vg? |
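One way to answer the filesystem question directly is to probe the LV device for a signature - a sketch, assuming the LV path from the post and that the VG activates:

```shell
# Probe the device for a filesystem signature (-s reads special files,
# -L follows the /dev/vhe8_disks/data symlink to the device-mapper node).
file -sL /dev/vhe8_disks/data

# If it was ext2/ext3, dumpe2fs will print the superblock header,
# including backup superblock locations useful for e2fsck -b.
dumpe2fs -h /dev/vhe8_disks/data
```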
Yeah, I think the problem is in the file system. After running through the pvcreate/vgcfgrestore process with a completely new and empty disk, I booted the computer from an Ubuntu live CD and looked at the disks with gparted. It showed the file systems on the two working disks as lvm2, but the new disk as unallocated.
Next, I tried changing the partition's system id to Linux LVM. Unfortunately I don't have access to this computer at the moment, but one thing I did notice was a slight difference between the disks when I ran parted on them. I got something like this (like I said, I don't have access to the computer right now, so this isn't completely accurate; asterisks indicate specific numbers I don't know off the top of my head. The important part I want you to notice is in bold). - Code:
Using /dev/sdc
So, I'm only assuming the disk that is currently sda was the first disk in the vg, since it's the disk with the / partition. However, it very well could have been the dead disk; I really don't know. One thing that occurred to me is that the superblock could be on the dead disk. Would this matter for a logical volume? There are backups of the metadata stored on the computer at /etc/lvm/backup and /etc/lvm/archive. I'll post them when I go back to work on Monday. This would all be very simple if only they'd created backups!! At this point I'm seriously thinking about freezing the dead disk http://www.datarecoverypros.com/hard...ry-freeze.html, hoping it works, and doing a dd! Thanks for all your help everyone, I really appreciate it! |
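If the freezer trick does revive the dead drive, the useful move is to image it immediately, before it fails again. A sketch only - /dev/sdX stands for wherever the dead 400GB disk actually appears (confirm with dmesg first), and the 500GB disk is assumed to still be /dev/sdc:

```shell
# Plain dd that presses on past read errors; conv=noerror,sync pads
# unreadable blocks with zeros so the image keeps its size and offsets.
dd if=/dev/sdX of=/dev/sdc bs=64k conv=noerror,sync

# GNU ddrescue, if available, handles failing disks much better
# (retries, logs what it could and couldn't read):
#   ddrescue -f /dev/sdX /dev/sdc rescue.log
```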
When I had a disk die on me, it was one of a set of 5 in LVM. I added a new disk as per the normal method, forcibly removed the old one from the group, then ran the utility that rebuilds the metadata from the existing file. I lost whatever files were on the dead disk, but I got the rest of them back. Overall, I lost about 5 GB of files from (at the time) an LVM that had 700 GB written to it.
I see the OP has moved disks around, and this could complicate matters further. I did have a howto that I wrote when my disk went bad, but due to an unfortunate (and careless) rm error recently, I lost a lot of text files in my home directory! The disks don't have to be the same size if you replace them. I suggested adding the new extents and resizing the volume only to get the LV as close to working order as possible (this should also rewrite the metadata properly). vgscan shows there is an LV there, but it's not mounting, so it seems appropriate to rebuild the LV properly. Otherwise there isn't much point using the new disk at all - just forcibly remove the dead pv from the vg. |
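For the "forcibly remove the dead pv" route, LVM2's tool is vgreduce - a sketch, assuming the VG name from the original post. Be aware that with striped LVs this is likely to take the whole LV with it, since every stripe crosses the missing disk:

```shell
# Drop the missing PV and any metadata references to it.
vgreduce --removemissing vhe8_disks

# If LVs still using the missing PV block that, the harsher form
# (available in newer LVM2) also removes those partial LVs:
#   vgreduce --removemissing --force vhe8_disks

# Then try activating whatever is left.
vgchange -ay vhe8_disks
```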
This is the backup file found in /etc/lvm/backup/
Code:
# Generated by LVM2: Fri Oct 15 14:20:06 2004
As you can see, the data is striped. Does this mean that I'll only be able to recover 2/3 of each file? If so, it would seem the only option is to recover the data from the dead disk. |
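The stripe layout can be confirmed straight from the backup file - each segment of a striped LV records a stripe_count and the PVs it stripes across. A sketch, assuming the backup file is named after the VG as usual:

```shell
# Show stripe settings in the backed-up metadata; stripe_count = 3 means
# the LV's data is interleaved across all three disks.
grep -n -A4 'stripe' /etc/lvm/backup/vhe8_disks
```

If it really is striped across all three, then roughly a third of every file larger than one stripe sits on the dead disk, so getting whole files back would indeed depend on recovering that drive.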