LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   LVM Mount Physical Volume/Logical Volume without a working Volume Group (https://www.linuxquestions.org/questions/linux-newbie-8/lvm-mount-physical-volume-logical-volume-without-a-working-volume-group-845424/)

mpivintis 11-19-2010 09:38 PM

LVM Mount Physical Volume/Logical Volume without a working Volume Group
 
I am attempting to access data on an LVM2 partition that was corrupted by a "pvcreate -ff".

When running pvs, the results look healthy: a physical volume of 238 GB, but with no VG (volume group) that it belongs to.

At the time the "pvcreate -ff" was performed, lvdisplay also showed a valid logical volume.

Further attempts to mount the LVM were halted for fear of damaging the partition, though the damage had already been done out of negligence.

The LVM contains a single unencrypted ext2/3 filesystem on one hard drive containing Fedora Core 8. Worst case scenario, I would imagine an ext2/3 drive scan and recovery is possible with GetDataBack or similar.

The drive contained the OS and root folders, thus the metadata backup of the LVM is inaccessible at the current time.

Initially, I am looking for a solution to place the physical/logical volume into a new volume group to access the files. I can connect the drive to a working Fedora Core 12 machine that I am looking to copy the drive contents onto.

Can anyone help or point me in the right direction?

I can get printouts of 'pvs', but am unsure what other commands can be run without losing the data.

rayfordj 11-20-2010 10:25 AM

Without the metadata in /etc/lvm, you should pull it from the disk. While some assumptions can be made to try to manually access the extents using dmsetup to bring the filesystem up, it could cause more headache if we guess wrong. The on-disk LVM2 metadata could be inaccurate too, depending on what all was done. You will need a generally accurate idea of what the structure looked like in order to decide whether to trust the information.

To get at the on-disk LVM2 metadata:
Code:

# dd if=/dev/sda2 bs=512 count=24 | strings
## OR ##
# dd if=/dev/sda2 bs=512 count=24 | xxd

Where /dev/sda2 is the partition of the PV. The above should run a little beyond the metadata area, but hopefully give us the accurate (what we expect) details of what was on there. Unfortunately, without knowing all of the details and what exactly was done with the 'pvcreate -ff', it may have already wiped the previous information. If the metadata resides on another PV on another disk in the system, we might be able to pull it from there to reference. As far as what other commands can be run without losing data, any of the read-only commands (pvs, vgs, lvs, {pv,vg,lv}display, ...) should be safe.
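
If it helps, you can also save the same dump to a file first so it can be referenced (or posted) later; the 24 sectors cover the LVM2 label and the start of the metadata area, and the output path here is just an example:
Code:

# dd if=/dev/sda2 bs=512 count=24 of=/root/sda2-lvm-header.bin
# strings /root/sda2-lvm-header.bin | less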

mpivintis 11-20-2010 12:58 PM

[root]# pvdisplay
"/dev/sda2" is a new physical volume of "232.69 GiB"
--- NEW Physical volume ---
PV Name /dev/sda2
VG Name
PV Size 232.69 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID q22ACD-UP43-s1Vl-tDEn-9bGR-6sGH-haEPJq

[root]# vgdisplay
No volume groups found
[root]# lvdisplay
No volume groups found

mpivintis 11-20-2010 01:05 PM

[root]# dd if=/dev/sda2 bs=512 count=24 | strings
LABELONE
LVM2 001q22ACDUP43s1VltDEn9bGR6sGHhaEPJq
PI,:
5` LVM2 x[5A%r0N*>
VolGroup00 {
id = "8xZnqr-HuAA-fGqj-GvjO-FwOv-ts0n-Ft9GG6"
seqno = 1
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "DsTaap-qPWK-G9nI-aXW8-B0K5-WQoH-LbI0rf"
device = "/dev/sda2"
status = ["ALLOCATABLE"]
dev_size = 487990440
pe_start = 384
pe_count = 7446
# Generated by LVM2 version 2.02.33 (2008-01-31): Mon Mar 16 17:40:33 2009
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.25-14.fc9.i586 #1 SMP Thu May 1 05:49:25 EDT 2008 i686
creation_time = 1237225233 # Mon Mar 16 17:40:33 2009
VolGroup00 {
id = "8xZnqr-HuAA-fGqj-GvjO-FwOv-ts0n-Ft9GG6"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "DsTaap-qPWK-G9nI-aXW8-B0K5-WQoH-LbI0rf"
device = "/dev/sda2"
status = ["ALLOCATABLE"]
dev_size = 487990440
pe_start = 384
pe_count = 7446
logical_volumes {
LogVol00 {
id = "A6MyyY-Oku3-ltkA-IH13-WFhZ-cFek-ZMmCqk"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 7383
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
# Generated by LVM2 version 2.02.33 (2008-01-31): Mon Mar 16 17:40:33 2009
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.25-14.fc9.i586 #1 SMP Thu May 1 05:49:25 EDT 2008 i686
creation_time = 1237225233 # Mon Mar 16 17:40:33 2009
VolGroup00 {
id = "8xZnqr-HuAA-fGqj-GvjO-FwOv-ts0n-Ft9GG6"
seqno = 3
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "DsTaap-qPWK-G9nI-aXW8-B0K5-WQoH-LbI0rf"
device = "/dev/sda2"
status = ["ALLOCATABLE"]
dev_size = 487990440
pe_start = 384
pe_count = 7446
logical_volumes {
LogVol00 {
id = "A6MyyY-Oku3-ltkA-IH13-WFhZ-cFek-ZMmCqk"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 7383
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
LogVol01 {
id = "d7JL15-l5VN-cnIC-fzvC-LU08-3aNz-B9Qd6Q"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 62
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 7383
# Generated by LVM2 version 2.02.33 (2008-01-31): Mon Mar 16 17:40:33 2009
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.25-14.fc9.i586 #1 SMP Thu May 1 05:49:25 EDT 2008 i686
creation_time = 1237225233 # Mon Mar 16 17:40:33 2009
VolGroup00 {
id = "jkK5zj-JOnc-P03p-e1IX-ANcQ-8Olv-8IOZ9j"
seqno = 4
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "9c0OLA-hS99-B4Hh-ewEM-7nTm-9h0q-lmOfvK"
device = "/dev/sda2"
status = ["ALLOCATABLE"]
dev_size = 487990440
pe_start = 384
pe_count = 7446
logical_volumes {
LogVol00 {
id = "e9Z78b-ytzP-4AeB-EWML-OnJy-TNOW-N1mUZW"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 7383
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
# Generated by LVM2 version 2.02.33 (2008-01-31): Mon Mar 16 17:40:32 2009
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.25-14.fc9.i586 #1 SMP Thu May 1 05:49:25 EDT 2008 i686
creation_time = 1237225232 # Mon Mar 16 17:40:32 2009
VolGroup00 {
id = "jkK5zj-JOnc-P03p-e1IX-ANcQ-8Olv-8IOZ9j"
seqno = 5
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "9c0OLA-hS99-B4Hh-ewEM-7nTm-9h0q-lmOfvK"
device = "/dev/sda2"
status = ["ALLOCATABLE"]
dev_size = 487990440
pe_start = 384
pe_count = 7446
# Generated by LVM2 version 2.02.33 (2008-01-31): Mon Mar 16 17:40:32 2009
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.25-14.fc9.i586 #1 SMP Thu May 1 05:49:25 EDT 2008 i686
creation_time = 1237225232 # Mon Mar 16 17:40:32 2009
24+0 records in
24+0 records out
12288 bytes (12 kB) copied, 0.000517885 s, 23.7 MB/s


What do I do with that now?

rayfordj 11-20-2010 05:25 PM

OK, from what I can tell it looks like you had LogVol00 which was ~230GB, right?
If you are not so interested in recovering the LV as you are in copying the data to your F12 system, you can try bringing up the LV directly via dmsetup just to mount it and copy the data off. That way you don't risk further messing with the on-disk LVM metadata.

If my math is correct, based on the info provided this should do it:
Code:

# echo 0 483852288 linear /dev/sda2 384 | dmsetup create LQ00
# mount /dev/mapper/LQ00 /mnt/recoverydir/

where /dev/sda2 is the partition of the PV and /mnt/recoverydir/ is a location of your choosing to then copy the data off if it mounts without error.
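
If you want to be extra careful, you could also create the mapping read-only and mount with -o ro (the noload option additionally skips ext3 journal replay; drop it if the filesystem turns out to be plain ext2). This is optional:
Code:

# echo 0 483852288 linear /dev/sda2 384 | dmsetup create --readonly LQ00
# mount -o ro,noload /dev/mapper/LQ00 /mnt/recoverydir/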

to remove the device-mapper device after you are done:
Code:

# dmsetup remove LQ00

DrBenzo 01-07-2014 06:57 AM

Quote:

Originally Posted by rayfordj (Post 4165687)
OK, from what I can tell it looks like you had LogVol00 which was ~230GB, right?
If you are not so interested in recovering the LV as you are in copying the data to your F12 system, you can try bringing up the LV directly via dmsetup just to mount it and copy the data off. That way you don't risk further messing with the on-disk LVM metadata.

If my math is correct, based on the info provided this should do it:
Code:

# echo 0 483852288 linear /dev/sda2 384 | dmsetup create LQ00
# mount /dev/mapper/LQ00 /mnt/recoverydir/

where /dev/sda2 is the partition of the PV and /mnt/recoverydir/ is a location of your choosing to then copy the data off if it mounts without error.

to remove the device-mapper device after you are done:
Code:

# dmsetup remove LQ00

Hi,

sorry for digging out this old thread, but I have a similar problem and want to know how you calculated the 483852288 in:

Code:

# echo 0 483852288 linear /dev/sda2 384 | dmsetup create LQ00

rayfordj 01-07-2014 01:54 PM

LV extent count (7383) * extent size in sectors (65536) = 483852288 sectors. The trailing 384 in the table is the PV's pe_start: the sector offset on /dev/sda2 where physical extent 0 begins.

If you have only a single segment for the LV, this is fairly easy. If you have multiple segments, it gets a bit trickier because you'll need to ensure you're stacking them in the correct order and using the right offsets (see the sketch below). If you have (or can get) a vgcfgbackup file, feel free to post it in CODE blocks and we can take a look at your particular config. If not, you should be able to dump the same info (less the nice formatting) as noted above, which could then be pasted in CODE blocks for review.
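
For example, a purely hypothetical LV built from two linear segments (placeholder devices, extent_size 8192 sectors, pe_start 384, segment1 = 100 extents on /dev/sdX1, segment2 = 50 extents on /dev/sdY1) would stack like this, where each table line is <start_sector> <length_in_sectors> linear <device> <offset>, and <offset> = pe_start + (starting PV extent * extent_size):
Code:

# dmsetup create LQ01 <<'EOF'
0 819200 linear /dev/sdX1 384
819200 409600 linear /dev/sdY1 384
EOF

The second line starts at sector 819200 because the first segment covers 100 * 8192 sectors. Get the order or offsets wrong and the filesystem will look corrupt even though the data is fine.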

DrBenzo 01-08-2014 05:41 PM

Thanks for the help and reply; I've got two vgcfgbackups:

One from when the dying device (/dev/sdb) was still in place, and a newer one with its replacement (/dev/sdd).
The dying disk (sdb) was rescued onto sdd, and onto an image file, using (g)ddrescue.

So pvscan now shows it using sdd instead of sdb (which is great!):

Code:

# pvscan
  Found duplicate PV ZCYdUtWWuF4aKfrs2N0pjN4imQWI95lw: using /dev/sdd not /dev/sdb
  PV /dev/sdf1  VG data    lvm2 [931,51 GiB / 0    free]
  PV /dev/sdg1  VG data    lvm2 [931,51 GiB / 0    free]
  PV /dev/sdh    VG data    lvm2 [931,51 GiB / 0    free]
  PV /dev/sde    VG data    lvm2 [931,51 GiB / 0    free]
  PV /dev/sdd    VG data    lvm2 [2,73 TiB / 0    free]
  PV /dev/sdc    VG data    lvm2 [2,73 TiB / 0    free]
  PV /dev/sda2  VG system  lvm2 [55,88 GiB / 0    free]
  Total: 7 [9,15 TiB] / in use: 7 [9,15 TiB] / in no VG: 0 [0  ]

The old vgcfgbackup with sdb shows:

Code:

# Generated by LVM2 version 2.02.67(2) (2010-06-04): Wed Dec 11 22:10:21 2013

contents = "Text Format Volume Group"
version = 1

description = "vgcfgbackup -v -f vgcfg_backup.backup"

creation_host = "Server_nVidia"        # Linux Server_nVidia 2.6.34.8-0.2-default #1 SMP 2011-04-06 18:11:26 +0200 i686
creation_time = 1386796221        # Wed Dec 11 22:10:21 2013

data {
        id = "GWtT8k-DjxI-acYk-3Y6n-TMkX-1HH0-zoyIcx"
        seqno = 9
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192                # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "vH0uO6-N9iY-HGoK-0CX4-1Xpv-7L8i-v40V4Y"
                        device = "/dev/sdf1"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953520002        # 931,511 Gigabytes
                        pe_start = 384
                        pe_count = 238466        # 931,508 Gigabytes
                }

                pv1 {
                        id = "N1A1j0-ZyG4-KxGj-F6PE-tUTw-ZIJX-vUskoP"
                        device = "/dev/sdg1"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953520002        # 931,511 Gigabytes
                        pe_start = 384
                        pe_count = 238466        # 931,508 Gigabytes
                }

                pv2 {
                        id = "9tSm0A-vQ3f-TnuB-9Z7G-Yk8J-K1R3-oTmY8B"
                        device = "/dev/sdh"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953525168        # 931,513 Gigabytes
                        pe_start = 384
                        pe_count = 238467        # 931,512 Gigabytes
                }

                pv3 {
                        id = "0wIY4b-Z6cs-9hNA-eEO2-TLEw-Pe1i-S1D1F3"
                        device = "/dev/sde"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953525168        # 931,513 Gigabytes
                        pe_start = 384
                        pe_count = 238467        # 931,512 Gigabytes
                }

                pv4 {
                        id = "ZCYdUt-WWuF-4aKf-rs2N-0pjN-4imQ-WI95lw"
                        device = "/dev/sdb"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 5860533168        # 2,72902 Terabytes
                        pe_start = 384
                        pe_count = 715397        # 2,72902 Terabytes
                }

                pv5 {
                        id = "x0Qj2A-4kPf-NyIF-KqFR-VHs3-MJE0-QVMsih"
                        device = "/dev/sdc"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 5860533168        # 2,72902 Terabytes
                        pe_start = 384
                        pe_count = 715397        # 2,72902 Terabytes
                }
        }

        logical_volumes {

                GrandCentralStation {
                        id = "JTgKDk-yxQg-mE00-iLin-OXTw-Dusu-EHKXVe"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 6

                        segment1 {
                                start_extent = 0
                                extent_count = 238466        # 931,508 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                        segment2 {
                                start_extent = 238466
                                extent_count = 238466        # 931,508 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 476932
                                extent_count = 238467        # 931,512 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                        segment4 {
                                start_extent = 715399
                                extent_count = 238467        # 931,512 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv3", 0
                                ]
                        }
                        segment5 {
                                start_extent = 953866
                                extent_count = 715397        # 2,72902 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv4", 0
                                ]
                        }
                        segment6 {
                                start_extent = 1669263
                                extent_count = 715397        # 2,72902 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv5", 0
                                ]
                        }
                }
        }
}

And the current one:

Code:

# Generated by LVM2 version 2.02.67(2) (2010-06-04): Sun Dec 22 20:51:55 2013

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgcfgbackup data'"

creation_host = "Server_nVidia"        # Linux Server_nVidia 2.6.34.8-0.2-default #1 SMP 2011-04-06 18:11:26 +0200 i686
creation_time = 1387741915        # Sun Dec 22 20:51:55 2013

data {
        id = "GWtT8k-DjxI-acYk-3Y6n-TMkX-1HH0-zoyIcx"
        seqno = 16
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192                # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "vH0uO6-N9iY-HGoK-0CX4-1Xpv-7L8i-v40V4Y"
                        device = "/dev/sdf1"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953520002        # 931,511 Gigabytes
                        pe_start = 384
                        pe_count = 238466        # 931,508 Gigabytes
                }

                pv1 {
                        id = "N1A1j0-ZyG4-KxGj-F6PE-tUTw-ZIJX-vUskoP"
                        device = "/dev/sdg1"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953520002        # 931,511 Gigabytes
                        pe_start = 384
                        pe_count = 238466        # 931,508 Gigabytes
                }

                pv2 {
                        id = "9tSm0A-vQ3f-TnuB-9Z7G-Yk8J-K1R3-oTmY8B"
                        device = "/dev/sdh"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953525168        # 931,513 Gigabytes
                        pe_start = 384
                        pe_count = 238467        # 931,512 Gigabytes
                }

                pv3 {
                        id = "0wIY4b-Z6cs-9hNA-eEO2-TLEw-Pe1i-S1D1F3"
                        device = "/dev/sde"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1953525168        # 931,513 Gigabytes
                        pe_start = 384
                        pe_count = 238467        # 931,512 Gigabytes
                }

                pv4 {
                        id = "ZCYdUt-WWuF-4aKf-rs2N-0pjN-4imQ-WI95lw"
                        device = "/dev/sdd"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 5860533168        # 2,72902 Terabytes
                        pe_start = 384
                        pe_count = 715397        # 2,72902 Terabytes
                }

                pv5 {
                        id = "x0Qj2A-4kPf-NyIF-KqFR-VHs3-MJE0-QVMsih"
                        device = "/dev/sdc"        # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 5860533168        # 2,72902 Terabytes
                        pe_start = 384
                        pe_count = 715397        # 2,72902 Terabytes
                }
        }

        logical_volumes {

                GrandCentralStation {
                        id = "JTgKDk-yxQg-mE00-iLin-OXTw-Dusu-EHKXVe"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 6

                        segment1 {
                                start_extent = 0
                                extent_count = 238466        # 931,508 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                        segment2 {
                                start_extent = 238466
                                extent_count = 238466        # 931,508 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 476932
                                extent_count = 238467        # 931,512 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                        segment4 {
                                start_extent = 715399
                                extent_count = 238467        # 931,512 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv3", 0
                                ]
                        }
                        segment5 {
                                start_extent = 953866
                                extent_count = 715397        # 2,72902 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv4", 0
                                ]
                        }
                        segment6 {
                                start_extent = 1669263
                                extent_count = 715397        # 2,72902 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv5", 0
                                ]
                        }
                }
        }
}


rayfordj 01-09-2014 10:56 AM

Code:

$ diff /tmp/old.vg /tmp/new.vg
1c1
< # Generated by LVM2 version 2.02.67(2) (2010-06-04): Wed Dec 11 22:10:21 2013
---
> # Generated by LVM2 version 2.02.67(2) (2010-06-04): Sun Dec 22 20:51:55 2013
6c6
< description = "vgcfgbackup -v -f vgcfg_backup.backup"
---
> description = "Created *after* executing 'vgcfgbackup data'"
9c9
< creation_time = 1386796221    # Wed Dec 11 22:10:21 2013
---
> creation_time = 1387741915    # Sun Dec 22 20:51:55 2013
13c13
<      seqno = 9
---
>      seqno = 16
68c68
<                      device = "/dev/sdb"    # Hint only
---
>                      device = "/dev/sdd"    # Hint only

Looking at a diff of the two, it looks like it's using the 'new' disk /dev/sdd.

Are you having problems assembling the LV? If you've rescued the data from sdb to another disk (sdd); are now using that disk (sdd) as a member of the VG; and your GrandCentralStation LV is assembling correctly, I'm not sure why you'd need to manually build the device using dmsetup directly.
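
You can double-check which devices back the LV with a read-only query:
Code:

# lvs -o +devices data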

DrBenzo 01-11-2014 05:22 AM

Yes, you are right: sdd was added correctly to the assembly after I ran ddrescue and replaced sdb, which was dying. My problem is:

The filesystem stopped working (with sdb), so I thought it was time for an fsck, and ran
Code:

fsck -yv /dev/data/GrandCentralStation
on an unmounted filesystem.
This was a big, big mistake. It resulted in files which contain only zeros (checked with a hex editor) but have the right sizes, names, permissions, etc.

So I need either the image file or the old sdb with the correct filesystem (ext4) mounted alone, so I can get at the filesystem and try to repair it using the journal and a backup superblock (I'm not sure how to do this correctly right now, whether via fsck options I haven't read about yet, or by using ext4magic or testdisk).

My system is openSUSE 11.3 with kernel 2.6.34.

If you have any suggestions or questions, you're welcome!

Best Regards

DrBenzo

rayfordj 01-11-2014 07:02 AM

sdb (or its replacement, sdd) isn't a whole filesystem. It is but one of six segments in the data VG that comprise the GrandCentralStation LV, which is/was/should be a complete filesystem. The files of concern may reside entirely, in part, or not at all on extents that resided on sdb (now sdd).

Any utilities you'd use to carve out whatever data may reside on the dm device you'd create using just sdb should work just as effectively directly on the block device (sdb).
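
For instance, if you try the backup-superblock route, it would have to be against the assembled LV (the filesystem spans all six segments, not sdb alone). Something along these lines is the usual approach; dumpe2fs lists the real backup superblock locations, -n keeps the check read-only, and the 32768 is only an example:
Code:

# dumpe2fs /dev/data/GrandCentralStation | grep -i superblock
# fsck.ext4 -n -b 32768 /dev/data/GrandCentralStation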

Off-hand, I suggest starting a new thread specific to your problem and leveraging the LQ community at large. Be sure to detail what happened, what steps you've taken, and the current situation (much of which can be copied directly from your post above). There are lots of very experienced members willing to help, and a thread tailored to your situation will likely gain greater visibility and assistance than continuing in this thread.

