Old 12-29-2017, 12:02 PM   #1
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Rep: Reputation: Disabled
Cannot bring LVM online. Too small for target.


Hi Everyone-

This one has me stumped. I have a client with an old ReadyNAS device, 10+ years old and SPARC based. It uses software RAID with LVM layered on top so the volume can be expanded if needed. Because of the older architecture you can't simply move the drives into a newer unit; the current models are x86 based and use a different on-disk format. I posted on the Netgear forums and they redirected me to https://web.archive.org/web/20161212...bserver/?p=306 and https://web.archive.org/web/20160817...th-ubuntu.html, which walk you through mounting the old volume read-only so the files can be copied off. I am doing this with the CentOS 7.4.1708 live CD, using the fuseext2 rpm from https://centos.pkgs.org/7/forensics-...86_64.rpm.html. The array currently has four 2 TB drives:


Code:
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x9b3e07e2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1              32     4096031     2048000   fd  Linux raid autodetect
/dev/sda2         4096032     5144607      524288   fd  Linux raid autodetect
/dev/sda3         5144608  3907010591  1950932992   fd  Linux raid autodetect

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x9b3e07ec

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              32     4096031     2048000   fd  Linux raid autodetect
/dev/sdb2         4096032     5144607      524288   fd  Linux raid autodetect
/dev/sdb3         5144608  3907010591  1950932992   fd  Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x7f30f898

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              32     4096031     2048000   fd  Linux raid autodetect
/dev/sdc2         4096032     5144607      524288   fd  Linux raid autodetect
/dev/sdc3         5144608  3907010591  1950932992   fd  Linux raid autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x7f30f891

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              32     4096031     2048000   fd  Linux raid autodetect
/dev/sdd2         4096032     5144607      524288   fd  Linux raid autodetect
/dev/sdd3         5144608  3907010591  1950932992   fd  Linux raid autodetect
Each drive has three partitions. The /dev/sd*1 partitions belong to the OS that runs the NAS and are assembled as /dev/md126 (RAID1). The /dev/sd*3 partitions hold the data and are assembled as /dev/md127 (RAID5). Both software RAIDs are intact:

Code:
cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid1 sdd1[2] sda1[3] sdc1[1] sdb1[0]
      2047936 blocks [4/4] [UUUU]

md127 : active (auto-read-only) raid5 sdd3[2] sdc3[1] sda3[3] sdb3[0]
      5852786688 blocks level 5, 4096k chunk, algorithm 0 [4/4] [UUUU]

unused devices: <none>
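For reference, the arrays can also be inspected individually with the standard mdadm commands (nothing here is specific to this NAS):

Code:
mdadm --detail /dev/md127    # array-level details, including the exact Array Size
mdadm --examine /dev/sda3    # superblock info for an individual member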
A vgscan shows the group:

Code:
vgscan
  Reading volume groups from cache.
  Found volume group "c" using metadata type lvm2
pvs also shows it as intact:

Code:
pvs
  PV         VG Fmt  Attr PSize PFree
  /dev/md127 c  lvm2 a--  5.45t    0
and lvdisplay:

Code:
lvdisplay c
  --- Logical volume ---
  LV Path                /dev/c/c
  LV Name                c
  VG Name                c
  LV UUID                NlKR34-7CHi-mTLU-McCD-bvOF-3M9m-gqC0TP
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              suspended
  # open                 0
  LV Size                5.45 TiB
  Current LE             178613
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
and vgdisplay:

Code:
vgdisplay
  --- Volume group ---
  VG Name               c
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.45 TiB
  PE Size               32.00 MiB
  Total PE              178613
  Alloc PE / Size       178613 / 5.45 TiB
  Free  PE / Size       0 / 0
  VG UUID               0Nidfo-3d2b-ayh1-yl48-R0HE-8G11-5jEEby
and pvdisplay:

Code:
pvdisplay
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               c
  PV Size               5.45 TiB / not usable <4.19 MiB
  Allocatable           yes (but full)
  PE Size               32.00 MiB
  Total PE              178613
  Free PE               0
  Allocated PE          178613
  PV UUID               izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
When I run vgchange -ay c I get this output:

Code:
vgchange -ay c
  device-mapper: resume ioctl on  (253:3) failed: Invalid argument
  Unable to resume c-c (253:3)
  1 logical volume(s) in volume group "c" now active
I also have this in /var/log/messages:

Code:
Dec 14 18:14:48 localhost kernel: device-mapper: table: 253:3: md127 too small for target: start=384, len=11705581568, dev_size=11705573376
This seems to indicate that the logical volume is mapped larger than the underlying md device. I checked this out and got this:

Code:
[root@localhost ~]# lvs --partial --segments -o+devices /dev/c/c
  PARTIAL MODE. Incomplete logical volumes will be processed.
  WARNING: Cannot find matching striped segment for c/c.
  LV   VG Attr       #Str Type   SSize Devices
  c    c  -wi-XX--X-    1 linear 5.45t /dev/md127(0)
[root@localhost ~]# blockdev --getsize64 /dev/md127
5993253568512
[root@localhost ~]# pvdisplay --units=b /dev/md127
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               c
  PV Size               5993257959424 B  / not usable 4390912 B
  Allocatable           yes (but full)
  PE Size               33554432 B
  Total PE              178613
  Free PE               0
  Allocated PE          178613
  PV UUID               izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4

[root@localhost ~]#
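The difference is easy to check by hand from the two outputs above:

Code:
# PV size reported by LVM minus the size of the underlying block device
echo $(( 5993257959424 - 5993253568512 ))    # 4390912 bytes, roughly 4.19 MiB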
It looks like the PV is about 4 MiB larger than the underlying device, which is what is generating the error. Trying to mount it with fuseext2 gives this:

Code:
[root@localhost ~]# fuseext2 -o ro -o sync_read /dev/c/c /root/mnt
Open_ext2 Error:2
[root@localhost ~]#
Does anyone have any suggestions?

Thanks.

Last edited by JWolberg; 01-02-2018 at 04:08 PM.
 
Old 12-30-2017, 06:23 AM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,140

Rep: Reputation: 1263
I would try making a new logical volume of the full size and then copying the short one onto it with dd. You shouldn't need to match the RAID or PV layout; the new volume could sit on a RAID 0 built from whatever disks you have available.
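Roughly something like this, where "rescue" is just a placeholder VG with enough free space (and assuming the source LV can actually be activated):

Code:
# create a destination LV at least as large as the 5.45 TiB source
lvcreate -L 5.6T -n recovery rescue
# block-copy the old LV onto the new one
dd if=/dev/c/c of=/dev/rescue/recovery bs=4M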
 
Old 12-30-2017, 06:35 AM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,126

Rep: Reputation: 4120
I'm not sure you can even access the LV in order to copy it. I had similar thoughts, though, about imaging the disks, expanding the images, and assembling those as a RAID.

Been waiting for rknichols to join in actually ...
 
Old 12-30-2017, 10:05 AM   #4
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by syg00 View Post
Been waiting for rknichols to join in actually ...
Lest my name be taken in vain, ...

What I would try is to use dmsetup to create a new device consisting of the existing /dev/md127 plus an extra ~5MB from either /dev/zero or perhaps a file.
Code:
read x Sectors junk < <(dmsetup status /dev/md127)
echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 zero" | dmsetup create zzbigdev
or
Code:
read x Sectors junk < <(dmsetup status /dev/md127)
truncate --size=$((10240*512)) /var/tmp/padfile
Loop=$(losetup -f --show /var/tmp/padfile)
echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 linear $Loop 0" | dmsetup create zzbigdev
Duplicate PVs will now be found by pvs. Hopefully it will use the last one seen, /dev/mapper/zzbigdev.

Note that the space between the two "<" characters in those read commands is essential.
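Once zzbigdev exists you can sanity-check it before letting LVM at it; these are just the standard query commands:

Code:
dmsetup table zzbigdev                      # should show the linear segment plus the padding segment
blockdev --getsize64 /dev/mapper/zzbigdev   # should be ~5 MiB larger than /dev/md127
blockdev --getsize64 /dev/md127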
 
1 members found this post helpful.
Old 12-30-2017, 11:17 AM   #5
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
Lest my name be taken in vain, ...

What I would try is to use dmsetup to create a new device consisting of the existing /dev/md127 plus an extra ~5MB from either /dev/zero or perhaps a file.
Code:
read x Sectors junk < <(dmsetup status /dev/md127)
echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 zero" | dmsetup create zzbigdev
or
Code:
read x Sectors junk < <(dmsetup status /dev/md127)
truncate --size=$((10240*512)) /var/tmp/padfile
Loop=$(losetup -f --show /var/tmp/padfile)
echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 linear $Loop 0" | dmsetup create zzbigdev
Duplicate PVs will now be found by pvs. Hopefully it will use the last one seen, /dev/mapper/zzbigdev.

Note that the space between the two "<" characters in those read commands is essential.
I'll give this a try when I am back in the office on Tuesday. I appreciate the assistance and the commands to use.
 
Old 12-30-2017, 01:38 PM   #6
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Note that if pvs does select the wrong device, you can construct a filter in /etc/lvm/lvm.conf to ignore that device. See 6.8. Duplicate PV Warnings for Multipathed Devices for details.
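Something along these lines in the devices section would do it (only a sketch; adjust the pattern to whatever device you want LVM to ignore):

Code:
# /etc/lvm/lvm.conf
devices {
    # reject the raw md device so only /dev/mapper/zzbigdev is scanned;
    # use global_filter instead if lvmetad is doing the scanning
    filter = [ "r|^/dev/md127$|", "a|.*|" ]
}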
 
Old 01-02-2018, 12:00 PM   #7
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
Note that if pvs does select the wrong device, you can construct a filter in /etc/lvm/lvm.conf to ignore that device. See 6.8. Duplicate PV Warnings for Multipathed Devices for details.
Alright. Gave this a shot:

Code:
[root@localhost ~]# read x Sectors junk < <(dmsetup status /dev/md127)
Device md127 not found
Command failed
[root@localhost ~]# dmsetup status
c-c:
Looks like it's not present.

Code:
[root@localhost ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               c
  PV Size               5.45 TiB / not usable <4.19 MiB
  Allocatable           yes (but full)
  PE Size               32.00 MiB
  Total PE              178613
  Free PE               0
  Allocated PE          178613
  PV UUID               izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4

[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/c/c
  LV Name                c
  VG Name                c
  LV UUID                NlKR34-7CHi-mTLU-McCD-bvOF-3M9m-gqC0TP
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              suspended
  # open                 0
  LV Size                5.45 TiB
  Current LE             178613
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

[root@localhost ~]#
The device is present in /dev, but dmsetup doesn't seem to see it. I ran the create command anyway:

Code:
[root@localhost dev]# echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 linear $Loop 0" | dmsetup create zzbigdev
Invalid format on line 1 of table on stdin
Command failed
[root@localhost dev]# pvs
  PV         VG Fmt  Attr PSize PFree
  /dev/md127 c  lvm2 a--  5.45t    0



The lvm2-pvscan unit for the device also shows the activation failing:

Code:
[root@localhost dev]# systemctl status lvm2-pvscan@9\:127.service -l
● lvm2-pvscan@9:127.service - LVM2 PV scan on device 9:127
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2018-01-02 10:44:53 PST; 13min ago
     Docs: man:pvscan(8)
  Process: 4070 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i (code=exited, status=5)
 Main PID: 4070 (code=exited, status=5)

Jan 02 10:44:53 localhost.localdomain systemd[1]: Starting LVM2 PV scan on device 9:127...
Jan 02 10:44:53 localhost.localdomain lvm[4070]: device-mapper: resume ioctl on  (253:0) failed: Invalid argument
Jan 02 10:44:53 localhost.localdomain lvm[4070]: Unable to resume c-c (253:0)
Jan 02 10:44:53 localhost.localdomain lvm[4070]: 1 logical volume(s) in volume group "c" now active
Jan 02 10:44:53 localhost.localdomain lvm[4070]: c: autoactivation failed.
Jan 02 10:44:53 localhost.localdomain systemd[1]: lvm2-pvscan@9:127.service: main process exited, code=exited, status=5/NOTINSTALLED
Jan 02 10:44:53 localhost.localdomain systemd[1]: Failed to start LVM2 PV scan on device 9:127.
Jan 02 10:44:53 localhost.localdomain systemd[1]: Unit lvm2-pvscan@9:127.service entered failed state.
Jan 02 10:44:53 localhost.localdomain systemd[1]: lvm2-pvscan@9:127.service failed.
[root@localhost dev]#

Any idea?

Last edited by JWolberg; 01-02-2018 at 04:09 PM.
 
Old 01-02-2018, 03:48 PM   #8
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Please, please, please do not use [QUOTE]...[/QUOTE] tags for any purpose other than including text from a previous posting. It makes it very difficult to quote from your message in a reply.

Sorry, dmsetup is the wrong command for querying an MD software RAID device. See what "mdadm --detail /dev/md127" reports for "Array Size". Divide that number by 512 and set the Sectors variable to that number.

The "dmsetup create" command is of course going to fail if Sectors has not been set in that echo command.
 
Old 01-02-2018, 04:01 PM   #9
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
I apologize for the quotes. I thought it might make the command output easier to read.

Code:
[root@localhost /]# mdadm --detail /dev/md127 | grep Array
        Array Size : 5852786688 (5581.65 GiB 5993.25 GB)
[root@localhost /]#
which works out to be 11431224 sectors.

I set the proper variable and re-ran the command:

Code:
[root@localhost /]# echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 linear $Loop 0" | dmsetup create zzbigdev
[root@localhost /]# pvs
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: PV izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4 on /dev/md127 was already found on /dev/mapper/zzbigdev.
  WARNING: PV izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4 prefers device /dev/md127 because device size is correct.
  PV         VG Fmt  Attr PSize PFree
  /dev/md127 c  lvm2 a--  5.45t    0
[root@localhost /]#

I'm not entirely sure what the proper filter would be in this case in /etc/lvm/lvm.conf.

Last edited by JWolberg; 01-02-2018 at 04:10 PM.
 
Old 01-02-2018, 04:05 PM   #10
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,126

Rep: Reputation: 4120
Quote:
Originally Posted by JWolberg View Post
I apologize for the quotes. I thought it might have made the command output easier to read.
Use [code] tags instead - yes, for output as well as code.
 
Old 01-02-2018, 04:11 PM   #11
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by syg00 View Post
Use [code] tags instead - yes, for output as well as code.
I changed all of my quotes to codes. Thanks for the heads up.
 
Old 01-02-2018, 04:29 PM   #12
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
Also, here's a quick output of lsblk showing the duplicate UUID:

Code:
[root@localhost /]# lsblk -f
lsblk: dm-0: failed to get device path
NAME       FSTYPE        LABEL UUID                                   MOUNTPOINT
sda
├─sda1     linux_raid_me       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sda2     linux_raid_me       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sda3     linux_raid_me       c423356a-6ccf-4040-8380-0237772ec525
  └─md127  LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
           LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sdb
├─sdb1     linux_raid_me       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sdb2     linux_raid_me       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sdb3     linux_raid_me       c423356a-6ccf-4040-8380-0237772ec525
  └─md127  LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
           LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sdc
├─sdc1     linux_raid_me       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sdc2     linux_raid_me       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sdc3     linux_raid_me       c423356a-6ccf-4040-8380-0237772ec525
  └─md127  LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
           LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sdd
├─sdd1     linux_raid_me       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sdd2     linux_raid_me       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sdd3     linux_raid_me       c423356a-6ccf-4040-8380-0237772ec525
  └─md127  LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
           LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sde
├─sde1     xfs                 938e082d-2f80-42e0-9738-b4c2e497eb4f   /boot
├─sde2     swap                9a85f1c1-27c4-40fa-86da-1068494bd393   [SWAP]
├─sde3     xfs                 ab32bd8f-f600-49cc-a27a-9201c64a0f60   /
├─sde4
└─sde5     xfs                 149009a3-f7f4-47d1-8186-3b22a947c114   /home
loop0
└─zzbigdev LVM2_member         izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
loop1
[root@localhost /]#
 
Old 01-02-2018, 04:53 PM   #13
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
Okay, I figured out the filter but it looks like the size doesn't match:

Code:
[root@localhost /]# pvs
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: Device /dev/mapper/zzbigdev has size of 11441464 sectors which is smaller than corresponding PV size of 11705573376 sectors. Was device resized?
  One or more devices used as PVs in VG c have changed sizes.
  PV                   VG Fmt  Attr PSize PFree
  /dev/mapper/zzbigdev c  lvm2 a--  5.45t    0
[root@localhost /]#
It looks like maybe my math was wrong when I created the sectors variable?
 
Old 01-02-2018, 06:16 PM   #14
JWolberg
LQ Newbie
 
Registered: Dec 2017
Posts: 13

Original Poster
Rep: Reputation: Disabled
Okay. I managed to get it mounted.

Code:
[root@localhost /]# export Sectors=11705573376
[root@localhost /]# echo $Sectors
11705573376
[root@localhost /]# echo -e "0 $Sectors linear /dev/md127 0\\n$Sectors 10240 zero" | dmsetup create zzbigdev
[root@localhost /]# pvs
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: PV izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4 on /dev/md127 was already found on /dev/mapper/zzbigdev.
  WARNING: PV izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4 prefers device /dev/md127 because device size is correct.
  PV         VG Fmt  Attr PSize PFree
  /dev/md127 c  lvm2 a--  5.45t    0
[root@localhost /]# vi /etc/lvm/lvm.conf
[root@localhost /]# pvs
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  PV                   VG Fmt  Attr PSize PFree
  /dev/mapper/zzbigdev c  lvm2 a--  5.45t    0
[root@localhost /]# lsblk -f
lsblk: dm-0: failed to get device path
NAME      FSTYPE         LABEL UUID                                   MOUNTPOINT
sda
├─sda1    linux_raid_mem       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sda2    linux_raid_mem       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sda3    linux_raid_mem       c423356a-6ccf-4040-8380-0237772ec525
  └─md127 LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
          LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sdb
├─sdb1    linux_raid_mem       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sdb2    linux_raid_mem       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sdb3    linux_raid_mem       c423356a-6ccf-4040-8380-0237772ec525
  └─md127 LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
          LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sdc
├─sdc1    linux_raid_mem       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sdc2    linux_raid_mem       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sdc3    linux_raid_mem       c423356a-6ccf-4040-8380-0237772ec525
  └─md127 LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
          LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sdd
├─sdd1    linux_raid_mem       405d4d6e-3d7b-3855-38c5-5bffbe3d3acf
├─sdd2    linux_raid_mem       9de95ef7-25d0-d5d1-f949-8eac13ed460e
└─sdd3    linux_raid_mem       c423356a-6ccf-4040-8380-0237772ec525
  └─md127 LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
    └─zzbigdev
          LVM2_member          izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4
sde
├─sde1    xfs                  938e082d-2f80-42e0-9738-b4c2e497eb4f   /boot
├─sde2    swap                 9a85f1c1-27c4-40fa-86da-1068494bd393   [SWAP]
├─sde3    xfs                  ab32bd8f-f600-49cc-a27a-9201c64a0f60   /
├─sde4
└─sde5    xfs                  149009a3-f7f4-47d1-8186-3b22a947c114   /home
loop0
loop1
loop2
[root@localhost /]# pvdisplay
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  --- Physical volume ---
  PV Name               /dev/mapper/zzbigdev
  VG Name               c
  PV Size               5.45 TiB / not usable <4.19 MiB
  Allocatable           yes (but full)
  PE Size               32.00 MiB
  Total PE              178613
  Free PE               0
  Allocated PE          178613
  PV UUID               izLiST-kTJw-E77o-8UIf-d6J7-tDt2-ZjKGQ4

[root@localhost /]# vgchange -ay c
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  Attempted to decrement suspended device counter below zero.
  1 logical volume(s) in volume group "c" now active
[root@localhost /]# fuseext2 -o ro -o sync_read /dev/c/c /mnt/lvm/
[root@localhost /]# ls -lah /mnt/lvm/
total 112K
drwxrwxrwx.  6 root root       16K Nov 14 16:26 .
drwxr-xr-x.  3 root root        17 Jan  2 13:17 ..
-rw-------.  1 root root      7.0K Nov 14 16:26 aquota.group
-rw-------.  1 root root      7.0K Nov 14 16:27 aquota.user
drwxr-xr-x.  2   98        98  16K Jul 14  2016 home
drwxrwxrwx. 19 1000 nfsnobody  16K Sep  2 13:38 homeshare
drwx------.  2 root root       16K Jul 14  2016 lost+found
drwxr-xr-x.  2   96 root       16K Jul 14  2016 .timemachine
[root@localhost /]#
I'd love to know wtf happened to get it in this state in the first place. The good news is that at least I can recover my data.
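From here it should just be a matter of copying everything off and then tearing down the temporary mapping; something like this (the destination path is a placeholder):

Code:
rsync -aH --progress /mnt/lvm/ /mnt/newstorage/   # preserve owners, perms and hard links
umount /mnt/lvm
vgchange -an c
dmsetup remove zzbigdev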
 
1 members found this post helpful.
Old 01-02-2018, 06:27 PM   #15
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
No, I'm miscalculating. Sorry. I just hate commands that don't state what their units are, either in the output or on the manpage. Now I realize the needed number was right in front of me all the time in post #1:
Quote:
Originally Posted by JWolberg View Post
I also have this in /var/log/messages:

Code:
Dec 14 18:14:48 localhost kernel: device-mapper: table: 253:3: md127 too small for target: start=384, len=11705581568, dev_size=11705573376
The right value for Sectors is the dev_size, 11705573376.
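The numbers actually line up once the units are known: mdadm reports Array Size in 1 KiB blocks, so the conversion is a multiply by 2, not a divide by 512.

Code:
# 5852786688 KiB * 1024 = 5993253568512 bytes   (matches blockdev --getsize64 /dev/md127)
# 5993253568512 / 512   = 11705573376 sectors   (matches dev_size in the kernel log)
echo $(( 5852786688 * 2 ))    # 11705573376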

I see you figured that out while I was posting.

No idea how it got that way. Perhaps the OS in the NAS was mapping something else into the end of that RAID device.

Glad to see you got there in the end.

Last edited by rknichols; 01-02-2018 at 06:31 PM. Reason: I see you figured that out ...
 
2 members found this post helpful.
  

