Old 04-27-2017, 09:42 PM   #1
^andrea^
Member
 
Registered: Mar 2011
Distribution: Arch Linux
Posts: 53

Rep: Reputation: 0
Extending partition with lvextend failed


Hi,

I have a Debian 7-based physical machine on which extending an LVM volume just failed.
I've run this same command several times in the past without any issue, but today it decided to fail like so:

root@tux:~# lvextend -L +100G /dev/mapper/storage-data
Extending logical volume data to 4.79 TiB
device-mapper: resume ioctl on failed: Invalid argument
Unable to resume storage-data (253:1)
Problem reactivating data
libdevmapper exiting with 1 device(s) still suspended.

"storage" is the volume group and "data" the partition name.

/dev/mapper/storage-data is mounted on /data, and whenever I try to run an "ls" command (or similar) in there, the terminal simply hangs and never comes back.
Even "vgs" hangs!

I have not tried rebooting the box yet, as I'm not in the same location.

What would you suggest I do to try to recover the machine/volume/data?

Any suggestion would be much appreciated.

Regards,
Andrea
 
Old 04-29-2017, 07:32 AM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,147

Rep: Reputation: 1264
Back up everything under /etc/lvm. You may need to replace the current LVM configuration with the previous archived version; if so, you would do that with vgcfgrestore. Before doing anything, list the current states and sizes of all block devices (lsblk) and volumes (/etc/lvm/backup) and post them to this thread.
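
A minimal sketch of those steps (run as root; the backup copy name is just an example):

#keep a safe copy of the LVM metadata backups and archives before touching anything
cp -a /etc/lvm /etc/lvm.bak

#current states and sizes of all block devices
lsblk

#human-readable LVM metadata backups, one file per volume group
ls -l /etc/lvm/backup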
 
1 member found this post helpful.
Old 04-29-2017, 05:24 PM   #3
^andrea^
Member
 
Registered: Mar 2011
Distribution: Arch Linux
Posts: 53

Original Poster
Rep: Reputation: 0
#backup taken
cp -a /etc/lvm/ /etc/lvm.20170429.bak


#lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
`-sda1 8:1 0 1.8T 0 part
`-md127 9:127 0 5.5T 0 raid6
|-storage-data (dm-1) 253:1 0 4.7T 0 lvm /data
|-storage-backup (dm-2) 253:2 0 25G 0 lvm /backup
`-storage-home (dm-3) 253:3 0 225G 0 lvm /home
sdb 8:16 0 1.8T 0 disk
`-sdb1 8:17 0 1.8T 0 part
`-md127 9:127 0 5.5T 0 raid6
|-storage-data (dm-1) 253:1 0 4.7T 0 lvm /data
|-storage-backup (dm-2) 253:2 0 25G 0 lvm /backup
`-storage-home (dm-3) 253:3 0 225G 0 lvm /home
sdc 8:32 0 1.8T 0 disk
`-sdc1 8:33 0 1.8T 0 part
`-md127 9:127 0 5.5T 0 raid6
|-storage-data (dm-1) 253:1 0 4.7T 0 lvm /data
|-storage-backup (dm-2) 253:2 0 25G 0 lvm /backup
`-storage-home (dm-3) 253:3 0 225G 0 lvm /home
sdd 8:48 0 7.3T 0 disk
`-sdd1 8:49 0 7.3T 0 part
sde 8:64 0 7.3T 0 disk
|-sde1 8:65 0 5.5T 0 part
`-sde2 8:66 0 1.8T 0 part
`-md127 9:127 0 5.5T 0 raid6
|-storage-data (dm-1) 253:1 0 4.7T 0 lvm /data
|-storage-backup (dm-2) 253:2 0 25G 0 lvm /backup
`-storage-home (dm-3) 253:3 0 225G 0 lvm /home
sdf 8:80 0 1.8T 0 disk
`-sdf1 8:81 0 1.8T 0 part
`-md127 9:127 0 5.5T 0 raid6
|-storage-data (dm-1) 253:1 0 4.7T 0 lvm /data
|-storage-backup (dm-2) 253:2 0 25G 0 lvm /backup
`-storage-home (dm-3) 253:3 0 225G 0 lvm /home
sdg 8:96 0 111.8G 0 disk
|-sdg1 8:97 0 511M 0 part /boot
`-sdg2 8:98 0 111.3G 0 part
|-pve-root (dm-0) 253:0 0 10G 0 lvm /
|-pve-swap (dm-4) 253:4 0 7G 0 lvm [SWAP]
`-pve-data (dm-5) 253:5 0 80G 0 lvm /var/lib/vz
sdh 8:112 0 111.8G 0 disk
`-sdh1 8:113 0 111.8G 0 part


ls -lah /etc/lvm/backup
total 16K
drwx------ 2 root root 4.0K Apr 30 00:04 .
drwxr-xr-x 5 root root 4.0K Sep 11 2013 ..
-rw------- 1 root root 2.3K Apr 16 02:01 pve
-rw------- 1 root root 2.9K Apr 26 03:10 storage


This is the content related to the "data" logical volume in /etc/lvm/backup/storage:

data {
id = "SHE4qN-7QHe-4jRY-ZQcK-zXnx-VEWE-tWfdLT"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "tux"
creation_time = 1369919038 # 2013-05-30 15:03:58 +0200
segment_count = 5

segment1 {
start_extent = 0
extent_count = 12400 # 3.02734 Terabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv0", 40
]
}
segment2 {
start_extent = 12400
extent_count = 2000 # 500 Gigabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv0", 13360
]
}
segment3 {
start_extent = 14400
extent_count = 1200 # 300 Gigabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv0", 15560
]
}
segment4 {
start_extent = 15600
extent_count = 2400 # 600 Gigabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv0", 18180
]
}
[..]

Please let me know whether you need anything else, and what the possible next steps are.

Your help is much appreciated.

Regards,
Andrea

Last edited by ^andrea^; 04-30-2017 at 06:12 AM.
 
Old 05-01-2017, 07:42 AM   #4
mikenash
Member
 
Registered: Dec 2014
Posts: 84

Rep: Reputation: Disabled
I believe you need to resize the filesystem after the lvextend.
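
For example (a sketch assuming an ext4 filesystem on the volume, which the use of resize2fs later in the thread implies):

#grow the LV and the filesystem in one step (-r runs fsadm/resize2fs for you)
lvextend -r -L +100G /dev/mapper/storage-data

#or separately, after a plain lvextend
resize2fs /dev/mapper/storage-data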
 
Old 05-01-2017, 03:43 PM   #5
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,147

Rep: Reputation: 1264
I'm confused about your RAID. Can you post the contents of /proc/mdstat?
 
Old 05-03-2017, 04:04 AM   #6
^andrea^
Member
 
Registered: Mar 2011
Distribution: Arch Linux
Posts: 53

Original Poster
Rep: Reputation: 0
Hi,

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 sde2[10] sdb1[7] sdc1[8] sda1[6] sdf1[1]
5860121856 blocks super 1.2 level 6, 128k chunk, algorithm 2 [5/5] [UUUUU]
bitmap: 0/466 pages [0KB], 2048KB chunk, file: /var/mdadm_storage_bitmap.bin

unused devices: <none>


I can explain the "weird" RAID setup. :-)
I am planning to move from a 5x2TB mdadm RAID 6 to a 2x8TB RAID 1 (possibly BTRFS).
So the last time a 2TB drive failed, I replaced it with an 8TB drive instead (using a 2TB partition on it) while I plan the migration properly.

Regards,
Andrea
 
Old 05-03-2017, 07:09 PM   #7
^andrea^
Member
 
Registered: Mar 2011
Distribution: Arch Linux
Posts: 53

Original Poster
Rep: Reputation: 0
I tried listing all the backups with vgcfgrestore:
vgcfgrestore -l storage
File: /etc/lvm/archive/storage_00110-671336618.vg

This is all I get before it hangs.

Basically every time I try to run any LVM command it hangs forever, and I need to start a new shell.

Any idea how to get back control of LVM?
Unfortunately / runs on LVM too, even though it's on a different volume group.

Regards,
Andrea
 
Old 05-03-2017, 10:30 PM   #8
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,781

Rep: Reputation: 2214
Does "dmsetup info" report any device in the SUSPENDED state? That would explain any command hanging forever when it tried to do any I/O to that device. You can try running "dmsetup resume" on that suspended device, but without knowing just what went wrong with the "resume" when you originally ran the "lvextend" command it's hard to guess what will happen.

If there is a SUSPENDED device, do not try a normal reboot: the shutdown will hang forever on the suspended device, and only a forced reset or power-off will recover the system. Forcing a reboot that way would clear the suspended device, though.
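
A minimal way to check with dmsetup alone (run as root):

#print each device-mapper device's name and state; look for "State: SUSPENDED"
dmsetup info | grep -E '^(Name|State)'

#then try to resume the suspended device, e.g.
dmsetup resume storage-data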
 
1 member found this post helpful.
Old 05-04-2017, 06:16 AM   #9
^andrea^
Member
 
Registered: Mar 2011
Distribution: Arch Linux
Posts: 53

Original Poster
Rep: Reputation: 0
Hi rknichols,

Yes, the LVM volume I ran lvextend on is in the SUSPENDED state:

#dmsetup info
Name: storage-data
State: SUSPENDED
Read Ahead: 1536
Tables present: LIVE
Open count: 7
Event number: 0
Major, minor: 253, 1
Number of targets: 5
UUID: LVM-Wc2vUQmyzR2Qgn3ysIGVf04Uv5V2SdWGSHE4qN7QHe4jRYZQcKzXnxVEWEtWfdLT
[..]


Also, checking dmesg I found:
"device-mapper: table: 253:1: md127 too small for target: start=10894705152, len=855638016, dev_size=11720243712"

Not sure why it would say this, as the RAID device is bigger than what I've asked for.
And even if it wasn't, LVM normally tells you it cannot extend the volume because not enough extents are available...
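
(In hindsight, the numbers in that message do add up; they are 512-byte sectors:)

#the extended dm target would end past the underlying device
echo $((10894705152 + 855638016))   #=> 11750343168, but dev_size is only 11720243712
#and dev_size matches the real array size: 11720243712 sectors * 512 = ~6000.76 GB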

I need to understand how to revert this first, and then why it happened in the first place.
The investigation continues. Thanks all for your help.

Regards,
Andrea

Last edited by ^andrea^; 05-04-2017 at 12:14 PM.
 
Old 05-04-2017, 08:48 AM   #10
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,781

Rep: Reputation: 2214
I would try "dmsetup resume storage-data". Then you should be able to use vgcfgrestore to put the LVM configuration back into its previous, working state. That should all be safe since you did not enlarge the actual filesystem. You can look at the "description = ..." lines in the files in /etc/lvm/backup to determine which configuration file to use for the restore.
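
A sketch of that sequence (the archive file name is a placeholder; pick the one whose description says it was created *before* the failed lvextend):

#resume the suspended device so LVM commands stop hanging
dmsetup resume storage-data

#list the archived configurations for the volume group
vgcfgrestore -l storage

#restore the chosen archive (placeholder name)
vgcfgrestore --file /etc/lvm/archive/<archive-file>.vg storage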

I have no idea what would cause the size anomaly on the RAID device.
 
1 member found this post helpful.
Old 05-06-2017, 12:14 AM   #11
^andrea^
Member
 
Registered: Mar 2011
Distribution: Arch Linux
Posts: 53

Original Poster
Rep: Reputation: 0
I think we nailed it! :-D

So, "dmsetup resume storage-data" gave me back control as it re-activated storage-data.

The last two entries of vgcfgrestore on "storage" were:
#vgcfgrestore -l storage
File: /etc/lvm/archive/storage_00119-1931210994.vg
VG name: storage
Description: Created *before* executing 'lvextend -L +100G /dev/mapper/storage-data'
Backup Time: Wed Apr 26 03:10:55 2017

File: /etc/lvm/backup/storage
VG name: storage
Description: Created *after* executing 'lvextend -L +100G /dev/mapper/storage-data'
Backup Time: Wed Apr 26 03:10:55 2017

So I ran:
#vgcfgrestore -v --file /etc/lvm/archive/storage_00119-1931210994.vg storage
Restored volume group storage
which worked fine.


In the meantime I've also found out what actually happened.
mdadm shows the correct size of the array as ~6TB
#mdadm --detail /dev/md/storage|grep -i 'array size'
Array Size : 5860121856 (5588.65 GiB 6000.76 GB)

while LVM thinks it's over 7TB:
#pvs
PV VG Fmt Attr PSize PFree
/dev/md127 storage lvm2 a-- 7.28t 2.24t
/dev/sdg2 pve lvm2 a-- 111.29g 14.29g


This is because /dev/md127, at some point in the past, was a 6x2TB array, which I then reduced to 5x2TB without letting LVM know. :-/
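
(A quick way to spot a mismatch like this is to compare the kernel's idea of the device size with LVM's, e.g.:)

#actual size of the md device, in 512-byte sectors
blockdev --getsz /dev/md127

#what LVM thinks the PV size is, also in sectors; larger means stale metadata
pvs --units s -o pv_name,pv_size /dev/md127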

I have now fixed it with:
#pvresize -v /dev/md127
Using physical volume(s) on command line
Archiving volume group "storage" metadata (seqno 118).
Resizing volume "/dev/md127" to 15626991104 sectors.
Resizing physical volume /dev/md127 from 0 to 22354 extents.
Updating physical volume "/dev/md127"
Creating volume group backup "/etc/lvm/backup/storage" (seqno 119).
Physical volume "/dev/md127" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

And here is the correct size shown by LVM:
#pvs
PV VG Fmt Attr PSize PFree
/dev/md127 storage lvm2 a-- 5.46t 530.50g
/dev/sdg2 pve lvm2 a-- 111.29g 14.29g


Now lvextend works again:
#lvextend -L +100G /dev/mapper/storage-data
Extending logical volume data to 4.80 TiB
Logical volume data successfully resized
and then extended the filesystem:
#resize2fs /dev/mapper/storage-data
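
(To double-check before rebooting, something like this should show the new sizes:)

#confirm the LV and the mounted filesystem both report the extra 100G
lvs storage
df -h /data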


I have not rebooted yet, but it all looks promising, so I'm gonna mark this as solved.

Thank you all for the support!

Regards,
Andrea
 
Old 05-06-2017, 09:05 AM   #12
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,781

Rep: Reputation: 2214
Sounds good! Great detective work on the RAID size issue, too! Thanks for the follow-up.
 
  



Tags
data recovery, data-loss, lvm, lvm2, storage


