Linux - Server — This forum is for the discussion of Linux software used in a server-related context.
I have a Debian 7 based physical machine which failed to extend an LVM partition.
I've run this same command several times in the past without any issue, but today it failed like so:
root@tux:~# lvextend -L +100G /dev/mapper/storage-data
Extending logical volume data to 4.79 TiB
device-mapper: resume ioctl on failed: Invalid argument
Unable to resume storage-data (253:1)
Problem reactivating data
libdevmapper exiting with 1 device(s) still suspended.
"storage" is the volume group and "data" is the logical volume name.
/dev/mapper/storage-data is mounted on /data, and whenever I try to run an "ls" command (or similar) in there, the terminal simply hangs and never comes back.
Even "vgs" hangs!
I have not tried rebooting the box yet, as I'm not in the same location.
What would you suggest I should be doing to try to recover the machine/partition/data please?
Back up everything under /etc/lvm. You may need to replace the current lvm configuration with the previous archived version. You will do this with vgcfgrestore if needed. Before doing anything list the current states and sizes of all block devices (lsblk) and volumes (/etc/lvm/backup) and post them to this thread.
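As a concrete sketch of that backup step (the directory name is just an illustration; the /etc/lvm paths are the Debian defaults):

```shell
# Hypothetical sketch: snapshot the LVM metadata and the current
# block-device layout before attempting any recovery (run as root).
backup_dir="$HOME/lvm-state-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$backup_dir"
# /etc/lvm holds lvm.conf plus the backup/ and archive/ metadata copies
cp -a /etc/lvm "$backup_dir/" 2>/dev/null || true
# record the current block-device layout, for posting to the thread
lsblk > "$backup_dir/lsblk.txt" 2>/dev/null || true
echo "state saved under $backup_dir"
```

The copies under $backup_dir are what you would fall back on if a later vgcfgrestore made things worse rather than better.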
I can explain the "weird" RAID setup. :-)
I am planning to move from a 5x2TB mdadm RAID 6 to a 2x8TB RAID 1 (possibly BTRFS).
So last time a 2TB drive failed, I replaced it with an 8TB instead (with a 2TB partition), while I better plan the migration.
Does "dmsetup info" report any device in the SUSPENDED state? That would explain any command hanging forever when it tried to do any I/O to that device. You can try running "dmsetup resume" on that suspended device, but without knowing just what went wrong with the "resume" when you originally ran the "lvextend" command it's hard to guess what will happen.
If there is a SUSPENDED device, do not attempt a normal reboot: the shutdown will itself hang forever on the suspended device. Only a forced reset or power-off will recover the system, and a forced restart that way does bring the suspended device back up normally.
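A small sketch of that check (the parsing helper is my own suggestion, not from the thread): it reads "dmsetup info" output and prints only the devices whose State is SUSPENDED, i.e. the candidates for "dmsetup resume".

```shell
# Hypothetical helper: read "dmsetup info" output on stdin and print the
# names of any devices reported as SUSPENDED.
list_suspended() {
    awk '/^Name:/ { name = $2 }
         /^State:/ && /SUSPENDED/ { print name }'
}
# Usage on a live system (needs root):
#   dmsetup info | list_suspended
#   dmsetup resume <name>    # for each name printed
```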
Yes, the LVM volume I run lvextend on is in SUSPENDED state:
#dmsetup info
Name: storage-data
State: SUSPENDED
Read Ahead: 1536
Tables present: LIVE
Open count: 7
Event number: 0
Major, minor: 253, 1
Number of targets: 5
UUID: LVM-Wc2vUQmyzR2Qgn3ysIGVf04Uv5V2SdWGSHE4qN7QHe4jRYZQcKzXnxVEWEtWfdLT
[..]
Also, checking dmesg I found:
"device-mapper: table: 253:1: md127 too small for target: start=10894705152, len=855638016, dev_size=11720243712"
Not sure why it would say this, as the RAID device is bigger than what I asked for.
And even if it weren't, LVM normally tells you it cannot extend the volume when not enough extents are available...
I need to understand how to revert this first, and then why it happened in the first place.
The investigation continues. Thanks all for your help.
I would try "dmsetup resume storage-data". Then you should be able to use vgcfgrestore to put the LVM configuration back into its previous, working state. That should all be safe since you did not enlarge the actual filesystem. You can look at the "description = ..." lines in the files in /etc/lvm/backup to determine which configuration file to use for the restore.
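For the "description = ..." check, something like the following (the grep pattern is my suggestion; /etc/lvm/archive is the Debian default location, and it usually holds more history than /etc/lvm/backup):

```shell
# Hypothetical sketch: list each archived metadata file together with its
# "description" line, which records the command that produced it, so the
# last-known-good snapshot can be picked for vgcfgrestore.
grep -H 'description' /etc/lvm/archive/*.vg 2>/dev/null | tail -n 5
# Then restore the chosen file (filename below is illustrative):
#   vgcfgrestore -v --file /etc/lvm/archive/<chosen-file>.vg storage
```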
I have no idea what would cause the size anomaly on the RAID device.
So I ran:
#vgcfgrestore -v --file /etc/lvm/archive/storage_00119-1931210994.vg storage
Restored volume group storage
which worked fine.
In the meantime I've also found out what actually happened.
mdadm shows the correct size of the array as ~6TB:
#mdadm --detail /dev/md/storage|grep -i 'array size'
Array Size : 5860121856 (5588.65 GiB 6000.76 GB)
This is because /dev/md127, at some point in the past, was a 6x2TB array, which I then reduced to 5x2TB without letting LVM know. :-/
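One way to catch that kind of drift before it bites (the helper and the one-extent slack are my own sketch, not from the thread): compare the kernel's size for the md device with the size LVM has recorded for the PV, and treat any larger mismatch as a sign that pvresize is needed.

```shell
# Hypothetical helper: do the kernel's md device size and LVM's recorded
# PV size agree, within one default 4 MiB extent of slack?
pv_matches_md() {   # args: md_size_bytes pv_size_bytes
    local diff=$(( $1 - $2 ))
    [ "${diff#-}" -le $(( 4 * 1024 * 1024 )) ]
}
# On a live system the inputs would come from, e.g.:
#   md_bytes=$(( $(cat /sys/block/md127/size) * 512 ))   # /sys sizes are 512-byte sectors
#   pv_bytes=$(pvs --noheadings --units b -o pv_size /dev/md127 | tr -dc '0-9')
```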
I have now fixed it with:
#pvresize -v /dev/md127
Using physical volume(s) on command line
Archiving volume group "storage" metadata (seqno 118).
Resizing volume "/dev/md127" to 15626991104 sectors.
Resizing physical volume /dev/md127 from 0 to 22354 extents.
Updating physical volume "/dev/md127"
Creating volume group backup "/etc/lvm/backup/storage" (seqno 119).
Physical volume "/dev/md127" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
And here is the correct size shown by LVM:
#pvs
PV VG Fmt Attr PSize PFree
/dev/md127 storage lvm2 a-- 5.46t 530.50g
/dev/sdg2 pve lvm2 a-- 111.29g 14.29g
Now lvextend works again:
#lvextend -L +100G /dev/mapper/storage-data
Extending logical volume data to 4.80 TiB
Logical volume data successfully resized
and then extended the filesystem:
#resize2fs /dev/mapper/storage-data
I have not rebooted yet, but it all looks promising, so I'm going to mark this as solved.