LV does not automatically activate after rebooting.
Code:
# lvdisplay
--- Logical volume ---
LV Path /dev/vg0/qcow2
LV Name qcow2
VG Name vg0
LV UUID y7ZgkI-b0tN-0qIn-fU0m-mkXe-Eg8D-kwMaPY
LV Write Access read/write
LV Creation host, time de.org, 2019-05-16 15:49:41 +0300
LV Status NOT available
LV Size <3.46 TiB
Current LE 906429
Segments 1
Allocation inherit
Read ahead sectors auto
Code:
# systemctl | grep lvm
lvm2-lvmetad.service loaded active running LVM2 metadata daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lvm2-lvmetad.socket loaded active running LVM2 metadata daemon socket
lvm2-lvmpolld.socket loaded active listening LVM2 poll daemon socket
When I manually start the service, LV is activated:
Code:
# systemctl start lvm2-pvscan@9:2 && systemctl enable lvm2-pvscan@9:2
# journalctl -u lvm2-pvscan@9:2
May 16 17:13:34 de.org systemd[1]: Starting LVM2 PV scan on device 9:2...
May 16 17:13:34 de.org lvm[11270]: 1 logical volume(s) in volume group "vg0" now active
May 16 17:13:34 de.org systemd[1]: Started LVM2 PV scan on device 9:2.
Code:
# lvscan
ACTIVE '/dev/vg0/qcow2' [<3.46 TiB] inherit
Code:
# systemctl | grep lvm
lvm2-lvmetad.service loaded active running LVM2 metadata daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lvm2-pvscan@9:2.service loaded active exited LVM2 PV scan on device 9:2
system-lvm2\x2dpvscan.slice loaded active active system-lvm2\x2dpvscan.slice
lvm2-lvmetad.socket loaded active running LVM2 metadata daemon socket
lvm2-lvmpolld.socket loaded active listening LVM2 poll daemon socket
After restarting the server, the service `lvm2-pvscan@9:2` does not start automatically.
Please help me: why is the LV not activated automatically, and how do I make the service `lvm2-pvscan@9:2` start on boot? What service is supposed to run it? Normally this service starts right after an LVM partition is created, but something is missing here and it does not start automatically.
Perhaps the problem is in systemd? I enabled debug logging for both LVM and systemd, but I did not find anything that helps me analyze the problem.
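One way to see what is supposed to pull the unit in is to ask systemd and udev directly; a quick sketch, assuming 9:2 corresponds to /dev/md2 (major 9 is the md block major) and that your lvm2 build ships the usual udev rule:
Code:
# What the unit contains and what wants it started
systemctl cat lvm2-pvscan@9:2.service
systemctl list-dependencies --reverse lvm2-pvscan@9:2.service

# On most lvm2 builds of this era, 69-dm-lvm-metad.rules sets
# SYSTEMD_WANTS=lvm2-pvscan@<major>:<minor>.service on the PV's device
udevadm info --query=property --name=/dev/md2 | grep -i systemd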
The pvscan unit is a oneshot service. This means it is just a command that gets executed; there is no daemon that stays running.
It sounds like you need a kernel module loaded, or a network connection, before your system can see the device you are having problems with. I would copy the template unit /lib/systemd/system/lvm2-pvscan@.service to /etc/systemd/system/ with a new name, change the After= line to whatever is needed for your system to see the device, run systemctl daemon-reload, then systemctl enable what_you_called_it and systemctl start what_you_called_it. If you see the volume, reboot and make sure it works as intended.
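For illustration, a minimal sketch of what such a copy might look like; the file name is made up, the After= entries are placeholders to be replaced with whatever actually makes your device appear, and the ExecStart line mirrors what the stock template typically runs:
Code:
# /etc/systemd/system/lvm2-pvscan-vg0.service  (hypothetical name for the copy)
[Unit]
Description=LVM2 PV scan on device 9:2 (local copy of the template)
# Placeholder ordering: replace with the unit(s) your device really needs first
After=lvm2-lvmetad.service local-fs-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
# The stock template runs this with %i; here the instance is hard-coded
ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:2

[Install]
WantedBy=multi-user.target
After creating it, run systemctl daemon-reload, then enable and start it as described above.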
My initial reaction was "who in their right mind would format a lv as an image file ?" .
That's like dd'ing an entire device to a partition on another disk and trying to mount it - ain't gunna work without some intervention first.
A qcow[2] is a file, not a filesystem. I think you need to rethink your setup.
Sorry that I misled you; qcow is just the name of my logical volume.
I think tyler2016 is on the right track. LVM typically starts on boot before the filesystem checks. If your other PVs/VGs/LVs are coming up after reboot, that suggests it is starting and finding those OK. If it is not finding this one automatically, it suggests there is something else starting later in systemd that makes it available, so that the manual pvscan finds it.
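You can usually see whether the scan was even attempted during boot, and what the ordering was, from the journal; a small sketch using the unit names already shown in this thread:
Code:
# Was a pvscan for this PV attempted at boot at all?
journalctl -b | grep -iE 'pvscan|lvm'

# What systemd waited for before mounting local filesystems
systemd-analyze critical-chain local-fs.target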
I've seen setups on some 3rd-party flash cards, like those from IODrive (now SanDisk), where the card has its own files to activate it, and those also handle bringing up the LVM structures laid out on the card, so it isn't found by init scripts or systemd until later in the boot.
What is this particular PV's hardware? Is it different than hardware for other PVs that are being found automatically at boot?
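To answer that, the backing device for each PV can be listed directly; a small sketch with standard LVM and util-linux tools:
Code:
# Which block device backs each PV, and its UUID
pvs -o pv_name,vg_name,pv_uuid

# How that device is stacked (partition, md RAID, multipath, ...)
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT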
The fact is that on this server none of the PVs/VGs/LVs start automatically. My volumes auto-start on other servers that have identical hardware configurations and settings, but this server has one difference... for this server, the root file system was migrated from another server. Migrated via
I understand that the problem lies in the migration, but I cannot understand where exactly, because I see a configuration completely identical to the other servers, yet the volumes do not autostart.
Ouch. I'd not have done it that way. I'd have used something like Mondo to create a bootable backup of the original setup then used it to install the new one. Do you still have the original to do that from?
If not, had you already done a base install on the new system for the boot setup? Have you examined the grub setup? Have you examined lvm.conf? It may be that things like disk UUIDs and/or other structures aren't recognized by your new system because they're tied to the old one. It sounds like you're getting it to boot. You might try booting into single user mode, saving a copy of lvm.conf, then doing a vgscan and vgimport to create a new one.
I agree. I think your physical volumes have UUIDs that are different from the files in /etc/lvm.
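A quick way to check that, assuming the metadata backup is in its usual place under /etc/lvm/backup:
Code:
# UUIDs as the PVs report them right now
pvs -o pv_name,pv_uuid

# UUIDs recorded in the last metadata backup for vg0
grep -A2 'pv0 {' /etc/lvm/backup/vg0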
But I had decided to unify and simplify things by specifying a single rd.auto=1 option instead of the separate rd.md.uuid options. With that option enabled, the LVM volume was no longer activated. After disabling it and specifying an explicit UUID in an rd.md.uuid option for each partition, it all worked.
I still do not understand the exact reason for this behaviour, but the main thing is that the problem has been solved, and I will go on reading the manuals.
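For reference, a sketch of what an explicit-UUID kernel command line can look like in /etc/default/grub; the UUIDs below are placeholders (the real ones come from mdadm --detail /dev/mdX), and the command to regenerate the grub config depends on the distribution:
Code:
# /etc/default/grub -- assemble each array explicitly instead of rd.auto=1
# (placeholder UUIDs; substitute the values from `mdadm --detail /dev/mdX`)
GRUB_CMDLINE_LINUX="rd.md.uuid=11111111:22222222:33333333:44444444 rd.md.uuid=55555555:66666666:77777777:88888888"

# Then regenerate the grub config, e.g. on CentOS/RHEL:
#   grub2-mkconfig -o /boot/grub2/grub.cfg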