LinuxQuestions.org


krutik 05-17-2019 03:53 AM

LV does not automatically activate after rebooting.
 
Hello.

I have a problem that I can't solve. The LV does not activate automatically after the OS boots:
Code:

# lvscan
  inactive          '/dev/vg0/qcow2' [<3.46 TiB] inherit


Code:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/qcow2
  LV Name                qcow2
  VG Name                vg0
  LV UUID                y7ZgkI-b0tN-0qIn-fU0m-mkXe-Eg8D-kwMaPY
  LV Write Access        read/write
  LV Creation host, time de.org, 2019-05-16 15:49:41 +0300
  LV Status              NOT available
  LV Size                <3.46 TiB
  Current LE            906429
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto

Code:

# systemctl | grep lvm
  lvm2-lvmetad.service                                                                    loaded active running  LVM2 metadata daemon
  lvm2-monitor.service                                                                    loaded active exited    Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
  lvm2-lvmetad.socket                                                                      loaded active running  LVM2 metadata daemon socket
  lvm2-lvmpolld.socket                                                                    loaded active listening LVM2 poll daemon socket

When I manually start the service, the LV is activated:
Code:

# systemctl start lvm2-pvscan@9:2 && systemctl enable lvm2-pvscan@9:2

# journalctl -u lvm2-pvscan@9:2
May 16 17:13:34 de.org systemd[1]: Starting LVM2 PV scan on device 9:2...
May 16 17:13:34 de.org lvm[11270]: 1 logical volume(s) in volume group "vg0" now active
May 16 17:13:34 de.org systemd[1]: Started LVM2 PV scan on device 9:2.

Code:

# lvscan
  ACTIVE          '/dev/vg0/qcow2' [<3.46 TiB] inherit

Code:

# systemctl | grep lvm
  lvm2-lvmetad.service                                                                    loaded active running  LVM2 metadata daemon
  lvm2-monitor.service                                                                    loaded active exited    Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
  lvm2-pvscan@9:2.service                                                                  loaded active exited    LVM2 PV scan on device 9:2
  system-lvm2\x2dpvscan.slice                                                              loaded active active    system-lvm2\x2dpvscan.slice
  lvm2-lvmetad.socket                                                                      loaded active running  LVM2 metadata daemon socket
  lvm2-lvmpolld.socket                                                                    loaded active listening LVM2 poll daemon socket

After restarting the server, the service `lvm2-pvscan@9:2` does not start automatically.

Can anyone help me understand why the LV is not activated automatically, and how to make the `lvm2-pvscan@9:2` service start at boot? What runs this service? Normally it starts automatically right after an LVM partition is created, but here something is missing and it does not start on its own.

Perhaps the problem is in systemd? I enabled debug logging for both LVM and systemd, but I didn't find anything there that helped me analyze the problem.

Thanks.

tyler2016 05-17-2019 05:41 AM

The pvscan unit is a oneshot service. This means it is just a command that gets executed; it doesn't have a daemon that stays running.

It sounds like you need a kernel module loaded or a network connection before the device you are having trouble with becomes visible. I would copy the template unit /lib/systemd/system/lvm2-pvscan@.service to /etc/systemd/system/ under a new name, change the After= line to whatever your system needs in order to see the device, then run systemctl daemon-reload, systemctl enable what_you_called_it@9:2, and systemctl start what_you_called_it@9:2. If you see the volume, reboot and make sure it works as intended.
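
For reference, a minimal sketch of what such a local copy might look like. The name lvm2-pvscan-local@.service is made up, the After= target is only a placeholder that has to be swapped for whatever actually makes your device appear, the ExecStart line mirrors what the stock lvm2-pvscan@.service template typically runs, and an [Install] section is added so the copy can be enabled:
Code:

# /etc/systemd/system/lvm2-pvscan-local@.service -- hypothetical name for the local copy
[Unit]
Description=LVM2 PV scan on device %i (local copy)
DefaultDependencies=no
# Placeholder dependency: replace with whatever unit makes your device visible
After=systemd-udev-settle.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Same activation command the stock template typically runs;
# %i is the instance name, here the device's major:minor (9:2)
ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i

[Install]
# Added so the local copy can be enabled with "systemctl enable"
WantedBy=multi-user.target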

syg00 05-17-2019 06:37 AM

My initial reaction was "who in their right mind would format an LV as an image file?".
That's like dd'ing an entire device to a partition on another disk and trying to mount it - it ain't gonna work without some intervention first.

A qcow[2] is a file, not a filesystem. I think you need to rethink your setup.

krutik 05-17-2019 07:54 AM

Quote:

Originally Posted by syg00 (Post 5996040)
My initial reaction was "who in their right mind would format an LV as an image file?".
That's like dd'ing an entire device to a partition on another disk and trying to mount it - it ain't gonna work without some intervention first.

A qcow[2] is a file, not a filesystem. I think you need to rethink your setup.

Sorry that I misled you :)
qcow2 is just the name of my logical volume.

syg00 05-17-2019 08:22 AM

It has a major:minor and needs a pvscan, hmmm... - what does "lsblk -f" return?

krutik 05-17-2019 08:57 AM

Quote:

Originally Posted by syg00 (Post 5996080)
It has a major:minor and needs a pvscan, hmmm... - what does "lsblk -f" return?

Code:

# lsblk -f
NAME    FSTYPE            LABEL              UUID                                  MOUNTPOINT
sda                                                                               
├─sda1  linux_raid_member de.org:0 5defa2e8-f792-504d-bd5b-d6b3a4466142 
│ └─md0 ext4                                48679e49-079e-43e4-9a82-7f9c6a182e39  /
├─sda2  linux_raid_member de.org:1 c38d269e-9214-5f0c-cd21-3e3639224e13 
│ └─md1 swap                                519479d1-6b8b-486a-a8a9-c864cb4a2a21  [SWAP]
└─sda3  linux_raid_member de.org:2 08358a46-e38b-3460-a5e8-36fbe6845bbc 
  └─md2 LVM2_member                          4aNKmU-snmD-9Qz6-SdQ6-4FK4-1K3G-WvsXGE
sdb                                                                               
├─sdb1  linux_raid_member de.org:0 5defa2e8-f792-504d-bd5b-d6b3a4466142 
│ └─md0 ext4                                48679e49-079e-43e4-9a82-7f9c6a182e39  /
├─sdb2  linux_raid_member de.org:1 c38d269e-9214-5f0c-cd21-3e3639224e13 
│ └─md1 swap                                519479d1-6b8b-486a-a8a9-c864cb4a2a21  [SWAP]
└─sdb3  linux_raid_member de.org:2 08358a46-e38b-3460-a5e8-36fbe6845bbc 
  └─md2 LVM2_member                          4aNKmU-snmD-9Qz6-SdQ6-4FK4-1K3G-WvsXGE
sdc                                                                               
├─sdc1  linux_raid_member de.org:0 5defa2e8-f792-504d-bd5b-d6b3a4466142 
│ └─md0 ext4                                48679e49-079e-43e4-9a82-7f9c6a182e39  /
├─sdc2  linux_raid_member de.org:1 c38d269e-9214-5f0c-cd21-3e3639224e13 
│ └─md1 swap                                519479d1-6b8b-486a-a8a9-c864cb4a2a21  [SWAP]
└─sdc3  linux_raid_member de.org:2 08358a46-e38b-3460-a5e8-36fbe6845bbc 
  └─md2 LVM2_member                          4aNKmU-snmD-9Qz6-SdQ6-4FK4-1K3G-WvsXGE
sdd                                                                               
├─sdd1  linux_raid_member de.org:0 5defa2e8-f792-504d-bd5b-d6b3a4466142 
│ └─md0 ext4                                48679e49-079e-43e4-9a82-7f9c6a182e39  /
├─sdd2  linux_raid_member de.org:1 c38d269e-9214-5f0c-cd21-3e3639224e13 
│ └─md1 swap                                519479d1-6b8b-486a-a8a9-c864cb4a2a21  [SWAP]
└─sdd3  linux_raid_member de.org:2 08358a46-e38b-3460-a5e8-36fbe6845bbc 
  └─md2 LVM2_member                          4aNKmU-snmD-9Qz6-SdQ6-4FK4-1K3G-WvsXGE

/dev/md2 is the LVM device:

Code:

# pvs
  PV        VG   Fmt  Attr PSize  PFree
  /dev/md2  vg0  lvm2 a--  <3.46t    0

# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg0    1   1   0 wz--n- <3.46t    0

# lvs
  LV     VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  qcow2  vg0  -wi-a----- <3.46t


MensaWater 05-17-2019 09:32 AM

I think tyler2016 is on the right track. LVM typically starts at boot before the filesystem checks. If your other PVs/VGs/LVs are coming up after reboot, that suggests it is starting and finding those OK. If it is not finding this one automatically, it suggests something else starts later in systemd that makes the device available, so the manual pvscan then finds it.

I've seen setups on some 3rd-party flash cards, like those from ioDrive (now SanDisk), where the card has its own startup files to activate it; those files also handle starting the LVM structures laid out on the card, so it isn't found by init scripts or systemd until later in the boot.

What is this particular PV's hardware? Is it different than hardware for other PVs that are being found automatically at boot?
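
If you want to see where that device shows up in the boot relative to the LVM units, a couple of commands that may help (just a sketch, using the instance name from your earlier output; critical-chain only shows something useful if the unit actually ran):
Code:

# LVM-related messages from the current boot
journalctl -b -u lvm2-pvscan@9:2.service -u lvm2-lvmetad.service
# ordering/timing of the pvscan instance within the boot
systemd-analyze critical-chain lvm2-pvscan@9:2.service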

krutik 05-17-2019 09:50 AM

Quote:

Originally Posted by MensaWater (Post 5996109)
What is this particular PV's hardware? Is it different than hardware for other PVs that are being found automatically at boot?

The fact is that on this server none of the PVs/VGs/LVs start automatically. My volumes auto-start on other servers that have identical hardware configurations and settings, but this server has one difference... its root file system was migrated from another server, via:
Code:

rsync -avxHAX --exclude=tmp/* --exclude=dev/* --exclude=proc/* --exclude=sys/* --exclude=run/* --exclude=mnt/* --exclude=media/* --exclude=lost+found  --exclude=var/backups/*  --exclude=var/tmp/* --exclude=var/run/* --exclude=var/log/*.gz --exclude=var/log/*.1 root@{{ hostvars[node_template].ansible_host }}:/ /mnt/md0
I understand that the problem lies in the migration, but I cannot figure out where, because I see a configuration completely identical to the other servers, yet only this one has problems auto-activating the volumes.

MensaWater 05-17-2019 10:24 AM

Ouch. I'd not have done it that way. I'd have used something like Mondo to create a bootable backup of the original setup then used it to install the new one. Do you still have the original to do that from?

If not, had you already done a base install on the new system for the boot setup? Have you examined the grub setup? Have you examined lvm.conf? It may be that things like disk UUIDs and/or other structures aren't recognized by your new system because they're relevant to the old one. It sounds like you are getting it to boot, though. You might try booting into single user mode, saving a copy of lvm.conf, then doing a vgscan and vgimport to create a new one.
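
Roughly the sequence I have in mind from single user mode (a sketch; vgimport only applies if the VG is in an exported state, so it may simply report that vg0 is already imported):
Code:

cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak    # save a copy of the current config first
vgscan                                        # rescan all block devices for VG metadata
vgimport vg0                                  # only needed if vg0 shows up as exported
vgchange -ay vg0                              # activate the logical volumes in vg0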

tyler2016 05-17-2019 10:32 AM

Quote:

Originally Posted by MensaWater (Post 5996126)
Ouch. I'd not have done it that way. I'd have used something like Mondo to create a bootable backup of the original setup then used it to install the new one. Do you still have the original to do that from?

If not, had you already done a base install on the new system for the boot setup? Have you examined the grub setup? Have you examined lvm.conf? It may be that things like disk UUIDs and/or other structures aren't recognized by your new system because they're relevant to the old one. It sounds like you are getting it to boot, though. You might try booting into single user mode, saving a copy of lvm.conf, then doing a vgscan and vgimport to create a new one.

I agree. I think your physical volumes have UUIDs that are different from what is recorded in the files under /etc/lvm.
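
A quick way to compare the two (a sketch; the metadata backup file is named after the VG, so here it should be /etc/lvm/backup/vg0):
Code:

# PV UUIDs as the running system currently sees them
pvs -o pv_name,pv_uuid,vg_name
# PV UUIDs recorded in the on-disk metadata backup for the VG
grep -A3 'pv0 {' /etc/lvm/backup/vg0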

krutik 06-12-2019 07:10 AM

It was long, difficult and painful, but I found the problem.

By default, the rd.md.uuid options are specified in /etc/default/grub, for example:
Code:

GRUB_CMDLINE_LINUX="audit=1 crashkernel=auto rd.md.uuid=37e1ee29:255a03e4:cfbbe95f:84466f02 rd.md.uuid=e6dcf2cd:6423071d:63fbf6ae:cb16e95f biosdevname=0 net.ifnames=0 rhgb quiet"
But I decided to unify and simplify things by specifying a single rd.auto=1 option instead of the rd.md.uuid options. With that option enabled, the LVM volume was no longer activated. After removing rd.auto=1 and again specifying the explicit UUIDs with rd.md.uuid for each array, it all worked.
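
For anyone who hits the same thing: after editing /etc/default/grub the change only takes effect once the grub config is regenerated and the server is rebooted. On a BIOS-boot CentOS/RHEL system that is roughly (the output path differs on EFI systems):
Code:

# rebuild the grub config so the new kernel command line is used on the next boot
grub2-mkconfig -o /boot/grub2/grub.cfg
# after rebooting, verify what the kernel was actually booted with
cat /proc/cmdline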

I have not understood the exact reason for this behaviour, but the main thing is that the problem has been solved, and I will go on reading the manuals :)

