Slackware: This forum is for the discussion of Slackware Linux.
Since it looks like LXC won't be updated for 15.0, I changed my package building system from LXC to qemu/kvm. A particular advantage (over a change to VBox) is the ability to specify an "external" kernel & initrd, i.e. a kernel and initrd that are not part of the VM image itself but just files in the host filesystem. This brings with it the ability to pass kernel arguments (such as a list of packages to build) via an -append entry when the VM image is launched.
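For example, a launch along these lines (the image name, paths, and the PKGS= token are made up for illustration; the actual script surely differs):

```shell
# Hypothetical sketch: boot a VM image with an external kernel & initrd
# taken from the host filesystem, passing a package list via -append.
qemu-system-x86_64 -enable-kvm -m 4G \
    -drive file=builder.img,format=raw \
    -kernel /data/vm/vmlinuz-generic \
    -initrd /data/vm/initrd.gz \
    -append "root=/dev/sda2 ro PKGS=pkgA,pkgB"
```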
When the VM runs, it extracts the package list from /proc/cmdline, builds whatever is needed and dumps the result in a known location in the host file system.
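That extraction step can be sketched as a small shell function (the PKGS= token name is hypothetical; in the real VM the argument would be "$(cat /proc/cmdline)"):

```shell
#!/bin/sh
# Hypothetical sketch: pull a PKGS=... token off a kernel command line.
get_pkgs() {
    for tok in $1; do              # word-split the command line into tokens
        case "$tok" in
            PKGS=*) printf '%s\n' "${tok#PKGS=}"; return 0 ;;
        esac
    done
    return 1                       # no PKGS= token found
}

get_pkgs "root=/dev/sda2 ro PKGS=pkgA,pkgB"
# prints pkgA,pkgB
```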
That has all worked quite well for a few months now, but recent discussion about secure boot led me to revisit booting these VMs with UEFI. Not that there's a particular need; I'm just interested. So I've installed the system/ovmf SBo package (actually a 2021 version, not the 2019 version specified in the repo) and, following the required day of googling & fiddling around, I'm now able to boot the VMs under UEFI provided they use the internal kernel & initrd, i.e. those in the VM image itself, not the external ones. Therefore my ability to pass kernel arguments no longer exists.
So, hoping someone has passed this way before, the question is how to pass extra information to a VM when it boots under UEFI?
Thanks for any suggestions,
chris
Last edited by chris.willing; 07-03-2021 at 09:23 PM.
Thanks for looking at it Didier. That (using a kernel & initrd located in the regular /boot) doesn't work for me. If I specify root=/dev/sda2 (the root device inside the VM), I see an error that /dev/sda2 doesn't exist. It's true that there is no /dev/sda2 in the host system, but I want the VM to mount its own /dev/sda2, not the host's.
More info:
All the errors about not mounting /dev/sd* occur because no disk partitions at all are visible in the environment I'm dumped into when booting fails. The initrd has run, because I see modules loading after "Loading kernel modules from initrd image", and running "ls -l" in the resulting shell shows the contents of the initrd.
/proc/partitions contains only ram0-ram15 and fd0. Why no disk partitions available I wonder?
This document is as dry as burnt toast, but it does contain
Quote:
The motivation for this type of device path matching / completion is to
allow the user to move around the hard drive (for example, to plug a
controller in a different PCI slot, or to expose the block device on a
different iSCSI path) and still enable the firmware to find the hard
drive.
The UEFI specification says,
9.3.6 Media Device Path
9.3.6.1 Hard Drive
[...] Section 3.1.2 defines special rules for processing the Hard
Drive Media Device Path. These special rules enable a disk's location
to change and still have the system boot from the disk. [...]
3.1.2 Load Option Processing
[...] The boot manager must [...] support booting from a short-form
device path that starts with the first element being a hard drive
media device path [...]. The boot manager must use the GUID or
signature and partition number in the hard drive device path to match
it to a device in the system. If the drive supports the GPT
partitioning scheme the GUID in the hard drive media device path is
compared with the UniquePartitionGuid field of the GUID Partition
Entry [...]. If the drive supports the PC-AT MBR scheme the signature
in the hard drive media device path is compared with the
UniqueMBRSignature in the Legacy Master Boot Record [...]. If a
signature match is made, then the partition number must also be
matched. The hard drive device path can be appended to the matching
hardware device path and normal boot behavior can then be used. If
more than one device matches the hard drive device path, the boot
manager will pick one arbitrarily. Thus the operating system must
ensure the uniqueness of the signatures on hard drives to guarantee
deterministic boot behavior.
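As an aside, the identifiers the spec describes can be inspected from a running Linux system, which helps when checking what the firmware would match (standard commands; the output is obviously system-specific):

```shell
# UniquePartitionGuid (GPT) as seen by Linux, plus the partition table type
lsblk -o NAME,PTTYPE,PARTUUID
# The firmware's boot entries with their full device paths
efibootmgr -v
```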
Quote:
Originally Posted by chris.willing
/proc/partitions contains only ram0-ram15 and fd0. Why no disk partitions available I wonder?
Maybe because the script /init in the initrd didn't run udev for some reason, although that looks weird (I won't assume an issue with the sd SCSI disk driver). Did you try setting as argument of the -initrd option a copy of the initrd you installed in the VM?
I attach the init script I include in the initrds (same as in Slackware, only with support for a LUKS key in the initrd added).
PS Else you could also use a "huge" kernel as the argument of -kernel (the -append option only requires that -kernel also be set; the -initrd option is optional).
PPS You can also include in the init script commands that help to investigate, like "echo" to display the values of variables, and "read" or "sleep" to give you time to read them. Been there, done that.
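For instance, a couple of hypothetical debug lines (the variable name is made up) that could be dropped into /init:

```shell
#!/bin/sh
# Hypothetical debug lines for an initrd's /init script
ROOT_DEVICE=/dev/sda2                # stand-in for a variable set earlier in /init
echo "ROOT_DEVICE=$ROOT_DEVICE"      # show the value on the console
sleep 2                              # pause long enough to read it
```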
Last edited by Didier Spaier; 07-06-2021 at 06:45 AM.
Yes, I tried that when I thought there could be confusion between device paths in the VM and the host (before I realized that no device paths at all from either VM or host were being discovered).
Quote:
Originally Posted by Didier Spaier
Maybe because the script /init in the in initrd didn't run udev for some reason although that looks weird (I won't assume an issue with the SD SCSI disk driver). Did you try to set as argument of the -initrd option a copy of the initrd you installed in the VM?
Yes, when I updated the VM's kernel and initrd for 5.12.14, I copied them into the host file system. It's only when I try to run the VM with those copies in the host that the problem occurs. They seem to run (I can see modules from initrd being installed) but they don't seem to access the VM itself.
Quote:
I attach the init script I include in the initrds (same as in Slackware only support for a LUKS key in the initrd added).
I installed it in a new initrd and saw (when dumped into the shell after it found no devices) that it's quite similar to the init already there. I tried a few of the udev commands from it - even re-ran the whole script - but no change.
Quote:
PS Else you also could use as argument of -kernel a "huge" kernel (the -append option only needs to also set the -kernel option, the -initrd option is optional).
I tried the huge kernel without an initrd but it panicked with the message
Code:
not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Running the huge kernel with the initrd was almost the same as generic + initrd, except that modules couldn't be loaded (they're already built into the kernel).
Quote:
PPS You can also include in the init script commands helping to investigate like "echo" for the values of variables and "read" or "sleep'" to give you time to read them. Been there, did that.
Well, I think the problem occurs before the init script runs (the huge kernel's message about unknown-block(0,0)), but I was surprised that the dmesg and less commands are available in the initrd shell. Unfortunately "dmesg|less" showed no errors; the initrd's init ran almost to the end, until "No /sbin/init found on rootdev (or not mounted). Trouble ahead."
Shot in the dark: in Slint the initrd and kernel are not copied in the ESP, as GRUB's OS loader includes the needed file system modules to find them in /boot. Could that make a difference?
PS If everything else fails, install Slint to compare. If you choose the 'auto' mode this won't take more than 20 minutes (preferably while connected to the Internet, so you won't have to upgrade or get new packages after installation). Qemu is included.
Last edited by Didier Spaier; 07-06-2021 at 10:10 AM.
Quote:
Originally Posted by Didier Spaier
Shot in the dark: in Slint the initrd and kernel are not copied in the ESP, as GRUB's OS loader includes the needed file system modules to find them in /boot. Could that make a difference?
In this instance, I'm not sure (although I just tried it and it made no difference). In any case, the whole point of this particular exercise is to boot with a kernel & initrd which are not contained in the VM itself. Or do you mean the ESP of the host system?
Quote:
PS if everything else fails, install Slint to compare.
A reinstallation made me wonder how you set up your system - did you convert an existing VM to use UEFI, or did you create the VM under UEFI (OVMF)?
chris
Last edited by chris.willing; 07-08-2021 at 05:05 AM.
Quote:
Originally Posted by chris.willing
In this instance, I'm not sure (although I just tried it and it made no difference). In any case, the whole point of this particular exercise is to boot with kernel & initrd which are not contained in the VM itself. Or do you mean ESP of host system?
I had in mind the host system, but maybe the VM matters too, although I don't know why it would. Just trying to find something that differs between our contexts. Anyway later today I will try with host Slint and VM Slackware as well as host Slackware and VM Slint.
So that I can reproduce, please tell me what your host is (Slackware64-current? Slackware64-14.2?) and how it boots: elilo? lilo? grub? In EFI or Legacy mode? And does the VM boot using elilo or grub?
Quote:
Originally Posted by chris.willing
A reinstallation made me wonder how you set up your system - did you convert an existing VM to use UEFI, or did you create the VM under UEFI (OVMF)?
Well, I just change the virtual firmware whenever needed, by including or not these lines in the command:
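Presumably the lines in question are the usual pair of pflash drives, something like this (a sketch; exact file locations assumed):

```shell
# Read-only code flash plus a writable per-VM variable store
-drive if=pflash,format=raw,readonly=on,file=OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=my_uefi_vars.bin
```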
In my understanding OVMF_CODE-pure-efi.fd is the firmware itself, while my_uefi_vars.bin stores the EFI variables, such as the firmware's boot entries. This is illustrated by the fact that if I change the virtual disk, the firmware's boot menu remains as before (displayed with the Qemu option -boot menu=on, of course).
Last edited by Didier Spaier; 07-08-2021 at 06:35 AM.
Quote:
Originally Posted by Didier Spaier
So that I can reproduce please tell what is your host (Slackware64-current? Slackware64-14.2?) and how it boots: elilo? lilo? grub? In EFI or Legacy mode? And does the VM boots using elilo or grub?
Thanks for taking the time. My Slackware64-current host boots with grub in UEFI mode. The VM boots with grub, although grub was generally not used previously since I was booting from the external kernel/initrd. It's only since adding the ovmf lines to the qemu script that it won't boot with the external kernel/initrd (but it will boot via the VM's grub if I remove the kernel/append/initrd lines from the script).
Quote:
Well, I just change the virtual firmware whenever needed, including or not these lines in the command:
In my understanding OVMF_CODE-pure-efi.fd is the firmware itself, my_uefi_vars.bin stores the EFI variables, like the boot entries of the firmware. This illustrated by the fact that if I change the virtual disk the firmware's boot menu remains as before.
Install Slackware64-current (ISO built from an up-to-date local mirror) in a Qemu VM; the host is Slint64-14.2.1 up to date, Qemu version 6.0.0; installed neither lilo nor elilo.
Start Slackware using the last entry of the GRUB menu, "detect and start all OS" (yes, I forgot to chroot to install GRUB before rebooting the VM), then ran grub-install && grub-mkconfig -o /boot/grub/grub.cfg.
Success: Slackware is running the external kernel (internal version is 5.12.15), and cat /proc/cmdline gives:
Code:
root=/dev/sda2 ro TITI=toto
Next, I will build Qemu in Slackware-current and experiment using the same VM but in a Slackware64-current host. Stay tuned.
The root partition is /dev/sda2 in the VM, /dev/sda3 in the host.
EDIT.
Also tried to run the script on an up-to-date Slackware64-current host (the only change was the kernel version, now 5.12.15). Success.
In Slackware I rename the initrd built by running geninitrd to initrd-generic-<kernel version> to make os-prober happy, but I doubt that matters. Do you use the same Qemu version (6.0.0)? If not, you will find a package just built on Slackware64-current here.
Last edited by Didier Spaier; 07-08-2021 at 05:32 PM.
Reason: EDIT added.
Quote:
Originally Posted by Didier Spaier
Install Slackware64-current (ISO built from an up to date local mirror) in a Qemu VM, host Slint64-14.2.1 up to date, Qemu version 6.0.0, install neither lilo nor elilo
Start Slackware using the last entry of the GRUB menu "detect and start all OS" (yes I forgot to chroot to install GRUB before rebooting the VM), ran grub-install && grub-mkconfig -o /boot/grub/grub.cfg
A plain "grub-install", or "grub-install --efi-directory= ... ..."? i.e. without or with an ESP in the VM?
Quote:
Originally Posted by chris.willing
A plain "grub-install", or "grub-install --efi-directory= ... ..."? i.e. without or with an ESP in the VM?
A plain grub-install. As there is a mounted ESP in the VM, GRUB automatically selects the x86_64-efi platform, puts the OS loader in the ESP, and by default writes a boot entry in the firmware's boot menu.
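A sketch of that sequence as run inside the VM (the ESP device name and mount point are hypothetical):

```shell
mount /dev/sda1 /boot/efi            # hypothetical ESP partition
grub-install                         # x86_64-efi auto-selected; OS loader goes to the ESP
grub-mkconfig -o /boot/grub/grub.cfg # regenerate the menu
```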
PS I edited my previous post after you answered it.
Last edited by Didier Spaier; 07-08-2021 at 05:53 PM.