eliloconfig not working, part2
For such a relatively simple process, I seem to manage to find all the possible things that can go wrong. I've successfully converted 3 CSM/MBR systems to UEFI; now I'm doing a 4th and failing. The difference is that this one is a RAID-1.
I've changed the BIOS to boot UEFI and booted the Slackware 15.0 DVD. Then:
Code:
mount /dev/md2 /mnt
Code:
chooser=simple
delay=1
timeout=1
#
image=vmlinuz
        label=vmlinuz
        read-only
        append="root=/dev/md2 vga=normal ro"
vmlinuz is the same as /boot/vmlinuz-huge-5.15.145. The /etc/fstab is:
Code:
/dev/md0 /boot/efi vfat defaults 1 0
I've come across a couple of posts that mention mdraid, but can't find any more on that. Is there a solution for this? |
Basically, the BIOS can't understand the MD RAID array, so when you put /boot and all that stuff in the md0 array, the BIOS can't read that partition, and because it can't read the EFI partition it falls back to MBR boot.
To use EFI boot with an md array, you need to create a small normal partition on the disk and use it as the EFI partition. The BIOS should be able to read that partition, find elilo there, and start from it. That is how Proxmox boots with a ZFS root filesystem; I learned that the hard way when I replaced the boot disk. |
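A rough sketch of that partition step, assuming GPT-labelled disks with unpartitioned space at the start of each one (the device names, partition number, and 200 MB size are only examples):
Code:
# Create a small EFI System Partition (GPT type EF00) on each RAID member disk:
sgdisk --new=1:0:+200M --typecode=1:EF00 --change-name=1:"EFI System" /dev/sda
sgdisk --new=1:0:+200M --typecode=1:EF00 --change-name=1:"EFI System" /dev/sdb
# The firmware needs a plain FAT filesystem it can read, not an md array:
mkfs.vfat -F 32 /dev/sda1
mkfs.vfat -F 32 /dev/sdb1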
Quote:
Code:
Device Start End Sectors Size Type
How would this work to boot the RAID? I'll investigate Proxmox. I've never heard of that. |
@mfoley, what I did was create partitions somewhat like you suggest in your post above.
Below is a portion of /etc/fstab as an example. I always use PARTUUIDs for mounting; that has saved an enormous amount of difficulty. The nvme[01]n1p1 partitions are on the root drive, and I use nvme0n1p1's PARTUUID as the primary boot. I still have to determine how to use the second partition, nvme1n1p1, as a failover, but I remember seeing someone do this on a spinning-media drive with md RAID-1 for the other partitions. I highly suggest making the EFI partition the first one, at the lowest-numbered sectors. Hth, JimL
Code:
# cat /etc/fstab |
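A quick illustration of the PARTUUID-based mounting JimL describes (the device node and the UUID value below are placeholders, not his actual output):
Code:
# blkid shows the PARTUUID of a partition:
blkid -s PARTUUID /dev/nvme0n1p1
# An fstab entry can then reference the partition by PARTUUID instead of a
# device node that might change between boots:
PARTUUID=0f1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8  /boot/efi  vfat  defaults  0  0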
Quote:
You will need to set up /etc/fstab to mount sda1 as /boot/efi. To set up the boot, you need to regenerate initrd.gz to add mdraid support, then copy the kernel and initrd.gz to the EFI partition (this step should be done automatically by the elilo script), and update the elilo config to point to the kernel and initrd you copied.
>How would this work to boot the RAID?
When you boot, the script inside initrd.gz will find and mount the MD RAID array. |
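A sketch of those steps on Slackware, assuming /dev/sda1 is the new EFI partition, the root array is /dev/md2 on ext4, and the kernel is 5.15.145 (check man mkinitrd or /usr/share/mkinitrd/mkinitrd_command_generator.sh for the exact flags your system needs):
Code:
mkdir -p /boot/efi
mount /dev/sda1 /boot/efi
# Rebuild the initrd with RAID (mdadm) support so it can assemble /dev/md2 at boot:
mkinitrd -c -k 5.15.145 -f ext4 -r /dev/md2 -R -o /boot/initrd.gz
# Install elilo.efi on the EFI partition and register a firmware boot entry:
eliloconfig
# Copy the kernel and initrd next to it, then point elilo.conf at them
# (image=vmlinuz, initrd=initrd.gz):
cp /boot/vmlinuz-huge-5.15.145 /boot/efi/EFI/Slackware/vmlinuz
cp /boot/initrd.gz /boot/efi/EFI/Slackware/initrd.gz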
Thanks to both babtdr and nhattu1986. I also found a link that looks more complex than either of your suggestions, but maybe not in practice:
https://unix.stackexchange.com/quest...tion-on-debian. I'm going to hang onto these suggestions for future reference.
The Linux host I'm working on is a production machine hosting the office's Windows 10 guest VM, which is the very important SQL Server database server. Trying these suggestions will require some experimentation on my part, and I don't have a suitable test machine handy. Currently the Linux host boots CSM/MBR and the Windows VM does as well. What is needed is for the VM to boot UEFI so it can be upgraded to Windows 11. According to what I've read, a VirtualBox guest does not need the host hardware to boot UEFI; it can be configured to boot UEFI regardless of what the host has configured. I'm going to experiment with that first on the production setup. If that fails, I can always image-restore the VM back to its current CSM/MBR state.
I have no pressing desire to make all my Linux computers boot UEFI. Even on those I have converted to UEFI I do not enable Secure Boot. I don't see any advantage in UEFI for Linux unless and until motherboards stop offering CSM/MBR as a BIOS option.
One thing that concerns me is whether any of your implementations will permit boot failover to the mdb member if the mda member fails. I know from experience this works with CSM/MBR, but does it work with this UEFI setup? On the rare occasions when one of my RAID members has failed (I have 4 servers with RAIDs), I can easily hot-swap and replace the failed member, and users keep working, none the wiser. A test platform can answer this question.
What I'll do in about a week is attempt to update the VM to UEFI and upgrade to Windows 11. I'll post back my results. |
About the VM UEFI: yes, you don't need the host to boot UEFI in order to use UEFI on the guest.
Instead of messing with the current production VM, you can try creating a new VM, or clone the existing one, then turn on UEFI and mess with that. If it fails, it fails; there's no need to restore or revert, and no downtime, until you are confident enough to do it for real.
>boot-failover to the mdb member if the mda member fails
This is very simple. Since your drives already have a mirrored partition layout, you can simply create the same EFI partition on the rest of the member disks of the md array. Then use efibootmgr to add each of those elilo installs to the EFI boot menu; if the first member fails, the BIOS will try the next one (see the sketch after this post). The caveat is that every time you update the kernel, you have to repeat the elilo update process on the remaining member disks of the array, which can be tedious if you are running -current.
Also, this is how my Proxmox handles the RAID-10 ZFS root across 4 drives:
Code:
Every time I update the kernel, Proxmox updates every EFI partition. |
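A rough sketch of the efibootmgr step described above (disk names, partition numbers, labels, and the loader path are assumptions; adjust them to where elilo.efi actually lives on each EFI partition):
Code:
# Register an elilo boot entry for the ESP on each RAID member disk:
efibootmgr --create --disk /dev/sda --part 1 --label "Slackware (sda)" --loader '\EFI\Slackware\elilo.efi'
efibootmgr --create --disk /dev/sdb --part 1 --label "Slackware (sdb)" --loader '\EFI\Slackware\elilo.efi'
# List the entries and boot order; the firmware falls through to the next
# entry if the first disk is missing or unreadable:
efibootmgr -v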
Thanks, all, for your help and suggestions. As it turned out, the Linux VM host does not need to have UEFI configured in the BIOS. Converting the Windows guest to UEFI, shutting down Windows, then enabling EFI in the VirtualBox System settings works. TPM does have to be set in the BIOS. I will keep the proposed ideas on getting hardware UEFI to work with mdadm; I may need them one day.
|
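For reference, the VirtualBox settings mentioned above can also be changed from the command line; a sketch, assuming the guest is shut down and VirtualBox 7.x for the TPM option (the VM name is a placeholder):
Code:
# Switch the guest's firmware from BIOS to EFI:
VBoxManage modifyvm "Win10-SQL" --firmware efi64
# VirtualBox 7.0+ can also present an emulated TPM 2.0 device to the guest:
VBoxManage modifyvm "Win10-SQL" --tpm-type 2.0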