LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   eliloconfig not working, part2 (https://www.linuxquestions.org/questions/slackware-14/eliloconfig-not-working-part2-4175734743/)

mfoley 03-10-2024 03:18 AM

eliloconfig not working, part2
 
For such a relatively simple process, I seem to manage to find all the possible things that can go wrong. I've successfully converted 3 CSM/MBR systems to UEFI; now I'm doing a 4th and failing. The difference is that this one is RAID-1.

I've changed the BIOS to boot UEFI and booted the Slackware 15.0 DVD. Then:
Code:

mount /dev/md2 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
mount /dev/md0 /boot/efi
eliloconfig

This creates the following /boot/efi/EFI/Slackware/elilo.conf:
Code:

chooser=simple
delay=1
timeout=1
#
image=vmlinuz
label=vmlinuz
read-only
append="root=/dev/md2 vga=normal ro"

vmlinuz is the same as /boot/vmlinuz-huge-5.15.145

The /etc/fstab is
Code:

/dev/md0        /boot/efi        vfat        defaults        1  0
/dev/md1        swap            swap        defaults        0  0
/dev/md2        /                ext4        defaults        1  1
#/dev/cdrom      /mnt/cdrom      auto        noauto,owner,ro,comment=x-gvfs-show 0  0
/dev/fd0        /mnt/floppy      auto        noauto,owner    0  0
devpts          /dev/pts        devpts      gid=5,mode=620  0  0
proc            /proc            proc        defaults        0  0
tmpfs            /dev/shm        tmpfs      nosuid,nodev,noexec 0  0

When I reboot, it reverts to the BIOS setup screen. If I press F8 while booting, the hard drive(s) don't show up at all. I've read that elilo.conf should support most of the lilo.conf settings, but the only lilo setting I have related to RAID is "raid-extra-boot=mbr-only", which I doubt will do anything for elilo, since that instructs lilo to write the MBR to all RAID members.
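
One thing I suppose I can check from the chroot is whether eliloconfig actually registered a UEFI boot entry in NVRAM, e.g.:
Code:

# list the firmware boot entries; there should be one pointing at \EFI\Slackware\elilo.efi
efibootmgr -v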

I've come across a couple of posts that mention mdraid, but I can't find anything more on that.

Is there a solution for this?

nhattu1986 03-10-2024 08:18 AM

Basically, the BIOS can't understand the MD RAID array, so when you put the EFI files and all that stuff in the md0 array, the BIOS can't read that partition, and because it can't read the EFI partition it falls back to MBR boot.

To use EFI boot with an md array, you need to create a small normal partition on the disk and use it as the EFI partition. The BIOS should be able to read that partition, find elilo and start from there.

That is how Proxmox boots with a ZFS root filesystem, and I learned it the hard way when I replaced the boot disk.
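
A rough sketch of the partitioning side (device names and sizes are just examples, and this assumes GPT disks with free space available at the start; on an already-partitioned disk you would have to make room first):
Code:

# create a small EFI System partition on each RAID member and format it FAT32
sgdisk -n 1:2048:+200M -t 1:EF00 -c 1:"EFI System" /dev/sda
mkfs.fat -F 32 /dev/sda1
# repeat on the other member so each disk can boot on its own
sgdisk -n 1:2048:+200M -t 1:EF00 -c 1:"EFI System" /dev/sdb
mkfs.fat -F 32 /dev/sdb1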

mfoley 03-10-2024 06:49 PM

Quote:

Originally Posted by nhattu1986 (Post 6488777)
Basically, the BIOS can't understand the MD RAID array, so when you put the EFI files and all that stuff in the md0 array, the BIOS can't read that partition, and because it can't read the EFI partition it falls back to MBR boot.

To use EFI boot with an md array, you need to create a small normal partition on the disk and use it as the EFI partition. The BIOS should be able to read that partition, find elilo and start from there.

That is how Proxmox boots with a ZFS root filesystem, and I learned it the hard way when I replaced the boot disk.

So, what you are saying is that it would have a partition table like:
Code:

Device        Start        End    Sectors  Size Type
/dev/sda1      ..          ..              ... Linux filesystem
/dev/sda2      2048  25167871  25165824  12G Linux swap
/dev/sda3  25167872  25577471    409600  200M EFI System
/dev/sda4  25577472 3907029134 3881451663  1.8T Linux filesystem

with sda2/3/4 as md0/1/2?

How would this work to boot the RAID?

I'll investigate Proxmox. I've never heard of that.

babydr 03-10-2024 07:45 PM

@mfoley, What I did was create partitions somewhat like you suggest in your post above.
Below is a portion of my /etc/fstab as an example.
I always use PARTUUID for mounting; that has saved me an enormous amount of difficulty.
The nvme[01]n1p1 partitions are on the ROOT drives, and I use nvme0n1p1's PARTUUID as the primary boot.
I still have to determine how to use the second partition, nvme1n1p1, as a failover. But I remember seeing someone doing this on spinning media with md RAID-1 for the other partitions.
I highly suggest making the EFI partition the first and lowest-numbered partition (lowest sectors) on each disk.
Hth, JimL

Code:

# cat /etc/fstab
# /dev/md120p1 swap PAWSIFUE
PARTUUID=9b62bbd2-9d2c-436f-bba9-a0f5c45ac122  swap                      swap  defaults  0  0

# /dev/md121p1 ROOT TOOREMVN_0p1
PARTUUID=025abbdd-a4b5-4b02-beeb-91c60cf31824  /                        ext4  defaults  1  1

# /dev/nvme0n1p1  0IFEU
PARTUUID=de0ecafe-88be-4977-99b5-6d0e6fa4079a  /boot/efi                vfat  defaults  1  0

# /dev/nvme1n1p1  1IFEU
PARTUUID=212d463b-5bdd-463e-a183-3fb7adc996b8  /boot/bkup-efi            vfat  defaults  1  0

# This is on a separate set of devices.
# /dev/md118p1  MyBIGDATA
PARTUUID=41110f51-ff28-044d-2ff5-84088772cb78  /home                    ext4  defaults  1  2
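
If you want to find the PARTUUIDs on your own system, blkid or lsblk will show them, e.g. (md121p1 is just the device name from my example above):
Code:

# list PARTUUIDs for everything blkid knows about
blkid -s PARTUUID
# or per device, printing only the value
blkid -s PARTUUID -o value /dev/md121p1
lsblk -o NAME,PARTUUID,FSTYPE,MOUNTPOINT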


nhattu1986 03-10-2024 10:32 PM

Quote:

Originally Posted by mfoley (Post 6488887)
So, what you are saying is that it would have a partition table like:
Code:

Device        Start        End    Sectors  Size Type
/dev/sda1      ..          ..              ... Linux filesystem
/dev/sda2      2048  25167871  25165824  12G Linux swap
/dev/sda3  25167872  25577471    409600  200M EFI System
/dev/sda4  25577472 3907029134 3881451663  1.8T Linux filesystem

with sda2/3/4 as md0/1/2?

How would this work to boot the RAID?

I'll investigate Proxmox. I've never heard of that.

Yes, that's what I mean. You also need to format sda1 as FAT32 and change its type to EFI System; sda2/3/4 will belong to md0/1/2.
You will need to set up /etc/fstab to mount sda1 as /boot/efi.
To set up the boot, you need to re-generate the initrd.gz to add mdraid support, then copy the kernel and initrd.gz to the EFI partition (this step should be done automatically by the elilo script), and update the elilo config to point to the kernel and initrd you copied (rough sketch below).

>How would this work to boot the RAID?
When you boot, the script inside initrd.gz will assemble and mount the md RAID array.
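
A rough sketch of those steps on Slackware (the kernel version, kernel flavour and module list are just examples; /usr/share/mkinitrd/mkinitrd_command_generator.sh should print the exact mkinitrd line for your setup):
Code:

# inside the chroot: build an initrd with RAID support (-R assembles md arrays at boot)
mkinitrd -c -k 5.15.145 -f ext4 -r /dev/md2 -m ext4 -R
# copy the kernel and initrd onto the ESP next to elilo.efi
cp /boot/vmlinuz-generic-5.15.145 /boot/efi/EFI/Slackware/vmlinuz
cp /boot/initrd.gz /boot/efi/EFI/Slackware/initrd.gz
# then add an initrd line to elilo.conf, e.g.
#   image=vmlinuz
#   label=vmlinuz
#   initrd=initrd.gz
#   append="root=/dev/md2 vga=normal ro"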

mfoley 03-11-2024 12:23 AM

Thanks to both babydr and nhattu1986. I also found a link that looks more complex than either of your suggestions, but maybe not in practice:
https://unix.stackexchange.com/quest...tion-on-debian. I'm going to hang onto these suggestions for future reference. The Linux host I'm working on is a production machine hosting the office's Windows 10 virtual machine guest, which is the very important SQL Server database server. Trying these suggestions will require some experimentation on my part, and I don't have a suitable test machine handy.

Currently, the Linux host boots CSM/MBR and the Windows VM does as well. What is needed is for the VM to boot UEFI so it can be upgraded to Windows 11. According to what I've read, the VirtualBox VM guest does not need the host hardware to boot UEFI; it can be configured to boot UEFI regardless of what the host has configured. I'm going to first experiment with that on the production setup. If that fails I can always image-restore the VM back to its current CSM/MBR state.
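
From what I've read, the VirtualBox side of it should be just a couple of settings, something along these lines (the VM name is a placeholder, and the TPM option needs VirtualBox 7.x):
Code:

# switch the guest firmware to EFI (the VM must be powered off)
VBoxManage modifyvm "Win10" --firmware efi
# VirtualBox 7.x can also provide a virtual TPM 2.0 for the Windows 11 upgrade
VBoxManage modifyvm "Win10" --tpm-type 2.0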

I have no pressing desire to make all my Linux computers boot UEFI. Even on those I have converted to UEFI I do not enable Secure Boot. I don't see any advantage to UEFI for Linux unless and until motherboards stop offering CSM/MBR as a BIOS option.

One thing that concerns me is whether any of your implementations will permit boot failover to the sdb member if the sda member fails. I know from experience this works with CSM/MBR, but does it work with this UEFI setup? On the rare occasions when one of my RAID members has failed (I have 4 servers with RAIDs), I have been able to hot-swap and replace the failed member while users keep working, none the wiser. A test platform can answer this question.

What I'll do in about a week is attempt to update the VM to UEFI and upgrade to Windows 11. I'll post back my results.

nhattu1986 03-11-2024 06:24 AM

About the VM UEFI: yes, you don't need the host to boot UEFI in order to use UEFI on the guest.

Instead of messing with the current production VM, you can try creating a new VM or cloning the existing one, turning on UEFI and then messing with that; if it fails, it fails, with no need to restore or revert and no downtime, until you are confident enough to do it for real.

>boot failover to the sdb member if the sda member fails
This is very simple: since your drives already have a mirrored partition layout, you can simply create the same EFI partition on the rest of the member disks of the md array.
Then you can use efibootmgr to add all those elilo copies to the EFI boot menu; if the first member fails, the BIOS will try the next one.
The caveat is that every time you update the kernel, you have to repeat the elilo update process for the remaining member disks of the array, which can be tedious if you are using -current.
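
The efibootmgr part looks roughly like this (partition numbers and labels are just examples, assuming the ESP is partition 1 on each disk and elilo is installed the same way on each):
Code:

# register one boot entry per RAID member's ESP
efibootmgr -c -d /dev/sda -p 1 -L "Slackware (sda)" -l '\EFI\Slackware\elilo.efi'
efibootmgr -c -d /dev/sdb -p 1 -L "Slackware (sdb)" -l '\EFI\Slackware\elilo.efi'
# check the resulting entries and the boot order
efibootmgr -v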

Also, this is how my Proxmox handles the RAID-10 ZFS root of 4 drives:

Code:


Disk /dev/sda: 37.26 GiB, 40007761920 bytes, 78140160 sectors
Disk model: FUJITSU MHW2040B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 39C42BC2-739E-4D6D-BFA3-E62D3F8D069A

Device      Start      End  Sectors  Size Type
/dev/sda1      34    2047    2014 1007K BIOS boot
/dev/sda2    2048  1050623  1048576  512M EFI System
/dev/sda3  1050624 77594624 76544001 36.5G Solaris /usr & Apple ZFS


Disk /dev/sdb: 37.26 GiB, 40007761920 bytes, 78140160 sectors
Disk model: FUJITSU MHV2040B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 62BB249B-9E55-44F5-9F41-699A67310C8E

Device      Start      End  Sectors  Size Type
/dev/sdb1      34    2047    2014 1007K BIOS boot
/dev/sdb2    2048  1050623  1048576  512M EFI System
/dev/sdb3  1050624 77594624 76544001 36.5G Solaris /usr & Apple ZFS


Disk /dev/sdc: 37.26 GiB, 40007761920 bytes, 78140160 sectors
Disk model: FUJITSU MHW2040B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1AA01D45-137C-4072-AE45-21215CB4A6E7

Device      Start      End  Sectors  Size Type
/dev/sdc1      34    2047    2014 1007K BIOS boot
/dev/sdc2    2048  1050623  1048576  512M EFI System
/dev/sdc3  1050624 77594624 76544001 36.5G Solaris /usr & Apple ZFS


Disk /dev/sdd: 37.26 GiB, 40007761920 bytes, 78140160 sectors
Disk model: FUJITSU MHV2040B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EE67950B-3334-4BCA-B1EF-D40659E045F0

You can see that each member disk has its own EFI partition (sdX2) and the rest is managed by ZFS.
Every time I update the kernel, Proxmox updates every EFI partition.

mfoley 04-12-2024 08:24 PM

Thanks all for your help and suggestions. As it turned out, the Linux VM host does not need to have UEFI configured in the BIOS. Converting the Windows guest to UEFI, shutting down Windows, then setting the VirtualBox System settings to UEFI works. TPM does have to be set in the BIOS. I will keep the proposed ideas on getting hardware UEFI to work with mdadm; I may need that one day.

