Linux - Software: This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
I have a multiboot, all-Linux system with a variety of OS issues, so I am trying to consolidate to ONE OS without losing my data.
One of the issues is my inability to consistently run GRUB "ubuntu" menu.
Here are my assumptions / observations / opinions:
There should be only one GRUB and it is a result of UEFI "boot process".
I do not want to monkey with my UEFI; been there, done that. When it works, it works without manual interaction, as expected.
Hence my UEFI "boot" list contains three "ubuntu" entries, and ONE of them usually works as expected.
The "problem" is - when my system fails to use ANY of these "ubuntu" entries, it "boots" directly into GRUB "editing" - not into the "ubuntu" menu.
Here is the title / header of this GRUB:
GNU GRUB version 2.02~beta2-36ubuntu3.29
Then it says that "minimal BASH-like line editing" is enabled.
I have no idea what to edit, so I just type "reboot" and hope to get the "normal" working "ubuntu" GRUB menu.
I do not even know where this file is or how to read its contents.
It is my understanding that the UEFI boot process will attempt to run another GRUB boot entry AFTER one fails. I really do not want to do anything manually to make UEFI run the GRUB menu.
My questions are:
- What constitutes an "ubuntu" UEFI GRUB (menu) failure?
- What can I do to recover / edit the actual "GRUB BASH-like file" to make it boot?
Keep in mind my objective; at present I do not have sufficient hardware to build a clean Ubuntu OS. I would just like to be able to reliably get into the UEFI-built "ubuntu" GRUB menu.
GRUB is a tangled nightmare and I'm not surprised you are having problems with it, especially on a multiboot system. Ideally you should install, manage and update GRUB out of one OS only, but that isn't always possible.
The way GRUB works on a UEFI system is basically as follows:
1) The UEFI boots grubx64.efi, which is located on the EFI system partition.
2) GRUB goes to the partition it has been given as its "root". This should be the root partition of the OS from which it was installed. You can find out what it is by looking in /etc/default/grub on that system. That's a very useful file btw as it contains a lot of GRUB parameters.
3) On that partition, there should be a directory called /boot/grub, where GRUB expects to find its loadable modules. GRUB is basically a miniaturised Linux kernel and loads modules in the same way as the kernel does.
4) GRUB loads its shell module. If it can't find it, it gives the "GRUB rescue>" prompt and halts.
5) Having loaded the shell, it looks for and reads grub.cfg, which is a file written in GRUB's shell language, vaguely similar to bash. By obeying the commands in this file, GRUB can produce a menu for the user. If it can't find a valid grub.cfg file, it gives the "GRUB>" prompt and halts.
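To make step 5 concrete, here is a minimal, hypothetical menu entry in GRUB's shell language; the UUID, partition and kernel version below are invented for illustration, not taken from the OP's system:

```
menuentry 'Ubuntu' {
    # find the partition whose filesystem UUID matches (invented UUID)
    search --no-floppy --fs-uuid --set=root 0aa1b2c3-5d6e-4f70-8a9b-0c1d2e3f4a5b
    # load a kernel and initramfs from that partition, then boot
    linux /boot/vmlinuz-5.4.0-99-generic root=UUID=0aa1b2c3-5d6e-4f70-8a9b-0c1d2e3f4a5b ro quiet splash
    initrd /boot/initrd.img-5.4.0-99-generic
}
```

The real grub.cfg generated by update-grub is much longer, but each bootable entry reduces to this shape: locate the root filesystem, load the kernel, load the initramfs.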
So it looks as if something is wrong with your grub.cfg. You could try fixing it by running the update-grub command from the Ubuntu you are using as your GRUB management system, which should renew the file.
There are also specific commands which you can enter in response to the GRUB prompt to choose a partition, list the boot directory on it and choose a kernel to boot. I can't remember them offhand but they are in the GRUB manual.
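For reference, a hand-boot at the grub> prompt typically looks roughly like this; the partition, device node and kernel names here are assumptions, and ls is how you discover the real ones:

```
grub> ls                          # list the disks and partitions GRUB can see
grub> ls (hd0,gpt8)/boot          # inspect a candidate partition's /boot directory
grub> set root=(hd0,gpt8)         # pick the partition holding the kernel
grub> linux /boot/vmlinuz-5.4.0-99-generic root=/dev/sda8 ro
grub> initrd /boot/initrd.img-5.4.0-99-generic
grub> boot
```

Once the system is up, run update-grub (and grub-install if needed) so you don't have to do this by hand next time.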
Quote:
There should be only one GRUB and it is a result of UEFI "boot process".
Each Linux OS you have installed likely has GRUB installed, but only one can be first, and that is the one set first in your BIOS firmware. Run the following command in Ubuntu to get information on what these entries are and post the output here: sudo efibootmgr
This will output the entries you have and show the boot order. You should be able to change the order with efibootmgr. If that fails, you will have to do it in the BIOS firmware.
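As a sketch of what to look for, here is how you might pick the "ubuntu" entries out of saved efibootmgr output; the dump below is invented, not from the OP's machine:

```shell
# Invented sample of `sudo efibootmgr` output, saved to a file
cat > /tmp/efibootmgr.out <<'EOF'
BootCurrent: 000F
Timeout: 1 seconds
BootOrder: 000F,0000,0001,0002
Boot0000* ubuntu
Boot0001* ubuntu
Boot0002* Hard Drive
Boot000F* ubuntu
EOF

# Count the entries labelled "ubuntu" -- in this sample there are three
grep -cE '^Boot[0-9A-F]{4}\* ubuntu' /tmp/efibootmgr.out

# On a real system you could then reorder:   sudo efibootmgr -o 000F,0000,0001,0002
# or try a single entry for one boot only:   sudo efibootmgr -n 0001
```

The -n (BootNext) option is handy here: it boots the chosen entry once without changing the permanent order, so a wrong guess costs you only one reboot.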
What do you mean by one of the ubuntu entries usually works as expected?
Have you tried booting from the BIOS firmware and selecting the ubuntu entries to see which works? Make a note of the boot entry number that works, usually something like: Boot0001.
Quote:
BUT the "boot order" and "BootCurrent" appear to be correct - using "ubuntu" "F".
How did you arrive at that conclusion? Check the boot order again.
"efibootmgr -v" will give you an idea of which entry is which - this is what happens when you install multiple Ubuntu derivatives, say Mint. Boot into your chosen system and run update-grub as suggested above. This will (normally) update the default entry (first in BootOrder); else you can use efibootmgr to effect the same. See the manpage.
Edit: update-grub won't do it, you'd need grub-install to update the EFI variables - use efibootmgr.
Quote:
Ideally you should install, manage and update GRUB out of one OS only, but that isn't always possible.
Once understood, it is easy to get close enough....
Quote:
by looking in /etc/default/grub on that system. That's a very useful file btw as it contains a lot of GRUB parameters.
In multiboot, of particular importance is the GRUB_DISTRIBUTOR= value. Each OS needs a value unique to it. Several operating systems use a value that results in "ubuntu". That's a big cause of such trouble as the OP describes.
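A sketch of that fix, demonstrated on a throwaway copy so it is safe to run anywhere; the sample file contents and the replacement string "ubuntu2004" are assumptions. On a real system you would edit /etc/default/grub in each OS and then re-run its update-grub / grub-install:

```shell
# Invented sample of an Ubuntu-family /etc/default/grub
cat > /tmp/grub.default <<'EOF'
GRUB_DEFAULT=0
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
EOF

# Replace the distributor with a literal string unique to this install
sed -i 's/^GRUB_DISTRIBUTOR=.*/GRUB_DISTRIBUTOR="ubuntu2004"/' /tmp/grub.default
grep '^GRUB_DISTRIBUTOR' /tmp/grub.default
```

The default backtick expression is what makes every Ubuntu derivative come out as "ubuntu"; a hard-coded unique string avoids the collision.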
Quote:
4) GRUB loads its shell module. If it can't find it, it gives the "GRUB rescue>" prompt and halts.
5) Having loaded the shell, it looks for and reads grub.cfg, which is a file written in GRUB's shell language, vaguely similar to bash. By obeying the commands in this file, GRUB can produce a menu for the user. If it can't find a valid grub.cfg file, it gives the "GRUB>" prompt and halts.
It's a shell. If it didn't find grub.cfg, it's waiting for someone to give it commands, just like bash, or an X terminal. It's not the same kind of halt as from a kernel oops or segfault. If it does find a valid and relevant grub.cfg, it executes commands in it based upon which menu selection you make or edit, and may time out and use a default selection to attempt to boot.
Quote:
There are also specific commands which you can enter in response to the GRUB prompt to choose a partition, list the boot directory on it and choose a kernel to boot. I can't remember them offhand but they are in the GRUB manual.
They're also in grub.cfg, and in the shell's built-in help. If you have an accessible backup of it somewhere, you can type the relevant (minimal) portions of it at the prompt to proceed to boot.
Back to "close enough": After the first installation, for each additional installation, try to not install any bootloader. If successful, you can then boot the system controlling the original bootloader and have it automatically add the new installation to its menu by updating Grub without GRUB_DISABLE_OS_PROBER="true" present in /etc/default/grub. If you can't manage to avoid installing an additional bootloader, then a few steps booted to the new installation should return control to the original:
edit /etc/fstab to eliminate mounting of the ESP partition at /boot/efi/. It really doesn't need to be mounted at all. If you want it mounted, mount it somewhere else. If a distro's GRUB management system cannot find the ESP partition, it can't mess up the operation of whichever distro you chose to manage it.
edit /etc/default/grub to make GRUB_DISTRIBUTOR= a unique string, e.g. ubuntu2004, mint20, mageia8, suse152 or rawhide.
use efibootmgr or the BIOS to restore boot priority to the original Grub that you wish to remain in control.
optionally: uninstall all the bootloader packages for the additional distro.
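A sketch of the fstab step above, done on a throwaway copy so it is harmless to run; both UUIDs below are invented:

```shell
# Invented sample /etc/fstab with the ESP mounted at /boot/efi
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-ABCD                             /boot/efi vfat umask=0077         0 1
UUID=0aa1b2c3-5d6e-4f70-8a9b-0c1d2e3f4a5b  /         ext4 errors=remount-ro  0 1
EOF

# Comment out the /boot/efi line so this distro stops mounting the ESP
sed -i '\%[[:space:]]/boot/efi[[:space:]]%s/^/#/' /tmp/fstab.sample
grep '/boot/efi' /tmp/fstab.sample
```

On the real file you would run the sed (or your editor) as root, then confirm with mount that /boot/efi stays unmounted after the next reboot.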
A BIOS that adds a new entry every time you boot from external media will eventually exhaust data space in the BIOS. Use efibootmgr to delete all these, and check to see if the PC or motherboard maker has a BIOS upgrade available. If it does, install it. The ubuntu threesome there is likely the result of what I described in my previous reply about GRUB_DISTRIBUTOR=.
Quote:
I assume I have multiple efibootmgr ?
Each OS that installs Grub-efi will have its own efibootmgr, but all work the same AFAICT.
Quote:
How do I identify each one of these "ubuntu" "files ??"
So if I have multiple "ubuntu" how do I know "who is on first"?
Fix all the GRUB_DISTRIBUTOR=s and get their Grubs reinstalled. Then proceed with my previously described procedure.
Quote:
It is pretty simple for a single OS; the "problem" is multiboot.
Even in multiboot it's simpler than using Grub with legacy BIOS booting once you get the hang of it.
Quote:
So if I have multiple "ubuntu" how do I know "who is on first"?
From a terminal, run this command: sudo fdisk -l
This will show the drives and partitions. One of them should show as EFI System; make a note of that device partition. Create a mount point using something like: sudo mkdir /mnt/efi
Then mount it using the /dev/ partition you got for EFI from the fdisk command, for example: sudo mount /dev/sda1 /mnt/efi
You would need to replace 'sda1' in the above command with the actual EFI partition from fdisk. In the EFI directory, you should see a sub-directory named ubuntu, and in that directory there will be a grub.cfg file. The first line of that file contains a search.fs_uuid command, and the 32 hex characters after fs_uuid are the UUID of the root partition of the primary Ubuntu installation.
Run sudo blkid from Ubuntu to determine which device has the UUID from grub.cfg. As pointed out above, the various Ubuntu distributions, when installed on the same computer, will overwrite this info in EFI/ubuntu, but they don't change the UEFI entries in the BIOS firmware, which is why you have several entries for ubuntu there.
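A hypothetical example of that stub grub.cfg and how to pull the UUID out of it; the UUID and partition below are invented:

```shell
# Invented example of the stub EFI/ubuntu/grub.cfg found on the ESP
cat > /tmp/esp-grub.cfg <<'EOF'
search.fs_uuid 0aa1b2c3-5d6e-4f70-8a9b-0c1d2e3f4a5b root hd0,gpt8
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
EOF

# Extract the UUID that follows fs_uuid
uuid=$(awk '/^search.fs_uuid/ {print $2}' /tmp/esp-grub.cfg)
echo "$uuid"

# On a real system:  sudo blkid | grep "$uuid"
# shows which device/partition that UUID belongs to
```

Whichever partition blkid matches is the install whose /boot/grub/grub.cfg the "ubuntu" entry actually hands control to.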
Quote:
Boot000F* ubuntu
If the above entry always works for Ubuntu, that's the one you want. You should be able to use efibootmgr both to change the boot order in the UEFI settings and to delete unused entries. If that fails, you can do it in the BIOS UEFI firmware settings.
The link below has a pretty detailed explanation on doing these things using efibootmgr.
Not sure why you have all those Seagate entries in the boot order. I see similar when and only when I have an external Seagate drive attached.
Before I read the last post I got brave / stupid and moved the working "ubuntu", the third one, from 3rd place to 1st place in my UEFI setup "boot sequence".
The result - now UEFI "runs" the normal "grub" menu, and after I selected 'sdc8' my system boots normally.
By normally I mean it will open the "grub" menu and, on timeout, run the option which was last selected and run.
I think I screwed up and selected, again in the UEFI "boot sequence", the wrong "grub", but as far as the UEFI setup was concerned it was OK, and that is why I kept getting into "grub" with an option to edit it instead of running it.
This setup will do for a while until I get another drive and clean things up.
I would like to get some comments on the following:
As a user, the first thing I see is the ASRock splash screen / logo with options to get into UEFI setup, select a boot device, or let UEFI run through the "boot devices" and, in my case, select the working "ubuntu" (I still do not know what to call this file - name / function?).
The next thing the user sees is the "grub menu". On a messy system like mine, I cannot tell from the menu which one of the EFI files ("ubuntu" on which device / partition) is being run. Ideally, no matter how many OSs are on the system, there should be only ONE EFI partition being run.
I am still unclear how the UEFI setup obtains all of my "boot" partitions; however, that seems to be what caused this mess in the first place.
Yes, that is where all those "seagate" entries come from - there is a definite conflict between the normal EFI boot partition and OS partitions on the SAME /dev/x.
The "problem" is - if I select any of these drives - UEFI is happy and will pick them up on next boot -- and this mess starts over again.
Quote:
As a user, the first thing I see is the ASRock splash screen / logo with options to get into UEFI setup, select a boot device, or let UEFI run through the "boot devices" and, in my case, select the working "ubuntu" (I still do not know what to call this file - name / function?).
I can only guess that your ASRock UEFI BIOS is confused by multiple ubuntu entries. I've not had this happen to me since many moons ago I learned to use GRUB_DISTRIBUTOR= to make each entry in the ESP partition unique.
Quote:
The next thing the user sees is the "grub menu". On a messy system like mine, I cannot tell from the menu which one of the EFI files ("ubuntu" on which device / partition) is being run. Ideally, no matter how many OSs are on the system, there should be only ONE EFI partition being run.
Of course there should, but because each OS thinks it's to be the only OS, or at least the one in control, it takes some work to get there. Comparing the UUID in /etc/fstab for the running system to those in the efibootmgr -v output should identify the one used for the current boot.
Quote:
I am still unclear how the UEFI setup obtains all of my "boot" partitions; however, that seems to be what caused this mess in the first place.
The UEFI BIOS scans partition tables each boot, and stuffs new bootables into its memory.
Quote:
Yes, that is where all those "seagate" entries come from - there is a definite conflict between the normal EFI boot partition and OS partitions on the SAME /dev/x.
If they are USB connected, turning their power off or otherwise disconnecting them before booting will keep the UEFI BIOS scan at POST from finding them.
Quote:
The "problem" is - if I select any of these drives - UEFI is happy and will pick them up on next boot -- and this mess starts over again.
Using a unique GRUB_DISTRIBUTOR= for every installation will mitigate this, except for the first boot of a new installation that didn't allow you to make its GRUB_DISTRIBUTOR= unique before the installation process completed.
What follows you might consider using as a template for setting yours up. Note I've removed all but one of the entries from the ESP partition. In addition, I've removed from all but one installation its entry in /etc/fstab for mounting the ESP partition, and on some, I've uninstalled Grub. Thus, there's only one configured in the UEFI BIOS, and only one able to manipulate the content of the ESP partition, making that one the one in control.
Code:
# inxi -My
Machine:
Type: Desktop System: ASUS product: All Series v: N/A serial: N/A
Mobo: ASUSTeK model: B85M-E v: Rev X.0x serial: xxxxxxxx
UEFI: American Megatrends v: 3602 date: 04/04/2018
# efibootmgr
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0003,0002,0001
Boot0000* opensusetw
Boot0001* Hard Drive
Boot0002* UEFI OS
Boot0003* CD/DVD Drive
# ls -Gg /boot/efi/EFI/
total 8
drwxr-xr-x 2 4096 Apr 26 2020 BOOT
drwxr-xr-x 2 4096 Jun 21 2018 opensusetw
# parted -l
Model: ATA TEAM T253X2256G (scsi)
Disk /dev/sda: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 337MB 336MB fat32 TG1P01 EFI System (ESP) T253X 2295 boot, esp
2 337MB 1885MB 1549MB linux-swap(v1) TG1P02 Linux Swap swap
3 1885MB 2305MB 419MB ext2 TG1P03 Linux reservation
4 2305MB 6499MB 4194MB ext4 TG1P04 /usr/local
5 6499MB 13.2GB 6711MB ext4 TG1P05 /home
6 13.2GB 26.4GB 13.2GB ext4 TG1P06 /pub
7 26.4GB 34.8GB 8389MB ext4 TG1P07 openSUSE Tumbleweed
8 34.8GB 43.2GB 8389MB ext4 TG1P08 openSUSE 152
9 43.2GB 51.6GB 8389MB ext4 TG1P09 Debian 11 Bullseye
10 51.6GB 60.0GB 8389MB ext4 TG1P10 Mageia 8
11 60.0GB 68.4GB 8389MB ext4 TG1P11 Tubuntu 2004 Focal
12 68.4GB 76.8GB 8389MB ext4 TG1P12 openSUSE 153
13 76.8GB 85.1GB 8389MB ext4 TG1P13 Fedora 33
14 85.1GB 93.5GB 8389MB ext4 TG1P14 Fedora 34
15 93.5GB 102GB 8389MB ext4 TG1P15 Linux Mint
16 102GB 110GB 8389MB ext4 New: Linux Data
17 253GB 256GB 3146MB ext3 TG1 fedovar
Boot into the OS you want to control the GRUB bootloader:
Code:
sudo grub-install
sudo update-grub
Without more, that can be expected to be undone when next a new kernel is installed during ordinary updates or upgrade by one of the other installations.
Quote:
Without more, that can be expected to be undone when next a new kernel is installed during ordinary updates or upgrade by one of the other installations.
Yes, good point. I have noticed that happening at the end of a new install of most packages.
It makes it a little harder to identify the guilty party when things do not go as expected.
Especially when "new UNKNOWN / UNPROVEN software" is being installed.