Grub is not reading its configuration file
Changes to my /boot/grub/grub.conf file are not being shown in the grub menu on a reboot.
I installed a new kernel (3.1.6), and I can't boot into it because it's not in the grub menu. I added it to grub.conf as I always do when I install a new kernel, but it isn't shown as an option. My old options are still there, and they work as usual, but I can't change anything. For example, if I rename an existing entry in grub.conf, that change isn't reflected in the grub menu either (e.g., if I change "Windows XP Pro" to "TEST", it still shows up in grub as "Windows XP Pro"). Somehow grub is loading options for its menu, but it's definitely not getting them from the grub.conf file.

Perhaps more distressing is the fact that, if I manually change a kernel line to boot the new kernel, grub claims the file isn't there (it is... I installed it the same way I installed every other kernel). I assume that's a symptom of the same problem.

Anyone know why this could be happening? For your reference:

Code:
$ ls -l /boot

Code:
$ cat /boot/grub/grub.conf |
What is the output of:
Code:
emerge -pv grub |
Did you use genkernel? Or was the kernel manually configured?
|
The symptoms suggest GRUB is using an old menu.lst.

AIUI (I'm not certain, and this is about legacy GRUB, not GRUB2): when GRUB is installed to the MBR, stage 1.5 is written into the space after the partition table, and it embeds a fixed offset to the partition whose file system contains the GRUB directory, /boot/grub. If that location has changed and GRUB has not been re-installed, GRUB will keep loading its menu from the old /boot/grub (which could now be empty space on the HDD). GRUB can be re-installed using grub-install. |
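For example (a sketch, not a guaranteed fix: /dev/sda is an assumption, substitute the drive your BIOS actually boots from):

Code:
# grub-install --recheck /dev/sda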
If you have more than one Linux distro installed, make sure you are updating grub in the right one. Otherwise, your changes will be ignored.
|
I figured out what was happening, but I don't quite understand the behavior.
My /boot partition is a software RAID1 array. When I took a look at cat /proc/mdstat (mostly because I was in the "wild guessing" phase of diagnosis), only one of the two drives was listed in /dev/md1. It wasn't that one device had failed... it just wasn't there at all. I have no idea what happened. So I had to do:

Code:
# mdadm --add /dev/md1 /dev/sda2

So my questions now are as follows:

- Why would a drive just get booted out of a RAID array, and is there something I should have been doing to keep an eye on it?
- Why would that cause grub to not read the contents of a file which, while on a broken array, still held the correct information?
- Was it just trying to run grub from /dev/sda, which wasn't being written to because it wasn't in the array? If so, say that /dev/sda actually DOES fail at some point. The whole reason I made /boot a RAID1 array was so that I'd still be able to boot if that happened. If it never falls back on /dev/sdb, there's no point.

Thanks! |
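On the first question: mdadm itself can watch arrays for you (mdadm --monitor --scan, usually run as a daemon that mails root on failure events). As a rough illustration of what such a check looks at, here is a minimal hand-rolled sketch that inspects an mdstat-format file for a degraded status field; check_md_degraded is a hypothetical helper name, not a standard tool.

```shell
# Sketch: a healthy two-disk RAID1 shows "[UU]" in /proc/mdstat,
# while a degraded one shows "[_U]" or "[U_]".
check_md_degraded() {
    # $1: path to an mdstat-format file (normally /proc/mdstat)
    if grep -q '\[U*_[U_]*\]' "$1" 2>/dev/null; then
        echo "degraded"
    else
        echo "healthy"
    fi
}

# On a live system you would run:
#   check_md_degraded /proc/mdstat
# The supported approach is mdadm's own monitor mode, e.g.:
#   mdadm --monitor --scan --daemonise --mail=root
```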