[SOLVED] kernel panic - custom kernel won't mount file system
I am working on a project where I need to use a 2.6.x kernel, so I installed a fresh copy of FC6, which works fine. I downloaded the 2.6.x kernel source and compiled it on my ThinkPad X60, and everything installs fine. When I boot into the 2.6.x kernel I get a kernel panic. Here is the relevant part of the output:
Scanning logical volumes
Reading all physical volumes. This may take a while...
No volume groups found
Activating logical volumes
Volume group "VolGroup00" not found
Trying to resume from /dev/VolGroup00/LogVol01
Unable to access resume device (/dev/VolGroup00/LogVol01)
Creating root device.
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
I have read many forums and posts, and this issue often seems to occur when there have been changes to the hardware the kernel is running on, or when the system is using a RAID configuration. Neither is the case in my situation; I compiled the kernel on the same hardware. Here are the steps I took for the 2.6.x kernel compilation (as root):
tar xjf linux-2.6.x.tar.bz2
cp /boot/config-2.6.18-1.2798.fc6 .config
make menuconfig (appended the -custom string to the local version)
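After menuconfig, the rest of the build is the usual sequence (a sketch of what I ran, as root, from the source tree):

```shell
make                  # or 'make all'; builds the bzImage and all modules
make modules_install  # installs modules under /lib/modules/2.6.x-custom/
make install          # copies the kernel to /boot and, via /sbin/installkernel
                      # -> new-kernel-pkg, should also build an initrd and
                      # add a grub entry
```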
I checked that /boot/grub/grub.conf and /etc/fstab match up. Here are the entries:
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
title Fedora Core (2.6.x-custom)
kernel /vmlinuz-2.6.x-custom ro root=/dev/VolGroup00/LogVol00 rhgb
title Fedora Core (2.6.18-1.2798.fc6)
kernel /vmlinuz-2.6.18-1.2798.fc6 ro root=/dev/VolGroup00/LogVol00 rhgb
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
I tried adding selinux=0 and enforcing=0 to the grub entry but get the same results with the kernel panic.
I have been stuck on this for what seems like too long so any help would be much appreciated. Thanks in advance.
Did you create the new initrd?
Thanks for the link...I do agree that the problem seems to be with the initrd.
So after reading the info provided and some other pages on initrd I tried the following:
1) I double-checked that SCSI support was compiled as a module, and also that Device Drivers -> Block Devices -> Loopback Device Support was set to build as a module (in menuconfig)
2) I ran mkinitrd initrd-2.6.x_custom.img 2.6.x and updated the grub entry
3) A friend suggested passing --with=scsi_mod to mkinitrd, and separately trying --omit-scsi-modules
4) Then I recopied the .config file and recompiled and re-installed for good measure
5) Then I unpacked the working initrd image for the 2.6.18 kernel using
gunzip < initrd-2.6.18.img | cpio -i -d
and also unpacked the initrd image for the non-working 2.6.x kernel in a separate temp directory and compared the init scripts at the root of both ramdisks.
I found that the 2.6.x init isn't loading scsi_mod, sd_mod, libata.ko, and ahci.ko, so I copied these into the initrd and packed it back up using:
find ./ | cpio -H newc -o > initrd.cpio
gzip initrd.cpio
mv initrd.cpio.gz initrd_custom.img
cp initrd_custom.img /boot/
Then I updated the grub entry to use the new initrd.
So far nothing has worked yet...still get the "No volume groups found" and then the kernel panic when switchroot tries to mount the file system.
comprookie2000: I was a bit confused about whether you were suggesting that I build my own initrd from the ground up. I didn't try that, because there is already a working initrd for the 2.6.18 kernel, so I don't see why the mkinitrd scripts shouldn't be able to build one that actually works.
Please do post more ideas if anyone has them.
I have only done it this way;
I compile everything needed to boot and mount the file systems directly into the kernel (marked "*" rather than "M" in menuconfig).
Sorry I can't be of much help with Fedora, but I am sure someone who runs it can help you out.
So I realized that even though I had added the insmod commands to the init script in the initrd before zipping it up, I hadn't copied the modules into the lib directory. So I copied them in, re-zipped the image, and tried rebooting, with no improvement to the situation...still the "No volume groups found" and then a kernel panic.
So the only difference now between the working kernel's initrd and the non-working kernel's initrd is that the latter is missing libata.ko and ahci.ko, neither of which seems to be in the /lib/modules/2.6.x/kernel/drivers/scsi/ directory at all. So I am going to figure out which boxes to check to get these modules compiled, then add them to the initrd and give that a try...I hope it works (fingers crossed).
OK, I seem to have it booting now (comprookie2000, thanks for the help).
For those of you out there with the same problem: the issue was, first of all, that my laptop needed support for SCSI and SATA, so the first thing to check is that you have compiled in support for your particular disk controller. What I did was compile all the SCSI and SATA drivers as modules. This might solve your problem right off the bat, and running the usual make all, make modules_install, and make install could fix it up.
So then what I did was compare the initrd ram disks. You can unzip them in a temporary directory and compare the init scripts at the root to see which modules are being loaded. I was, of course, missing the SCSI and SATA modules, so I added insmod commands to load them to the init script and also copied the modules into the ./lib/ folder. Then I zipped the ram disk back up, renamed it initrd_custom.img, copied it to /boot/, and updated the grub.conf entry to point to initrd_custom.img. Then everything booted up fine. One thing to note: I went through many iterations of zipping the ram disk and left the old images in the same folder, so over time the image grew pretty large, and eventually the boot crashed with an error saying the ram disk was too large before I realized what I had done...so remove the old image files before zipping up the ram disk.
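For reference, the resulting grub.conf stanza would look something like this (the version string is a placeholder, and paths are relative to /boot per the header comment in grub.conf):

```
title Fedora Core (2.6.x-custom)
        root (hd0,0)
        kernel /vmlinuz-2.6.x-custom ro root=/dev/VolGroup00/LogVol00 rhgb
        initrd /initrd_custom.img
```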
Many thanks to comprookie2000; you got me moving in the right direction and your suggestions were right on. I really wasn't expecting a reply at all, because most of the other forum threads about kernel panics don't seem to get many replies.
Glad you solved it. What I don't understand is, why do you get an error from the LVM system? If you'd care to elaborate I'd be grateful.
I am in no way an expert, but my guess is that the init script in the initrd ram disk loads the modules right before the call to the LVM tools, and by the point I get the kernel panic the output from the module loading has scrolled off the screen. The LVM tools assume the earlier parts of the script have taken care of loading the necessary drivers for the hard drive; since the modules weren't loaded, the drive couldn't be accessed, so LVM cannot find the logical volumes.
At this point in the boot process, I believe the BIOS and GRUB have found the initrd ram disk and loaded it into RAM. The ram disk is then uncompressed and used as a temporary root file system, from which the basic modules are loaded (SATA and SCSI in my case). Once LVM has done its job, the real file system is mounted ("Setting up other filesystems. Setting up new root fs"), and then the temporary file system is torn down (unmounting old...).
So, long story short, I think the real error happens before LVM runs, and LVM just assumes everything prior has gone fine.
On a side note, another friend gave me more info:
I believe the problem stems either from mkinitrd not properly realizing that certain modules are currently loaded when you build, or from those modules actually not being loaded. My approach was to run mkinitrd manually, adding --with=scsi --with=scsi_md --with=sata_XX (whichever SATA driver I needed) to make sure it included the appropriate modules. If you want to see how mkinitrd is called, you have to trace it back to /sbin/new-kernel-pkg, which is called from /sbin/installkernel, which is itself called from arch/i386/install.sh (in the kernel source tree).
Another way to avoid building the initrd by hand is to add the requisite drivers to /etc/modprobe.conf and run depmod; I think mkinitrd should then find them automatically.
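Combining both suggestions, a sketch of what the manual invocations would look like; the module names (ahci for this ThinkPad X60) and the 2.6.x-custom version string are examples, so substitute your own:

```shell
# Option 1: force the storage stack into the image explicitly
mkinitrd --with=scsi_mod --with=sd_mod --with=libata --with=ahci \
    /boot/initrd-2.6.x-custom.img 2.6.x-custom

# Option 2: declare the controller driver in /etc/modprobe.conf so that
# mkinitrd discovers it on its own, then rebuild module deps and the image
echo 'alias scsi_hostadapter ahci' >> /etc/modprobe.conf
depmod -a 2.6.x-custom
mkinitrd /boot/initrd-2.6.x-custom.img 2.6.x-custom
```

Either way, remember to point the grub entry at the new image afterwards.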
Okay, thanks. Obviously LVM does no testing of the exit conditions of the previous processes -- though that might be a little difficult, what with the threading and parallelizing of the startup process.
Anyway, thanks for your answer.