Linux From Scratch
This forum is for the discussion of LFS.
LFS is a project that provides you with the steps necessary to build your own custom Linux system.
Summary:
A new build of CLFS, attempting to boot for the first time (using the host system's GRUB), results in: VFS: unable to mount root fs on unknown-block(0,0)
Background:
I've rebuilt my PC with a new motherboard, more memory, and an extra 750 GB hard drive. The processor is a dual-core AMD, and the motherboard is an ECS A785GM-AD3. I created a RAID 1 volume using two 750 GB drives, courtesy of the AMD SB785 chipset.
My plan was to install Windows, install my host Linux system, then build CLFS. Once I loaded the right AHCI drivers, I was able to get Windows 7 to see the RAID volume, and I installed Windows on the first 40 GB partition.
I then installed Fedora Core 14 (the gaming spin). Of course I did the customized partitioning. Fedora saw the RAID volume just fine. I used Fedora to carve up the remaining space including the CLFS partitions, and the media partition that would be shared between Windows and Linux.
I used the multi-lib development version of CLFS available online (Version GIT-20110130-x86_64-Multilib). Building all the packages went fine (once I installed all of the devel packages on Fedora). I also used the instructions in the beginning to format the volume with a compiled version of e2fsprogs, to keep Fedora's SELinux from adding attributes to the ext3 volume. I used the chroot method to continue building once the toolchain was done. When building the kernel, I selected all of the AHCI, SATA, and RAID drivers that looked pertinent to my hardware. (However, the section that lists specific SATA RAID drivers did not mention the AMD SB7xx series.) To make the system bootable, I chose to use the GRUB installed by Fedora rather than overwrite it with CLFS's GRUB. I've done that kind of setup before: in its prior incarnation, my system was Windows XP/Ubuntu/CLFS with Ubuntu managing the boot loader (on a single SATA disk instead of RAID).
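For reference, on an SB7xx board the on-chip SATA controller in AHCI/RAID mode is normally handled by the generic ahci driver, so the kernel options I believe matter here look something like this (a sketch only; exact symbol names vary by kernel version, and everything must be built in with =y, not =m, since there's no initramfs to load modules from):

```
CONFIG_ATA=y
CONFIG_SATA_AHCI=y        # AMD SB7xx SATA in AHCI/RAID mode
CONFIG_BLK_DEV_DM=y       # device-mapper, needed for dmraid volumes
CONFIG_EXT3_FS=y          # root filesystem
CONFIG_EXT3_FS_XATTR=y    # in case ext_attr is set on the volume
```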
Now, I knew that getting the root= line correct in GRUB would be tricky, but I'm having a heck of a time getting this to work. Fedora sees the CLFS disk as /dev/dm-6, and I knew that using root=/dev/dm-6 on the kernel line most likely would not work; Fedora uses UUIDs anyway. So I found the UUID of the volume that CLFS is on and used that in grub.conf. I am 99% sure that the first line, "root (0,5)", is correct, because if I use anything else there I get a GRUB error and the kernel doesn't even attempt to load. So my problem is in the root= parameter on the kernel line. I've tried specifying /dev/dm-6, the UUID, and the device-mapper ID. Each time, I made sure that the '/' mount point in fstab matched the root= on the kernel line of Fedora's grub.conf. Every time, I get the kernel panic with VFS unable to mount root fs.
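For the record, the shape of the stanza I'm editing in Fedora's legacy grub.conf is roughly this (the kernel filename and UUID are placeholders, not my actual values):

```
title CLFS
    root (hd0,5)
    kernel /boot/vmlinuz-clfs root=UUID=<uuid-of-clfs-partition> ro
```

One thing worth noting: as far as I can tell, a kernel booted without an initramfs cannot resolve root=UUID=... at all; that syntax is interpreted by userspace in an initramfs, so on a bare kernel only a real device name works. Likewise, /dev/dm-6 only exists after the userspace device-mapper tools run, which a bare kernel never does, so mounting root directly off a fake-RAID set without an initramfs may be a non-starter.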
I also decided to try building CLFS's GRUB 1.98 to see if that would work. GRUB compiled and installed fine, but running the grub-mkconfig command resulted in an error saying GRUB didn't think a root '/' device was mounted.
I've done a little research and found that what I thought was hardware RAID is actually "fake RAID" (not to be confused with software RAID). A very helpful site pointed me in the direction of the dmraid package. I found dmraid and device-mapper in CBLFS and used those instructions to build and install device-mapper, install the device-mapper boot scripts, and then build and install dmraid. I still get the VFS kernel panic when booting.
I'm guessing that Fedora uses device-mapper and dmraid (hence the "dm" volume designations in /dev), so I know this has to be possible. I just don't know what I'm missing. Any ideas out there?
Thanks for the link! I'll have to run through it tonight. I was wondering if I was going to have to use initramfs with the kernel to make this work.
Yesterday, I found that the AMD 710 south bridge RAID is based on the Promise RAID. So I recompiled my kernel with all the Promise SATA options selected; it still would not mount the root fs with the new kernel. I also ran the debugfs command on my CLFS volume just to make sure there was nothing out of the ordinary. It had the following extra items: has_journal and ext_attr. I expected to see has_journal because I formatted the volume with -j to make it ext3. But does -j also add the ext_attr option, or is that an artifact of the host system that may be preventing the CLFS kernel from mounting the root fs?
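As a sanity check on whether mke2fs alone accounts for those features, one can format a scratch file image (no real disk needed) and inspect it; a sketch, assuming e2fsprogs is installed on the host:

```shell
#!/bin/sh
# Create a small scratch image and format it ext3 (-j), the same way
# the CLFS volume was made, then list its feature flags.
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 2>/dev/null
mke2fs -q -j -F /tmp/scratch.img
tune2fs -l /tmp/scratch.img | grep 'Filesystem features'
# With a stock mke2fs.conf, the feature line includes both has_journal
# and ext_attr: ext_attr is part of mke2fs's default base_features, not
# something the host's SELinux added, so it should be harmless.
```

If ext_attr shows up on a freshly formatted image like this, it isn't the host's doing and almost certainly isn't what is blocking the root mount.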
If following the instructions in the hint does not work, I think I'm going to re-format the CLFS volume using the compiled e2fsprogs from CLFS to get rid of the ext_attr. Hopefully I can tar the entire volume beforehand so I don't have to start over from scratch. (Pun intended.)
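A sketch of the backup/restore round trip I have in mind (the /tmp paths here are stand-ins for the real volumes; -p preserves permissions and --numeric-owner keeps raw UIDs/GIDs rather than remapping them through the host's passwd file):

```shell
#!/bin/sh
# Back up a tree and restore it elsewhere, preserving modes and ownership.
mkdir -p /tmp/clfs-src/etc /tmp/clfs-restore
echo 'root=/dev/sda6' > /tmp/clfs-src/etc/demo.conf

# -c create, -p preserve permissions, -C change into the source first
tar -cpf /tmp/clfs-backup.tar --numeric-owner -C /tmp/clfs-src .

# ...re-format the volume here, then restore onto the fresh filesystem:
tar -xpf /tmp/clfs-backup.tar --numeric-owner -C /tmp/clfs-restore
```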
The hint seems to be a little out of date: none of the udev rules mentioned in the hint exist in the current udev package. I also wanted to use the latest mkinitramfs available in the CBLFS book, but I had trouble getting the script to see and use busybox correctly.
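If it comes to hand-rolling an initramfs instead of the book's mkinitramfs script, the image format itself is just a gzipped newc-format cpio archive of a staging directory; a minimal sketch (the /init here is only a placeholder, a real one would be busybox-based and run dmraid -ay before switching root):

```shell
#!/bin/sh
# Build a minimal initramfs image from a staging directory.
STAGE=/tmp/initramfs-stage
mkdir -p "$STAGE"/bin "$STAGE"/dev "$STAGE"/proc "$STAGE"/sys

cat > "$STAGE"/init <<'EOF'
#!/bin/sh
# placeholder init; a real one mounts /proc and /sys, runs dmraid -ay,
# mounts the real root, and switch_root's into it
EOF
chmod +x "$STAGE"/init

# The kernel expects a gzipped cpio archive in 'newc' format
( cd "$STAGE" && find . | cpio -o -H newc 2>/dev/null ) | gzip > /tmp/initrd.img
```

The resulting /tmp/initrd.img is what would go next to the kernel and be named on an initrd line in grub.conf.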
I decided to go a different route. I have a 160 GB IDE drive that was only being used for file backups. I created a partition on it and copied everything from my original CLFS volume to the new volume on the IDE drive. After a slight mod to fstab and grub.conf, it booted right up. I'll use the IDE drive for my CLFS root volume and put /opt on the RAID volume when I begin building CBLFS. This isn't exactly the configuration I had in mind, but at least it works.
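For anyone hitting the same wall, the working IDE-disk setup amounts to something like this (device names and partition numbers are examples only; depending on the kernel's IDE vs. libata drivers the disk may appear as /dev/hda or /dev/sda):

```
# /etc/fstab on the CLFS side
/dev/hda1   /    ext3   defaults   1 1

# matching stanza in Fedora's grub.conf
title CLFS (IDE)
    root (hd1,0)
    kernel /boot/vmlinuz-clfs root=/dev/hda1 ro
```

The key difference from the RAID attempt is that a plain IDE/SATA partition is a device the bare kernel can name directly in root=, with no userspace needed.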