Linux From Scratch
This forum is for the discussion of LFS.
LFS is a project that provides you with the steps necessary to build your own custom Linux system.
Built my LFS system following the latest book (6.8). Followed along and everything compiled fine. When I got to the part where you boot into the new system, it wouldn't boot; I get the error "Cannot open root device /dev/sdf4 or unknown block(0,0)".
The system will, however, boot using the existing Ubuntu kernel that I had. That grub entry uses an initrd; when I took that line out of grub.cfg, it gave me the same error as above. Does this mean I need to create an initrd for my LFS kernel? I'm not even sure what I would be using it for.
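For reference, the LFS entry in my grub.cfg (with the initrd line removed) looks roughly like this - the kernel file name and root device here are placeholders, not necessarily what anyone else should use:

    menuentry "GNU/Linux, LFS 6.8" {
            linux /boot/vmlinuz-2.6.37-lfs-6.8 root=/dev/sdf4 ro
    }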
are you SURE it's /dev/sdf? How many hard drives DO you have?
/dev/sda = first hard drive
/dev/sdb = second hard drive
/dev/sdc = third hard drive
...
/dev/sdf = sixth hard drive
do you really have SIX hard drives in your unit?
post the output of 'fdisk -l' (must be run as root)
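for example, the output looks something like this (hypothetical single-drive machine - yours will list every disk it finds):

    # fdisk -l
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    ...
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          64      512000   83  Linux
    /dev/sda2              64       60801   487873536   83  Linux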
Just because your host system sees it as sdf doesn't mean that your LFS system will call it that. Try different letters (sda, sdb and so on). Did you compile your kernel to use libata or the older IDE driver? The old IDE driver called the devices hdX rather than sdX, so it could be that which is the problem. Are you sure that you compiled your kernel with support for your motherboard's chipset and the filesystem on the partition? Don't mess about using your Ubuntu kernel and initrd; concentrate on compiling a kernel that boots. Make it a monolithic blob with no modules to avoid problems.
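As a rough sketch, the .config entries you'd want built in (=y, not =m) for a typical SATA box with an ext3 root are something like the following - the chipset driver (AHCI here) is an assumption, pick the one for your board:

    CONFIG_ATA=y           # libata core
    CONFIG_SATA_AHCI=y     # or your chipset's driver, e.g. CONFIG_SATA_NV
    CONFIG_BLK_DEV_SD=y    # SCSI disk support - gives you the sdX devices
    CONFIG_EXT3_FS=y       # the filesystem on your root partition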
Thanks! I'll give that a shot tomorrow! Should I completely wipe the drive? Should I use a livecd other than Ubuntu? Maybe it doesn't matter as long as I don't use the Ubuntu installer's partition editor?
Just using a format to ext3 worked for me. I have all the partitions mentioned in the book: /, /home, /opt, /boot and so on.
So I needed to format each one and copy everything back. If you use the method in chapter 2.3 - using the e2fsprogs built in a temporary directory - that should work, I think.
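If it helps, with the temporary e2fsprogs build, formatting a partition as ext3 is just (substitute your own partition for /dev/sdf4):

    mke2fs -jv /dev/sdf4

The -j is what makes it ext3 (journalled) rather than plain ext2.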
Still not working, but I'm not done trying. I was using an Ubuntu live CD to try to rebuild, but it still called the drives sdf and sdf (I broke the mirror). Now I'll try building with the LFS live CD, which calls the drives sda and sdb, and see if it comes out any different.
Old thread, but I've had this error and managed to resolve it with the lfs-initramfs from the Root_FS_on_RAID+encryption+LVM hint (I can't link to it, but it's the only RAID hint I've found).
The original problem was that udev wasn't populating any device nodes for my hard drives (identified as sd{a,b,c,d}) - my RAID is based on partitions from all three drives - sdX(0,1,2). Thanks to the initramfs' capacity to provide a shell, I managed to boot into sh and take a peek around, but very little was available to help. All my programs were on the LVM /usr partition, which is obviously on the RAID it can't find, so I was a little stuck.
Firing back into the host distro, I fiddled with the initramfs options and init script to pull in mdadm's /etc/mdadm.conf (which I copied across from the host Debian live distro - it called my devices the same thing), and also made sure mdadm's "HOMEHOST" value matched the one I built the RAID with (visible from mdadm -D /dev/mdX - it's the name of your RAID before the ":", e.g. Mainframe:0 for mine).
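A minimal mdadm.conf for the initramfs, assuming a single array, would look something like this - the UUID is a placeholder and the HOMEHOST/name values are from my setup:

    HOMEHOST Mainframe
    ARRAY /dev/md0 metadata=1.2 name=Mainframe:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

You can generate the ARRAY line from a running system with 'mdadm --detail --scan'.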
I also figured out that my kernel didn't have the right driver. Probing the host system revealed that the SATA controller was using the "sata_nv" driver module - nVidia chipset - so I rebuilt with that as built-in rather than excluded (derp moment).
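For anyone chasing the same thing, you can see which driver the host is using for its SATA controller with something like:

    lspci -k | grep -i -A2 sata

which on my board prints roughly (illustrative output):

    00:05.0 SATA controller: nVidia Corporation MCP61 SATA Controller
            Kernel driver in use: sata_nv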
This finally allowed CLFS to boot up, but only as far as LVM. Because /usr and /var are mounted on LVM (as is /home, but that's less relevant), it wanted to mount those partitions and then fsck them, but that obviously doesn't work. I also tracked down a "Relocation error" in libdevmapper.so, which occurred when vgchange was run during boot to activate the volume group containing my logical volumes - so the system still won't boot.
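The step that fails is the boot script's equivalent of running (assuming your volume group is called, say, "vg0" - substitute your own name):

    vgscan
    vgchange -a y vg0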
I'm currently building a different version of device-mapper and the other userspace tools to fix my new problem, but the faffing mentioned above did sort out the original problem. Note as well that when booting from GRUB, a very helpful edit to the "kernel" line (hit 'e' before selecting an option) is to add "rw init=/bin/bash" and boot that. This will give you a root shell (no password needed) within your system, to fix errors that come up after the root device is found. lfs-initramfs will drop you to sh if the root device doesn't mount.
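In other words, the edited line ends up looking something like this (kernel path and root device are placeholders for your own):

    kernel /boot/vmlinuz-2.6.37 root=/dev/md0 rw init=/bin/bash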