Linux From Scratch: This forum is for the discussion of LFS.
LFS is a project that provides you with the steps necessary to build your own custom Linux system.
I'm quite new to Linux, been finding this forum LOTS of help though. Not had to make a post before, usually been able to find an answer without needing to.
Anyway, I realise there are many posts on this topic already, but despite reading lots of them I haven't been able to find an answer, so I'm hoping if I post my own specifics someone might be able to help.
I just finished building the basic LFS system from book 7.1. I want to make the LFS system bootable from my existing GRUB, so within my host system (Ubuntu 11.04) I edited the /boot/grub/grub.cfg file to include the following code, which I nabbed from a post courtesy of Druuna and modified for my own system:
Code:
menuentry "LFS 7.1 - linux 3.2.6" {
insmod ext2
set root=(hd0,6)
linux /boot/vmlinuz-3.2.6-lfs-7.1 root=/dev/sda6 ro
}
The partition for my LFS system is sda6, which is also the boot partition. When I reboot the system I am able to enter the GRUB menu, and I can see an entry for the LFS 7.1 system. When I select that option to boot I get an error message:
Code:
error: file not found
press any key to continue...
If I run sudo update-grub from the host system I get the following:
sudo update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-2.6.38-8-generic
Found initrd image: /boot/initrd.img-2.6.38-8-generic
Found memtest86+ image: /boot/memtest86+.bin
done
Doesn't seem to find the LFS system automatically.
Not sure about booting from the GRUB prompt, i'm not familiar with the commands but I can give it a go if someone can advise or if I know what I need to do from there.
I'm not saying this will boot the machine, but it will let you know where the kernel is.
While at the boot menu, press c and you will enter the GRUB prompt (grub>).
Type set root=(hd0, then press the TAB key; that will list all the partitions. Hopefully your LFS partition will be listed as hd0,msdos6. If it is, complete the line so it looks like this:
grub> set root=(hd0,msdos6)   then hit enter.
Next type linux /boot/ then hit TAB; that will list the kernels on that partition, and it should show the kernel you want. If you start typing vml, TAB will auto-complete it.
You should end up with grub> linux /boot/vmlinuz-3.2.6-lfs-7.1, then hit enter.
Then at the grub> prompt, type boot.
It should boot if all is good. This will also confirm that the path to the kernel is correct.
It all looks OK; I don't see why it won't find the kernel.
At the GRUB prompt, set root=(hd0,6) and hit enter, then try linux / and use TAB. I'm not expecting it to find it.
Then maybe restart, run through it again, but this time try grub> linux /boot/vmlinuz-3.2.6-lfs-7.1
I have gone through the steps listed above. I set the root to hd0,6 and then typed 'linux /' at the prompt and hit TAB. This autocompletes the command to read 'linux /lost+found/', so I guess this is the only possible entry (it didn't give me any other options). This in itself seems interesting, as at this point it should find my file system on hd0,6, which should contain more than just /lost+found.
Anyway, I continue with the instructions, reboot and try the second set of commands listed but again get 'error: file not found'.
One thing that has occurred to me (please forgive my noobness) is that I have set the fstab file, and all the other config files that concern the hard disk, to refer to the disk as 'sda'. This is because, using a disk utility in the host system (Ubuntu), that is how it is listed. However, my hard disk is connected via IDE (not SATA), so as far as I understand it the disk should be referred to as 'hda'. Maybe Ubuntu does not stick to this convention, which is why it displays as 'sda' in the disk utility inside my host environment, but maybe for the purposes of LFS I should use the standard convention.
stoat
Something similar has been caused before by "CONFIG_DEVTMPFS is not set" in the kernel configuration. Anyway, it can't hurt to check. If you still have your .config file in your kernel source directory, or if you can mount your LFS partition and examine the config file in /boot, then search or grep for CONFIG_DEVTMPFS=y. If yours "is not set" instead, then correct it and recompile the kernel...
Originally Posted by Page 226 of LFS v7.2
Device Drivers --->
Generic Driver Options --->
Maintain a devtmpfs filesystem to mount at /dev
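If the config file is to hand, a check along these lines (the check_devtmpfs helper and the example path are mine, just for illustration) will tell you quickly:

```shell
# Report whether devtmpfs was enabled in a kernel config file.
# Works on either the .config in the kernel source tree or a
# /boot/config-* file, whichever you have to hand.
check_devtmpfs() {
    config=$1
    if grep -q '^CONFIG_DEVTMPFS=y' "$config"; then
        echo "devtmpfs: enabled"
    else
        echo "devtmpfs: NOT enabled (recompile needed)"
    fi
}

# Example (the path is an assumption; adjust to where your LFS /boot lives):
# check_devtmpfs /mnt/lfs/boot/config-3.2.6-lfs-7.1
```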
A couple of things seem odd, though they may not be relevant: you are using insmod ext2 in grub.cfg, but blkid says sda6 is ext3. Stuff in fstab is irrelevant for now, since you haven't got that far yet because you haven't booted. Are you using an initrd? If so, the path to it needs to be in the grub.cfg entry. Have you set your kernel to recognise ext2/3 file systems?
Try using grub1 rather than grub2, as it is easier to configure; also you may want to try LILO (I haven't used it), as it seems easier to set up than GRUB.
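On the initrd point: since the LFS book builds without one, the ext2/ext3 drivers must be compiled into the kernel (=y), because a modular driver (=m) has nothing to load it before the root fs is mounted. A sketch of a config check (the helper name is mine):

```shell
# Verify ext2/ext3 support is compiled *into* the kernel (=y).
# With no initrd, =m is not good enough: there is nothing to load
# the module from before the root filesystem is mounted.
check_ext_builtin() {
    config=$1
    for opt in CONFIG_EXT2_FS CONFIG_EXT3_FS; do
        if grep -q "^${opt}=y" "$config"; then
            echo "$opt: built in"
        else
            echo "$opt: missing or modular"
        fi
    done
}

# Example (path is an assumption):
# check_ext_builtin /mnt/lfs/boot/config-3.2.6-lfs-7.1
```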
Last edited by Keith Hedger; 09-20-2012 at 02:16 PM.
Spiky:
Here is the output of 'ls /mnt' after mounting /dev/sda6 to /mnt.
Code:
lost+found
Shouldn't I expect to see the contents of the root of my LFS file system there?
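For what it's worth, seeing only lost+found can mean either that the partition really is (nearly) empty or that the mount never actually happened and you are looking at the bare mount-point directory. Rather than trusting the mount command's silence, you can check /proc/mounts; a small sketch (the helper name is mine):

```shell
# Return success if a device appears in a mounts table.
# Defaults to /proc/mounts; a different file can be passed in for testing.
is_mounted() {
    dev=$1
    table=${2:-/proc/mounts}
    grep -q "^$dev " "$table"
}

# Example:
# is_mounted /dev/sda6 && echo "sda6 is mounted" || echo "sda6 is NOT mounted"
```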
Regarding the kernel configuration, I definitely set those options during the set up, and I double checked when I recompiled the kernel when following Keith's advice below.
Keith:
Following the LFS guide I created an ext3 file system by issuing the following command using e2fsprogs.
Code:
mke2fs -jv /dev/sda6
During chapter 8 the LFS guide instructs to create a grub.cfg file which includes the line 'insmod ext2'.
As I wanted to use the existing GRUB from my host system I added the menu option shown in the post above to the host's grub.cfg file, which was based on a suggestion made in another post.
However, since then I have also tried following the LFS guide exactly, which involves overwriting the host GRUB from within the LFS environment and then writing a grub.cfg file as follows (which also uses the 'insmod ext2' line).
Code:
# Begin /boot/grub/grub.cfg
set default=0
set timeout=5
insmod ext2
set root=(hd0,6)
menuentry "GNU/Linux, Linux 3.2.6-lfs-7.1" {
linux /boot/vmlinuz-3.2.6-lfs-7.1 root=/dev/sda6 ro
}
When I try this method I get a different error, but the system still does not boot.
As far as I am aware I am not using an initrd for the LFS system as nowhere in the guide does it instruct to do so. From what I understand, my host system (Ubuntu 11.04) does not use an initrd by default and I have not configured it to do so.
I wasn't sure if I had configured the kernel to recognise ext2/3, so I recompiled and installed all the options that related to ext2, of which there were several, but I still receive the same errors.
One last thing I noticed whilst running these diagnostics (which I didn't spot before) was an error message I received when trying to unmount LFS after going back into the host environment in order to reboot into the LFS system.
I run through the instructions to unmount the various things and get the following.
Code:
umount -v $LFS/dev/pts
devpts umounted
umount -v $LFS/dev/shm
shm umounted
umount -v $LFS/dev
/dev umounted
umount -v $LFS/proc
proc umounted
umount -v $LFS/sys
sysfs umounted
umount -v $LFS
Could not find /mnt/lfs in mtab
umount: /mnt/lfs: not mounted
Dunno if that is at all relevant but I don't know why I should receive such an error, the $LFS variable is set correctly at this point, but it complains that it cannot find /mnt/lfs in mtab.
From what I can see you are using sda6 for your LFS, and in the GRUB menu config you are setting root=(hd0,6). GRUB partitions are zero-based, so you should use root=(hd0,5). Complaints from your host system can be ignored for now, as they are not part of the boot process; of course, if a partition is not cleanly unmounted when you boot, you will have to sit through a disk check.
I am not sure that is the case in this particular instance. Section 8.4.2 from the LFS manual describes this differently.
This seems to be borne out by the effect of changing hd0,6 to lower values. When I do this I get the following messages:
hd0,3 - No such partition
hd0,4 - No such partition
hd0,5 - Unknown file system
hd0,6 - file not found
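Those results are consistent with how GRUB2 counts: unlike GRUB legacy, it numbers disks from 0 but partitions from 1, so /dev/sda6 is (hd0,msdos6) on an MBR disk. A sketch of that mapping (the helper name is mine, and msdos/MBR labels are assumed):

```shell
# Map a Linux /dev/sdXN name to its GRUB2 name on an MBR (msdos) disk.
# GRUB2 counts disks from 0 (sda -> hd0) but partitions from 1,
# so the partition number carries over unchanged.
dev_to_grub() {
    disk=$(expr "$1" : '/dev/sd\([a-z]\)')
    part=$(expr "$1" : '/dev/sd[a-z]\([0-9][0-9]*\)')
    disknum=$(( $(printf '%d' "'$disk") - 97 ))   # 'a' is ASCII 97
    echo "(hd${disknum},msdos${part})"
}

dev_to_grub /dev/sda6   # -> (hd0,msdos6)
```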
Spiky,
I think this is the crux of the problem. You are correct in terms of the use of the 3 partitions, sda1 is host, sda5 is swap and sda6 should be LFS.
I get the following output from your suggestion:
Code:
df -h /dev/sda6
Filesystem Size Used Avail Use% Mounted on
none 1.6G 632K 1.6G 1% /dev
This is after a full reboot and before manually mounting anything - doesn't seem right.
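That df output is actually expected when sda6 is not mounted: pointing df at a device node reports the filesystem that holds the node itself, which is /dev (hence the "none" devtmpfs line), not the filesystem stored on the device. A quick demonstration using any node:

```shell
# df on a device node shows the filesystem containing the node (i.e. /dev),
# not the filesystem stored on the device itself.
df -h /dev/null

# To inspect the LFS filesystem, mount it first and point df at the
# mount point instead (paths are assumptions):
# mount /dev/sda6 /mnt/lfs && df -h /mnt/lfs
```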
Because I built the LFS system over several sessions, and to make mounting everything and entering the chroot environment easier, I wrote a simple script.
Code:
#!/bin/bash
#simple script to prep for and enter into the chroot environment
#for building the LFS project.
#Set $LFS
export LFS=/mnt/lfs
#Mount and populate /dev
mount -v --bind /dev $LFS/dev
#Mount virtual kernel file system
mount -vt devpts devpts $LFS/dev/pts
mount -vt tmpfs shm $LFS/dev/shm
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
#Switch to chroot
chroot "$LFS" /usr/bin/env -i \
HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
PATH=/bin:/usr/bin:/sbin:/usr/sbin \
/bin/bash --login
The first thing the script does is to set the $LFS variable. It then mounts all the necessary stuff and switches me into the chroot environment. However, if I try echo $LFS nothing is returned (this is from within the LFS environment). If I logout and go back to the host environment and again try echo $LFS nothing is returned, making me think that maybe the $LFS variable is not getting set correctly and then the rest of the script behavior is not as desired.
It does look as though I have entered the LFS environment when I run the script because my prompt goes from 'root@ubuntu:/home/mel#' to 'root:/#'. That maybe means nothing though. This is probably why n00bs shouldn't try LFS!!
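Two things about that script would explain the empty echoes, and both can be demonstrated without touching LFS at all (the paths below are throwaway examples). First, chroot is invoked with env -i, which deliberately starts with an empty environment, so $LFS is never supposed to survive into the chroot. Second, running a script executes it in a child shell, so its export cannot reach your interactive shell; you would have to source it for that:

```shell
# 1) env -i wipes the environment, so exported variables don't survive it:
export LFS=/mnt/lfs
env -i sh -c 'echo "inside env -i, LFS=[$LFS]"'   # prints: inside env -i, LFS=[]

# 2) a script run normally executes in a child shell; its exports are lost:
printf 'export DEMO=/mnt/lfs\n' > /tmp/setdemo.sh
sh /tmp/setdemo.sh
echo "after running:  DEMO=[$DEMO]"      # empty

# sourcing runs it in the current shell, so the variable sticks:
. /tmp/setdemo.sh
echo "after sourcing: DEMO=[$DEMO]"      # /mnt/lfs
rm -f /tmp/setdemo.sh
```

So the empty echo inside the chroot is normal, and the empty echo back in the host shell only means the script's export lived and died with the script, not that the mounts or the chroot failed.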