Can someone explain to me the logic of LFS's new build system?
Linux From Scratch: This forum is for the discussion of LFS.
LFS is a project that provides you with the steps necessary to build your own custom Linux system.
Took some getting used to, but I do like it. I assisted Pierre in testing it during the changes. We both used it in pre-published form to build ARMv8 AArch64 for my Rock64 SBC. The old book was troublesome for ARM use.
Try it out on your 64-bit ARM SBCs, peeps. Pretty sweet.
The backing up is a nice addition, but how many will forget to re-chroot afterwards?
Ding ding, I did it. Using Slackware as the host, I think I was several steps into chapter 8 (I don't remember how far I got before I hit a bunch of compile errors) and had to stop and fix the Slackware host before continuing.
It depends how you like to do your builds. The developers recommend doing it in one fell swoop but I always preferred to do it in installments. If you're used to going in and out of chroot, you don't forget to do it.
I create a script in /root called setup-chroot, which mounts all the necessary filesystems and then runs the actual chroot command. Then I run it at the beginning of each session.
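A minimal sketch of what such a setup-chroot script can look like. The mount list and the chroot invocation follow the commands printed in the LFS book's chapter 7; the $LFS default of /mnt/lfs and the root/existence guard are my own assumptions, so adjust to your layout:

```shell
#!/bin/bash
# setup-chroot (sketch): mount the LFS virtual kernel filesystems, then chroot.
# ASSUMPTION: $LFS points at your mounted LFS partition (default /mnt/lfs).

lfs_enter_chroot() {
    local LFS=${LFS:-/mnt/lfs}

    # Bind /dev and mount the kernel virtual filesystems, as in the book.
    mount -v --bind /dev        "$LFS/dev"
    mount -v --bind /dev/pts    "$LFS/dev/pts"
    mount -vt proc  proc        "$LFS/proc"
    mount -vt sysfs sysfs       "$LFS/sys"
    mount -vt tmpfs tmpfs       "$LFS/run"

    # Enter the chroot with a clean environment.
    chroot "$LFS" /usr/bin/env -i \
        HOME=/root TERM="$TERM" PS1='(lfs chroot) \u:\w\$ ' \
        PATH=/usr/bin:/usr/sbin \
        /bin/bash --login
}

# Only actually enter the chroot when running as root and the target exists.
if [ "$(id -u)" -eq 0 ] && [ -d "${LFS:-/mnt/lfs}" ]; then
    lfs_enter_chroot
else
    echo "setup-chroot: run as root with \$LFS mounted to enter the chroot"
fi
```

Running something like this at the start of each session means you never land in chroot with the virtual filesystems missing. Re-running mount on already-mounted targets will complain, so some people guard each mount with a `mountpoint -q` check first.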
Completed the final gcc tests this morning. From now on, it should be no different from the old way.
Curiously I had 16 unexpected failures in libstdc++ which I have not had on this machine before. They all cluster around two directories: 27_io/filesystem/iterators and 27_io/filesystem/operations, mostly the latter. It seems that this part of gcc-10 doesn't much like my system!
I assume you're talking about the errors that pop up while running the tests, not the ones in the summary. That's normal. I've had those in EVERY build, of EVERY version of the book, on every host distro I've built it on. The system builds just fine. What's important is the summary at the end, which should show only expected failures (which I suspect yours are, but leave it to the GCC devs to scare the pants off you when compiling the damn thing!)
No, these are real failures. They occur in the summary too. The point I was making is that in all my earlier LFS versions I got only the six expected failures. In this one I got all these unexpected ones too. But as they are clustered around two operations (and only one bit of the program), there are probably only two glitches that keep coming up. Anyway it caused no further problems.
Just started the new way of building. It's a little awkward, and not as convenient as having all the temp tools in a single tools folder, but I can see why the devs have done it this way: there seems to be a lot less faffing about adjusting the toolchain (gcc/binutils), which I would imagine will make it easier to maintain. As pointed out above, the in-and-out of chroot may cause some noobs a problem, though like most who have built a number of LFS systems I use a script for chrooting which does all the virtual filesystem stuff.
One minor error is this line in '7.14. Cleaning up and Saving the Temporary System':
Code:
find /usr/{lib,libexec} -name \*.la -delete
Doesn't work; it should be
Code:
find /usr/{lib,libexec}/ -name "*.la" -delete
Notice the trailing slash.
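For what it's worth, the two forms only differ when a starting directory is reached through a symlink: by default, find does not follow a symlink given as a starting point, but a trailing slash forces it to be resolved. Whether that is what bit you here is my guess, but the behaviour itself is easy to demonstrate:

```shell
# Set up a directory reached only through a symlink, as can happen with a
# merged-/usr style layout where lib is a symlink.
tmp=$(mktemp -d)
mkdir "$tmp/real"
touch "$tmp/real/foo.la"
ln -s real "$tmp/lib"

find "$tmp/lib"  -name '*.la'   # prints nothing: the symlink is not followed
find "$tmp/lib/" -name '*.la'   # prints the .la file: the slash resolves it

rm -rf "$tmp"
```

In bash, `/usr/{lib,libexec}` and the two paths written out are equivalent after brace expansion, and `\*.la` versus `"*.la"` is the same quoting, so the trailing slash is the only substantive difference between the book's command and the fix.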
Haven't started on the 'real' system yet; I will report back.
When I do a test build, I never clean up. It's still
Code:
find /usr/lib /usr/libexec -name \*.la -delete
in the newly released 10.1 which I will be trying out this weekend.
I actually use a chroot to start the build; if I find the time I will share how it works for those who had problems. These are usually not the ones posting here in this thread but the ones making new threads when they have errors spewing at them. The main advantage is that you can use this method to build a much older version of LFS (if you would want that).
Last edited by hendrickxm; 03-04-2021 at 03:13 AM.
I am doing lfs-10.1 using my old lfs-7.3 chroot with updated binutils, gcc and make. Also added libffi, expat and Python-3. I had an error when compiling pass 2 of binutils-2.36.1 in the temp tools, and binutils-2.36 gave me the same error (libctf). The book does mention something about it, but I actually had a different error there. I did find some similar problems with libctf when using an older glibc. The chroot I use is indeed old; I prefer to build with an old chroot because that way I can build older LFS versions as well. Anyway, I settled for lfs-10.1 with binutils-2.35.1 and attr-2.4.47 because I wanted to get rid of the getaddr error (which was there with lfs-10.0 as well), although it got replaced with another harmless error when using attr-2.4.47. Gcc-10.2.0 had 17 unexpected errors with g++, nothing that worries me but it is more than I had with lfs-10.0. I will finish up this weekend. So far nothing really exciting, just a bit of an issue with binutils 2.36 and up.
I finished LFS-8.3 with gcc-8.4.0 and glibc-2.27 and you can try it here: https://drive.google.com/file/d/1Fq1...ew?usp=sharing
Only 2 unexpected errors reported for gcc.
Last edited by hendrickxm; 03-27-2021 at 01:29 PM.
Currently doing LFS-10.1. Seems, now LFS using new method.
First, I tried building LFS using OLD method.
In this case using LFS-9.1 books, but the packages version are LFS-10.1.
I completed tools, entering chroot, and do it until section 6.10. Adjusting the Toolchain
All the linkers work perfectly. So I stop the progress here.
Then, I tried building LFS using NEW method.
So far, I am on section 7.4. Entering the chroot Environment .
Yes, entering chroot perfectly.
I will continue tomorrow.
I did old method to get rid of curiosity.
Can the old method still be used? Yes, it can.
But, I love the new method of building LFS.
Last edited by anak_bawang; 04-26-2021 at 03:49 AM.
I tested the new method on lfs-9.X and figured it should work the other way around too; thank you for trying it. I also prefer the new way: it makes it easier to add stuff like pkgtools and/or other packaging tools.
What about the new way is better? I started 10.0 and got pretty confused because of the changes, but I have an open mind. Can anyone enumerate the advantages of the new build process?
You can much more easily add anything you want from BLFS before you start the final system. Bootstrapping a package manager, for example, without using ./configure --prefix=/tools.
They documented how to make a backup of the tools. You no longer need to make adjustments after glibc. At first it was a bit weird, but it is worth it.
Also you avoid all that tiresome pre-editing before you can build gcc (both times).
I preferred the old way because it seemed more logical to me: all the intermediate tools went into $LFS/tools and all the final packages into $LFS/usr. But I dare say people who never did it that way will find it intuitive that everything is installed in its final site, whether it's an intermediate or a final package.