LFS 5.7 - Glibc configure doesn't have proper CC var as defined in 5.2
In section 5.2 there is a comment regarding section 5.7 (Glibc): (emphasis mine)
Quote:
Code:
# Build tools.

Checking ../configure --help, there is a statement that CC is an environment variable. But I don't recall a step where CC was set as an env var, and it is definitely not set in my shell. Do I need to do the following to move forward:

Not sure if I would also need to remove anything from $LFS/tools. Or can I ignore this and press forward? I saw another thread today where someone said they made it much farther simply by pasting in the commands provided in the book, but they also ran into some issues later on, so I want to make sure I get mine correct. Thanks. |
CC & CXX are env vars used on Unix-like machines to tell the build system which compilers to use at build time. They usually point to GCC/G++, though they can point to Clang or icc (Intel's compiler), etc. If you look at BLFS and other things like Arch Linux package builds, you'll occasionally see a package that does this:

Code:
CC=clang CXX=clang++ ./configure --prefix=/usr --disable-static --sysconfdir=/etc

Those are usually the cases where the package builds better with LLVM/Clang, or sometimes one compiler or the other is specified because the package doesn't build at all with the alternative. If the vars are empty, most packages will go ahead and look for GCC on a Linux system. You absolutely need GCC to build LFS, as the kernel currently doesn't build with anything else without major patching. |
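The fallback described above can be sketched with POSIX parameter expansion; this is only an illustration of the "use $CC if set, otherwise default" behavior, not glibc's actual configure logic:

```shell
# When CC is set in the environment, configure-style scripts honor it:
CC=clang
: "${CC:=gcc}"   # := assigns only if CC is unset or empty
echo "first:  $CC"

# When CC is unset, the fallback kicks in:
unset CC
: "${CC:=gcc}"
echo "second: $CC"
```

The first echo prints clang, the second prints gcc.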
Thanks Luridis. My specific question though is whether I can leave it as it is currently set in the configure script:

Code:
CC=gcc

or whether it needs to be:

Code:
CC=i686-lfs-linux-gnu-gcc

5.2 rather emphatically states that it is important to get this correct now, else problems will happen later, so I want to avoid that. Also, I'm assuming i686-lfs-linux-gnu-gcc is auto-generated somehow, but there is nothing in my instance of uname that says "i686", so I'm not sure where that came from. I'm wondering if the guidance in 5.2 is mistaken, or is the result of someone copy-pasting from their environment in an attempt to provide guidance, or perhaps it was required previously but is now outdated? |
What are you compiling on? 64 or 32 bits?
Also, if you've gotten as far as section 5.5, you will see either i686-lfs-linux-gnu-gcc or x86_64-lfs-linux-gnu-gcc, as this cross-compiler target name is created by this line in your lfs user's .bashrc:

Code:
LFS_TGT=$(uname -m)-lfs-linux-gnu

Make sure the file has actually been sourced:

Code:
source ~/.bashrc

You should then see something like:

Code:
/mnt/lfs x86_64-lfs-linux-gnu POSIX /tools/bin:/bin:/usr/bin
|
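The triplet construction in that .bashrc line can be tried standalone; the only moving part is uname -m, which supplies the machine field:

```shell
# Build the LFS target triplet exactly as the book's .bashrc line does;
# the machine field comes from uname -m (e.g. x86_64 or i686):
LFS_TGT=$(uname -m)-lfs-linux-gnu
echo "$LFS_TGT"
```

On a 64-bit host this prints x86_64-lfs-linux-gnu, which is where the i686 variant in the book's example comes from on 32-bit machines.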
This is a 64-bit system. Host is an Ubuntu 18.04 VM.
Quote:
lfs user .bashrc is in place and is properly sourced:

Code:
lfs@lfs-host-bionic:/mnt/lfs/sources/glibc-2.30$ echo $LFS $LFS_TGT $LC_ALL $PATH

Quote:
Also, how do you get your `echo` statement to appear inline? I tried a variety of BBCodes I found online, including inline, pre, tt, etc., but none worked. |
Note also that per the statement from 5.2, running the ../configure script for Glibc should modify the config.make file, but in my case it did not.

I just wiped my Glibc source, re-extracted the tar, and then followed 5.7 exactly. I re-ran the Glibc ../configure script exactly as given in 5.7 (copy-pasted it into the shell):

Code:
../configure \

Code:
lfs@lfs-host-bionic:/mnt/lfs/sources/glibc-2.30/build$ cat configure-output.txt

Code:
lfs@lfs-host-bionic:/mnt/lfs/sources/glibc-2.30/build$ grep CC config.make

So really what I'm asking is: do I need to manually modify the config.make file (and if so, why? what happened?), or can I press forward with its current CC=gcc value? (I'm assuming I need to manually modify the file but want to be sure before I proceed.) Thanks. |
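For readers following along without an LFS tree handy, the check under discussion can be simulated; the file below is a hypothetical stand-in for what glibc's configure writes into config.make, not the real file:

```shell
# Create a stand-in config.make (illustrative contents only;
# the real file lives in the glibc build directory):
cat > /tmp/config.make.demo <<'EOF'
# Build tools.
CC = x86_64-lfs-linux-gnu-gcc
CXX = x86_64-lfs-linux-gnu-g++
EOF

# Same idea as the thread's grep, anchored so only the CC line matches:
grep '^CC' /tmp/config.make.demo
```

The point of the check is the value on the right-hand side: a bare gcc there means the host compiler was recorded, while the cross-prefixed name means the LFS toolchain was.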
I'll have to do that section myself and look; meanwhile, post the results of printenv as the lfs user.
|
Thanks. Here's the printenv:
Code:
lfs@lfs-host-bionic:/mnt/lfs/sources/glibc-2.30/build$ printenv |
I think I know what's going on now, but I'm double-checking. I haven't done Chapter 5 in a while because I have a tarred-up /tools that saves a couple of hours. Anyway, I think you're confusing the output of config.guess with the contents of config.make. And right now, before I even look, CC=gcc is what I would assume is correct.
|
Hmm. I'm looking at this section further down in 5.2, not the yellow note about config.make higher up.

This is the 4th paragraph from the bottom of 5.2. Note I'm on version 9.0; here's the link direct to the section: http://www.linuxfromscratch.org/lfs/...technotes.html I'll bold the sentences I'm focusing on. Quote:
I could be misunderstanding something there, though. I don't compile code on these platforms, so this is new to me. |
On a Core i5 Asus laptop running Manjaro Mate, a section 5.2 config.guess run...

Code:
x86_64-pc-linux-gnu

Code:
x86_64-pc-linux-gnu

Code:
grep CC config.make

Code:
CC=x86_64-lfs-linux-gnu-gcc \

All of this is probably why many experienced LFS builders, including me, recommend not using Debian-based hosts to build LFS. I haven't successfully started a build on Debian in at least a decade; they always seem to bleed something over from the host's build environment. As for what I use: Arch, Manjaro & Slackware have all worked fine for me. If you're comfortable working exclusively in the terminal, Arch can do it with just a couple of meta packages installed: base & base-devel, along with linux, grub, sudo, linux-firmware, and maybe lynx, vim, & gpm if you're not running from a GUI. |
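The distinction in those outputs can be shown side by side. The two triplet values below are copied from this thread; the comparison itself is just an illustration of why the vendor field matters:

```shell
# Host triplet (what config.guess reports) vs. the LFS target triplet:
host_triplet="x86_64-pc-linux-gnu"   # vendor field "pc": the host's toolchain
lfs_target="x86_64-lfs-linux-gnu"    # vendor field "lfs": the book's $LFS_TGT
echo "host:   $host_triplet"
echo "target: $lfs_target"
if [ "$host_triplet" != "$lfs_target" ]; then
    echo "vendor fields differ, so the two toolchains stay separate"
fi
```

Seeing the plain host compiler (CC=gcc) recorded in config.make, rather than the lfs-vendor name, is exactly the symptom being diagnosed in this thread.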
Thank you very much for the time. I'm working solely from the command line now and I don't have any issue switching to Arch, so I'll give that a try.

My whole purpose for doing this project is to finally *actually* learn Linux at a deeper level, instead of googling commands when I use Linux once a year or even less. Instead of following a tutorial that just teaches commands, I want to learn how it is engineered and how it works internally first, and build up from there, to establish a foundation I can build on as I move deeper into security. Since the purpose is to learn, not to build a system for daily use, working in a VM is ideal, and I arbitrarily picked Ubuntu since it is popular.

All that is to say: I'm comfortable putting in the work, but when I run into issues like this I don't know what to do, so I have to reach out for help here. And I really appreciate the time you put in to get me onto a better track. I'm sure I'll ask more questions as I go, but I'll try to get back to at least this point on my own with Arch first. :) |
Here is what you'll need, in order, for a pretty complete BLFS development console system. Just about any serious use of the system will need these things installed. Note that there's a lot I'm not putting here, like my LFS customizations, custom patches, seds, and configure lines.

Legend for my notations, even though many need updating at this point:
package* - Missing optional dependencies, usually desired.
+package - Optional package rebuild, usually to pick up desired dependencies.
[package] - Pretty necessary rebuild, due to security functions. (Linux-PAM)

Code:
LFS EFI Additions: (skip Book GRUB), re2c (before Ninja),
|
Quick question, Luridis:

The host I'm using now is an Arch 2019.05 VM prebuilt from osboxes.org: https://www.osboxes.org/arch-linux/

When I installed base-devel, it installed Binutils 2.33.1. Section 2.2 states "Versions greater than 2.32 are not recommended as they have not been tested." Since you have experience building LFS on Arch, do I need to go through the process of downgrading Binutils to 2.32? (Which may involve downgrading dependencies; I'm still researching from the Arch downgrade instructions.) If not, I'll press on, but want to be sure before I do. Thanks.

Edit: Checking the version history, I see Binutils went from 2.32 to 2.33.1. Checking the release notes, there were only new features added; no breaking changes were reported.
https://ftp.gnu.org/gnu/binutils/
https://lists.gnu.org/archive/html/i.../msg00006.html
Given that, I'm going to press forward with 2.33.1, but I'll check back here periodically to see if you have any warnings against this, and if so I'll rewind to 2.2 and downgrade as needed. |
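One way to make that "is my host's version newer than the tested one?" check mechanical is GNU coreutils' version sort. The version strings below are the ones from this post; substitute your own host's values:

```shell
# Compare the host's Binutils version against the book's tested maximum
# using version-aware sorting (sort -V, GNU coreutils):
tested_max="2.32"
host_ver="2.33.1"
newest=$(printf '%s\n' "$tested_max" "$host_ver" | sort -V | tail -n 1)
if [ "$newest" = "$host_ver" ] && [ "$host_ver" != "$tested_max" ]; then
    echo "Binutils $host_ver is newer than the tested $tested_max"
fi
```

With these inputs the message prints, confirming the host package is ahead of what the book has tested.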