Linux From Scratch
This Forum is for the discussion of LFS.
LFS is a project that provides you with the steps necessary to build your own custom Linux system.
I'm currently waiting for the second pass of gcc to finish its make. As the remaining packages have lower SBUs, I'd like to run a parallel make, but I'm a bit confused about whether that's possible with a Pentium 4 HT: there are two logical CPUs, but essentially it's just a single core.
If more than one processor is mentioned you can use export MAKEFLAGS='-j 2' or make -j2.
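To see how many processors are mentioned, you can ask the kernel directly (a quick sketch using standard tools; on a Pentium 4 with Hyper-Threading both logical CPUs are counted, even though there is only one physical core):

```shell
# Number of logical CPUs available to this process:
nproc

# Count the processor entries the kernel reports:
grep -c '^processor' /proc/cpuinfo
```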
One thing I should mention about using parallel make: not all packages like this option. If a make (or make check) fails, try removing the -j2 part and rerun the command.
BTW: Do not use a number greater than the number of CPUs (example: if you have 2 CPUs/cores, do _not_ use -j3).
It depends: If you have a fast storage subsystem there is no problem with using higher numbers. I do all my compiles on a tmpfs in RAM and I can easily run my compiles with -j10 on my 6-core machine. On a machine with Hyperthreading, especially in its older versions (like the OP's Pentium 4), this is of course not really advisable.
I was giving advice with the Pentium 4 HT in mind, and the recollection of an earlier thread that ran into this issue (the build failing miserably).
We could start a general discussion about the actual added value of assigning more cores/CPUs than are actually present, but I'm sure this is not the place to do so.
If you only have one core I wouldn't use the -j2 flag.
The above is with your hardware in mind, although I don't think there is a point in doing so even if you had a fast machine (fast storage subsystem / tmpfs in RAM / etc.).
I haven't actually tested this, but I believe both threads are handled by one core (assuming a single-core CPU and -j2), and this will not increase the build speed. I wouldn't be surprised if it were slightly slower with -j2 than with -j1 (or no -j flag at all).
From my point of view you would need to test it yourself; that will probably help.
I just did a little searching and came up with 3 conflicting "rules":
1) the number of cores dictates the -j number (2 cores => -j2),
2) use 1.5 times the number of cores (2 cores => -j3),
3) use 2 times the number of cores (2 cores => -j4).
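The three rules above can be expressed with a bit of shell arithmetic (a sketch; nproc reports logical CPUs, and the 1.5x value is rounded up, since -j only accepts whole numbers):

```shell
CORES=$(nproc)
echo "rule 1: make -j${CORES}"                    # one job per core
echo "rule 2: make -j$(( (CORES * 3 + 1) / 2 ))"  # 1.5x cores, rounded up
echo "rule 3: make -j$(( CORES * 2 ))"            # 2x cores
```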
Well, considering I have just about four more packages to go to complete Chapter 5, and they have low SBU values, I'll just give it a try, for better or for worse.
After searching a bit, I'm still confused about how to use tmpfs in RAM. What would be suitable material to refer to?
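The basic idea is to mount a tmpfs (a RAM-backed filesystem) and do the unpack/configure/make cycle inside it. A minimal sketch, run as root; the /mnt/build mount point and the 4G size are just examples, size it to fit your RAM:

```shell
# Create a mount point and mount a RAM-backed tmpfs on it (needs root):
mkdir -p /mnt/build
mount -t tmpfs -o size=4G tmpfs /mnt/build

# Build inside /mnt/build as usual. Note that the contents disappear
# when the tmpfs is unmounted or the machine reboots.

# Equivalent /etc/fstab line to have it mounted at boot:
# tmpfs  /mnt/build  tmpfs  size=4G  0  0
```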
Quote:
I just did a little searching and came up with 3 conflicting "rules":
1) the number of cores dictates the -j number (2 cores => -j2),
2) use 1.5 times the number of cores (2 cores => -j3),
3) use 2 times the number of cores (2 cores => -j4).
I have to agree with pan64: You need to do some testing to figure out what is fastest/best for your scenario.
EDIT: Just quick-tested the above rules using binutils; here are the results (I have an 8-core CPU):
make -j1  -> real 2m0.806s
make -j8  -> real 0m31.561s
make -j12 -> real 0m31.790s
make -j16 -> real 0m32.232s
Sorry druuna, I didn't see your reply. This seems very interesting and promising. I'll do some checking and be back.
BTW, a typical newbie question: would it be a problem if I run binutils or gcc again to test this? Because I guess there's no point in running it for packages with lower SBU values.
Quote:
BTW, a typical newbie question: would it be a problem if I run binutils or gcc again to test this? Because I guess there's no point in running it for packages with lower SBU values.
If I were you I would not use your (partial) LFS build to test this: use your host.
You do need to remove both the build and source directory before starting the next test. Here's how I did the quick test:
Code:
$ tar xf binutils-2.23.1.tar.bz2
$ cd binutils-2.23.1
$ mkdir -v ../binutils-build
$ cd ../binutils-build
$ ../binutils-2.23.1/configure --prefix=/usr --enable-shared
$ time make -jX   # timings are shown once the make command finishes
# get ready for the next test:
$ cd ..
$ rm -rf binutils-2.23.1 binutils-build
# start from the top again
You cannot use fractions for X, so with one core in mind I would use 1, 2 and 3 for X.
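The whole unpack/configure/time/clean-up cycle above can be wrapped in a small loop so each value of X gets tested the same way. A sketch; it assumes the binutils-2.23.1 tarball from the example above sits in the current directory, and the -j values 1 2 3 are just the ones suggested for a single core:

```shell
#!/bin/sh
# Time the same binutils build at several -j values.
for J in 1 2 3; do
    tar xf binutils-2.23.1.tar.bz2
    mkdir -v binutils-build
    (
        cd binutils-build
        ../binutils-2.23.1/configure --prefix=/usr --enable-shared >/dev/null
        echo "=== make -j${J} ==="
        time make -j"${J}" >/dev/null
    )
    rm -rf binutils-2.23.1 binutils-build   # clean slate for the next run
done
```

The subshell around the cd keeps the loop in the starting directory, so the cleanup paths stay correct on every iteration.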
I've learnt that one should use the number of CPUs + 1. I have 4 CPUs (2 physical and 2 virtual). This is what I got when I ran the same test with binutils:
Code:
-j4
real 0m53.836s
user 2m34.011s
sys 0m12.939s
-j5
real 0m53.418s
user 2m38.521s
sys 0m13.489s
-j6
real 0m53.845s
user 2m36.961s
sys 0m13.354s