Gentoo, Optimization over Time? Can someone tell me the benefits of all the compile time?
Part of my plan is to get Gentoo up and running efficiently after I use Slackware for a time, strictly for experience, and then proceed to BSD for other UNIX fun. Still, what I'm a bit confused about is the compiling of packages, which can add up to an hour, or even a day (for desktop environments). I am in no way criticizing Gentoo, nor Sabayon Linux (which is the distro I was planning on installing to replace Ubuntu on my desktop), but the update time is extremely long compared to other distros. When "optimizing" the packages, what does it optimize? Would compiling a desktop environment from source make it run smoother, faster, and more stable? If so, I kind of like that, but it seems as though if you're in a hurry, updating your computer is the worst thing to do, while on Ubuntu, even if I'm about to leave, it only takes around five minutes.
Once again, I am not putting down Gentoo, I just want to know what the true benefits are and whether they're significant enough to stand out. Also, I'm not necessarily installing Gentoo itself; I'm asking about Gentoo because Sabayon is supposed to be based on it. Also, there's no Sabayon forum, so... |
Hi, even as an avid Gentoo user, I don't think the speed optimisations are of much consequence anymore (5 years ago it might've been a different story). It optimises the generated assembly code: e.g. by passing "-march=native" in your CFLAGS, gcc optimises the resulting binary for your processor (e.g. using SSE4 if available).
The main advantage I see is customisation: using the power of USE flags, I get exactly the system I want... The rolling-release philosophy is a must for me now: no tedious reinstalling of the OS, just an upgrade with (sometimes) a guide to follow. With Gentoo, upgrading is not something to be done lightly (in a hurry). First, the compilation itself takes time. If it's a bigger upgrade, then some troubleshooting might be needed. From what I know, Sabayon also uses some sort of binary package format, so I don't have a good idea of how it works (the upgrade time should be shorter, though)... |
This is more of a generic answer - compilers can optimise the resulting executable based on the capabilities of your specific CPU, and this will provide a speed increase, not an increase in stability. I think a lot of people go through the "self-compile" stage because it seems like a great idea to get the best performance possible on your machine, but in the end most people are lazy and impatient and don't want to spend the time to do this regularly, so they opt for a precompiled distro.
If you need to get the utmost performance from your machine then it's worth the time optimising the compilation. |
Do you GNU?
If you have the time to cook it like a ricer!
CFLAGS="-march=native -Os -pipe -ggdb". Only hardware-related global USE flags; let portage and the ebuilds' pulled-in dependencies decide the rest. Two months emerging on a Phenom II x6... base @system (over-cook GCC & glibc & Boost), other wicked sci-* libraries, & the kernel. Non-stop... Then put on X11. Fastest distro I ever used. Until I tweaked the kernel too hard with GDB... sigh... stupid printk ~Jux |
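For anyone wanting to reproduce something like the setup above, it lives in /etc/portage/make.conf. A minimal sketch (the USE line here is illustrative, not Jux's actual flags):

```shell
# /etc/portage/make.conf -- sketch of the flags described above
CFLAGS="-march=native -Os -pipe -ggdb"  # tune for this CPU, optimize for size, keep debug info
CXXFLAGS="${CFLAGS}"
# Keep global USE hardware-oriented and minimal; let ebuild defaults pull in the rest.
USE="mmx sse sse2"
```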
What I'm not sure about, though, is multimedia applications (like VLC) where advanced multimedia instructions potentially make a huge difference. I know in Gentoo such instruction usage is compiled in according to your USE flags settings, but I'm not sure how that is handled in the binary-based distros. That would be something to look into. To answer your other question: There are a lot of other reasons to use Gentoo besides optimization. You can read my short article: https://frigidcode.com/articles/what...oo-linux.shtml |
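In Gentoo, those per-package instruction toggles would go in /etc/portage/package.use. A sketch (the exact flag names vary by package and portage era, so treat these as illustrative):

```shell
# /etc/portage/package.use -- enable SIMD-related USE flags for VLC (illustrative)
media-video/vlc mmx sse
```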
oh GNU? where are you?
glibc is how old? Almost 40? Am I wrong? How can you not see a benefit to optimizations and compiling over time? I always compile binutils until it's smooth. I (the builder) want all the MATH involved in these libraries... those translate to functions, and only the ones usable as determined by a proper (-march=native) GCC for binary (-Os) assembly. x86_64 is not true amd64. Intel chips are not AMD chips. Same math, but a different binary implementation per chip family. Since you're looking for streamlining in your source OS, recompiling with optimizations (some decrease in clocks per instruction, varying by which function is optimized) still frees up a whole lot of register bus regardless. Now perhaps the same ratio is obtainable per dwarf or thread. Dare I say my recipe includes other oldies like GHC & ocaML(ton) over 25+ compiles (kitchen sink)?!?! And I just have that sneaking suspicion that Con put some mind-storm code in there somewhere... GCC (the compiler collection) emerges into an amazing compiler after glibc is fleshed out.
GDB is icing on the cake. I had more bugs squished by including it in my global CFLAGS, which also needs compiling in. I wish Microsoft would build separately, with optimizations for each manufacturer... then benchmark. My 2 cents ~Jux http://www.gentoo.org/proj/en/qa/backtraces.xml |
If you are running Gentoo because of the speed, then you are probably in the wrong place.
While it's possible to get some optimization out of this stuff, it's often negligible in real-world situations, and per-package, well-thought-out optimizations and chunks of assembly code often do far more than using the craziest gcc flags. Gentoo is just so flexible that you've gotta love it for that, plus it's naturally gifted to work as a development environment; but only a faster processor will run programs faster. If that's what you really want, save the bucks you're gonna spend on electricity and buy a faster machine someday next year. |
bringing in the inseams
I dunno, I thought there would be others with my mindset (and personal results) out there. I have minimized my CFLAGS and, more importantly, my USE flags. Optimizations come from -march=native & -Os... stay in register space as much as possible and avoid cache-land as much as possible, like disk I/O. My argument is more about promoting compiling over time, and that it's finally worth the electricity consumed... coming from LFS to a fleshed-out distro. There are results that are obvious, and cooking is necessary to get that desired flexibility molded into shape. When I first explored Gentoo, Debian (Sarge) won out because it was just a pain to spend so much time compiling. Now, I believe, with the increase in PARALLEL processing (more so than increased clock speed), MAKEOPTS=-j9 is feasible and makes true tailoring possible for the hobbyist. You need to cook your installation. Sabayon is named after a recipe, implying Fabio cooks his source, whatever packages he includes in his egg-flan pastry craziness.
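On the MAKEOPTS point: a common rule of thumb (an assumption, tune to taste) is jobs = cores + 1, which you can derive rather than hard-code:

```shell
# Derive a parallel-make job count from the actual core count (cores + 1 rule of thumb).
cores=$(nproc)
echo "MAKEOPTS=\"-j$((cores + 1))\""
```

On a six-core Phenom II that prints MAKEOPTS="-j7"; -j9 is just a more aggressive personal taste.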
You can keep your static LLVM; I like the evolving compiler collection GCC, especially pulling in real debug info and build.log(s). And yes, I've built a Gentoo Dragon from a GCC/LLVM dragonegg. Oh well, ~Jux "Hi Ho! It's off to work we go" - Dwarfs |
compiler
From Ubuntu, Xubuntu, Debian, Fedora, Arch, and finally Gentoo, I can tell you: if you've ever had to make a custom software package, a source-based distro is preferred. As far as compiling time goes, it is not worth it on its own. But making that software package the way you need it might be worth the 150% extra time.
|