Slackware's performance is poor compared to other "big" Linux distributions
I just installed Slackware 14 on an obsolete Asus 900 netbook, with KDE, and everything reacts just fine. Firefox seems especially quick-running.
I don't know anything about "bench tests", but I'm totally happy.
Besides, I know from past experience that ONLY Slackware will execute command lines from learn-Unix books without any weirdness. I love that.
The only difference is the RAM setup, and IMO that is a reporting issue.
There is a 20% difference in Dhrystone, as well as in pure FPU-related benchmarks. When using the same binary, or the same compiler with the same optimization levels, this indicates a performance difference in the underlying hardware.
It is fundamentally flawed to introduce additional parameters of variation by testing on different machines (indicated by what you call the "RAM setup"), even if they sport the same specifications on paper.
Speed is only one aspect of a distribution. There's security, stability, flexibility, installation, maintenance, power, hardware support, general goals, etc. All of that matters. There's even a kind of philosophy behind what is known as a distribution. Around the internet there are lots of people who don't know what a distribution is; you can verify that by typing "best distro" and looking at the nonsense that comes up.
Reviewers normally install the distro, try to play some MP3 files, some videos, YouTube. If that doesn't work well, then your distro is "not ready for the public" or something (Fedora). They look at the software installed; if something they like is missing, or something they don't approve of is installed, it gets a lower score. Let's all keep pretending that Linux users are clueless people who don't know what a codec is, what a driver is, or the difference between software and a web page.
I don't think Phoronix is a bad site; you just need to understand what they publish and how they do the benchmarks. They try to focus on graphics and speed (new drivers, new hardware, boot times); maybe that's why they put the Slackware beta in there. But Slackware is not that worried about speed. First, because no one who uses it complains that it's slow; second, because it makes no sense to build a full distro with speed as the main goal. If you (or they) want maximum speed you need to sacrifice somewhere, normally in stability and security. We just don't do that.
you just need to understand what they publish, and how they do the benchmarks.
And exactly because I understand how they (or rather, he) do the benchmarks, I have unsubscribed from Phoronix's RSS feed and double-check information posted there. They simply aren't able to deliver proper benchmarks.
No two Linux distributions use the same kernel configuration, the same software versions, or the same optimizations to their software. You can't even begin to say there are proper ways to benchmark two distributions against each other.
There is no accurate way to test one Linux distribution against another, because if you did manage to equalize the software, you'd be left with only one single Linux distribution. I mean seriously, how can you compare XOrg 7.7 against another XOrg 7.7 with the same cflags and optimization levels? You can't, plain and simple.
Go read the GCC wiki sometime about optimization flags. If I wanted, I could rebuild my LFS from scratch again, but this time retune everything to the highest and most dangerous levels of performance, sacrifice all stability, invoke -O3 or higher cflags, and literally blow every distribution out of the water. But in the end it wouldn't mean my software and system are practical, because none of it would be stable.
I don't fully agree.
It makes sense to see how different distros perform, overall, with regard to defined tasks.
Some operating systems claim to be optimised for use on a file or media server, so GUI performance is rather irrelevant, but data throughput and network performance matter. A benchmark against other systems, including multi-purpose systems, helps to verify whether the system keeps the promise given by the vendor (or distributor, if you prefer).
Another system might be optimised for scalability in multi-user scenarios or parallelised tasks. And yet another one might be optimised for desktop responsiveness.
What I am trying to say is that benchmarks won't tell you whether one system is generally better than another (leaving misconfigured systems aside), but only whether it is more or less suited for a given task or usage scenario.
Of course, other aspects such as security/vulnerability, stability etc. have to be taken into consideration as well when it comes to selecting a system. For a standalone desktop system behind an effective firewall within a well-protected network, or without a network connection, security may be less relevant. For a backup server or NAS, CPU or graphics performance may be less relevant than reliability. For a 3D CAD workstation, 3D vector graphics performance is all that matters; for climate simulation, RAM and CPU power count more than anything else; and for a gaming PC, framerates are key.
General benchmarks help to find out what a system is good for in its default configuration. And while it is true that you can always change the results by optimising the setup, they tell you whether you can use the system for a given purpose out of the box, without a lot of setup work.
Special-purpose benchmarks, however, can also be done if a system is going to be dedicated to exactly one use case. E.g., during an RfP you may request the competing vendors to optimise their systems for your intended usage scenario, and run benchmarks simulating later usage on the different systems. Based on the results (and price, of course) you may select the vendor and the system you like best. But this is completely different from what Phoronix does!
How stable is one distribution going to be if it uses -O3 optimizations across the board, compared to another using solid -O2 optimizations? Linux is about stability, not about performance.
Again, speed isn't the issue, and because no two distributions are alike, accuracy cannot be measured. How accurate is a measurement going to be between a system running a 3.2 kernel and one running a 3.5 kernel? How about XOrg 7.6 versus 7.7?
Why not just benchmark individual software packages to show the difference between kernel 3.2.29 and kernel 3.5.4? That would be a more accurate measurement than comparing whole systems with vastly different packages and builds across the board.
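A sketch of what such a controlled, single-package comparison could look like: fix the input and the binary, record which kernel is running, and time just that one tool on each boot. (gzip is just a stand-in here; the file names are illustrative.)

```shell
# Record the running kernel so each timing can be attributed to it.
uname -r
# Fixed, incompressible input so gzip actually has work to do.
dd if=/dev/urandom of=sample.bin bs=1M count=32 2>/dev/null
# Same input, same gzip build, every run; only the kernel varies between boots.
time gzip -9 -c sample.bin > sample.bin.gz
ls -l sample.bin.gz     # sanity check: output was produced
```

Repeating this after booting kernel 3.2.29 and again after booting 3.5.4, with everything else untouched, isolates the kernel as the only variable.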
You're basically trying to bring bias into a system where bias shouldn't be, attempting to influence how users rank distributions rather than letting users decide how well-founded a distribution is and how well managed and manageable the system is overall.
Because Linux allows the user various levels of customization, there is no accurate way to say which distribution is best for CAD, games, office work, servers, etc. It simply cannot be done without introducing bias. Each system can be effectively rebuilt for whatever purpose the user wants of it.
Benchmarking Linux distributions is what it is... utter and total bullshit.
Benchmarking different package versions of software... now that's a real comparison of abilities.
Comparing different versions of the same software package on the same system may make sense, but it can also make sense to find out which optimisation is useful for your task. The same software may perform differently depending not only on the package version but also on the system it runs on, and benchmarks help you select the platform.
As I said (and you repeat): of course, you can turn just about every Linux distro into anything, optimised for any purpose. But depending on that very purpose it takes a different amount of effort, often more than just installing a different OS or distro.
If you are not into exploring your OS for its own sake, but just want to get a job done, benchmarks can be useful, and in a professional environment (specialised and general) benchmarks are used to compare different offerings. During RfPs, test installations are also exposed to attacks by the information security experts of the company or governmental organisation, so benchmarks are not everything. But they include load tests and can reveal that some systems are more robust under heavy load than others.
While you as a hobbyist may enjoy recompiling everything with your preferred optimisations, the vast majority will find that too much hassle, and in a professional environment it is just not acceptable and could even void warranty or vendor support, depending on the contract.
For instance, a home user wants to watch media streams and have a responsive desktop. A distro that is optimised to run on servers may just feel sluggish in such a scenario, and the average Linux user today is not interested in recompiling a kernel, let alone the C libraries.
Your argument that anything is possible with Linux is correct, but irrelevant for most users. While *you* may have the skills, the knowledge, and the guts to rebuild your system as a whole, most users certainly want a system that satisfies their needs out of the box as far as possible, and with optimum performance.
So, if your point is that benchmarks are not suited to reveal the *potential* performance, then you are right.
But potential performance is irrelevant for users and cannot really be measured. So this argument is true, but pointless.
Yes, but as I said, even if one distribution can do one thing faster, that doesn't mean the overall system is going to be more stable, more reliable, or, for that matter, more or less user-friendly. Newer and faster versions of software often still have bugs, security risks, and even instabilities with other packages that may or may not make them better overall in the long run.
Distributions like Arch Linux and Gentoo, which use rolling-release updates straight from the package developers, often end up with tons of patches applied on top of upstream just to quell the torrent of bugs that show up in their packages.
Gentoo and Arch Linux have some of the highest uses of bug and stability patches outside of Red Hat, Debian, and Ubuntu. Why? Because they use packages that haven't been thoroughly tested against the other packages in the system. Some of these distributions carry as many as 20 separate patches for one single piece of software, because every dependency cranked up for that much more speed caused something else in the code to break at execution time.
Slackware by far has one of the lowest uses of patches outside of Linux From Scratch. Why? Even in Slackware's -Current all packages have to be extensively tested before even making it out of Patrick's private testing branch. Some packages make it out without patches, but if a package requires too many patches to even become stable, often it isn't worth it. People say Slackware's -Current tree is a rolling release... it is far from that.
Look how long it took to get Xfce updated. Why? Because of too many instabilities in 4.8 and too many dependencies that required even more levels of testing. In the end, after 4.10 was released, enough testing was done on the dependencies and enough stability was found, so it got rolled out.
While it is nicer to promote a faster system in the books and on paper to make it look appealing to new users, it can just as easily drive them away when they try it out and suddenly end up using a broken mess of half-assed software.

I've used some of these so-called faster systems like Arch Linux, and I've found they have great speed, but at a severe cost to their overall stability. And while Arch Linux does have a -testing branch for its software, that branch does NOT have the same level of quality assurance that a distribution like Slackware does, nor do all the packages made for Arch go through the same level of scrutiny. As I said, Arch is fast as lightning, but many times I've found it to be a huge unstable pile of shit. I've found Ubuntu to often be so under-documented that users can get lost trying to figure out the system, even though it looks easy. And I've found the hardest distributions to use, like LFS and Slackware, wind up being the most stable and reliable of all, because enough care went into the system to avoid the pitfalls of trying to be the poshest distribution on the block, catering to an audience that is only looking for something easy when nothing is fast and easy.
I often associate the terms fast and easy with a slut. And I don't need a slutty Linux distribution.