Old 09-30-2012, 02:38 AM   #46
gfheisler
LQ Newbie
 
Registered: Jul 2009
Posts: 4

Rep: Reputation: 0

I just installed Slackware 14 on an obsolete Asus 900 netbook, with KDE, and everything responds just fine. Firefox feels especially quick.
I don't know anything about "bench tests", but I'm totally happy.
Besides, I know from past experience that ONLY Slackware will run the command-line examples from learn-Unix books without any weirdness. I love that.
 
Old 09-30-2012, 03:00 AM   #47
cascade9
Senior Member
 
Registered: Mar 2011
Location: Brisneyland
Distribution: Debian, aptosid
Posts: 3,753

Rep: Reputation: 935
Quote:
Originally Posted by H_TeXMeX_H
Anyway, it's pretty clear that Phoronix is full of s***.
The Phoronix benchmark was flawed because:
1. It was NOT done on the same hardware. Check the first page.
Not this again.....

http://www.linuxquestions.org/questi...ml#post4743738

and

http://www.linuxquestions.org/questi...ml#post4744045
 
Old 09-30-2012, 03:17 AM   #48
H_TeXMeX_H
LQ Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1301
All speculation, as usual. No, I'll take their word for it: they used different hardware. They've done it before, so it can't just be bad reporting.
 
1 member found this post helpful.
Old 09-30-2012, 03:31 AM   #49
cascade9
Senior Member
 
Registered: Mar 2011
Location: Brisneyland
Distribution: Debian, aptosid
Posts: 3,753

Rep: Reputation: 935
Quote:
Originally Posted by H_TeXMeX_H
All speculation like usual. No, I'll take their word for it, they used different hardware.
The only difference is the RAM setup, and IMO that is a reporting issue.
 
Old 09-30-2012, 04:29 AM   #50
Martinus2u
Member
 
Registered: Apr 2010
Distribution: Slackware
Posts: 497

Rep: Reputation: 119
Quote:
Originally Posted by cascade9
The only difference is the RAM setup, and IMO that is a reporting issue.
There is a 20% difference in Dhrystone, as well as in the pure FPU-related benchmarks. With the same binary, or the same compiler at the same optimization level, that indicates a performance difference in the underlying hardware.

It is fundamentally flawed to introduce additional sources of variation by testing on different machines (which is what your "RAM setup" difference indicates), even if they sport the same specifications on paper.
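
To make that concrete, here is a rough sketch (a toy integer loop of my own, not the actual Dhrystone source or the Phoronix test suite): build it once, copy the identical binary to both machines, and whatever spread remains in the timings comes from the hardware, not from the compiler or the distribution.

Code:
/* loop_timer.c -- hypothetical toy benchmark, NOT the real Dhrystone.
 * Build ONCE and copy the same binary to both machines:
 *   gcc -O2 -o loop_timer loop_timer.c   (add -lrt on older glibc)
 * Any remaining difference in the printed time is down to the hardware. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    volatile unsigned long acc = 0;  /* volatile keeps the loop from being optimized away */
    unsigned long i;
    double secs;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < 200000000UL; i++)
        acc += i ^ (acc << 1);       /* cheap integer work, Dhrystone-like in spirit only */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("integer loop: %.3f s\n", secs);
    return 0;
}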
 
3 members found this post helpful.
Old 09-30-2012, 08:41 AM   #51
H_TeXMeX_H
LQ Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1301
I ran the benchmarks again now that I've installed a custom 3.4.11 kernel, and the AES results show that performance has improved significantly. The other numbers are improved too, but not significantly.
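
For anyone who wants a quick number of their own, something along these lines (a rough sketch against OpenSSL's EVP interface, not the exact benchmark I ran) prints an approximate AES-256-CBC throughput:

Code:
/* aes_sketch.c -- rough AES throughput check via OpenSSL's EVP interface.
 *   gcc -O2 -o aes_sketch aes_sketch.c -lcrypto
 * Ballpark figure only, not the benchmark discussed above. */
#include <openssl/evp.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    static unsigned char in[1 << 20];          /* 1 MiB of zeroes, encrypted repeatedly */
    static unsigned char out[(1 << 20) + 16];  /* room for one extra cipher block */
    unsigned char key[32] = {0}, iv[16] = {0};
    const int rounds = 512;                    /* 512 MiB in total */
    int outlen, i;
    struct timespec t0, t1;
    double secs;
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < rounds; i++)
        EVP_EncryptUpdate(ctx, out, &outlen, in, sizeof in);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("AES-256-CBC: %.1f MiB/s\n", rounds / secs);

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}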
 
Old 10-01-2012, 12:58 PM   #52
sovietpower
Member
 
Registered: Jun 2003
Distribution: Slackware64 14.1 and -current
Posts: 209

Rep: Reputation: 30

I'd like to see what Pat or Eric or Robby have to say about this. Can one of you fine gentlemen step in and explain what is really going on here? I, for one, am very curious.
 
Old 10-01-2012, 01:08 PM   #53
Celyr
Member
 
Registered: Mar 2012
Location: Italy
Distribution: Slackware+Debian
Posts: 321

Rep: Reputation: 81
I don't think there is actually anything to say: a decent benchmark showed that there is no real difference (see the earlier posts). So what is there to say?
 
Old 10-01-2012, 04:15 PM   #54
Hyonane
Member
 
Registered: Sep 2011
Location: Brazil
Distribution: Slackware64-current+multilib
Posts: 32

Rep: Reputation: 2
Speed is only one aspect of a distribution. There's also security, stability, flexibility, installation, maintainability, hardware support, general goals, etc. All of that matters. There's even a kind of philosophy behind what is known as a distribution. Around the internet there are lots of people who don't know what a distribution is; you can verify that by typing "best distro" into a search engine and looking at the nonsense that comes up.

Reviewers normally install the distro, try to play some MP3 files, some videos, YouTube. If that doesn't work well, then your distro "isn't ready for the public" or something (Fedora). They look at the software installed, and if something they like is missing, or something they don't approve of is installed, it gets a lower score. Let's all keep pretending that Linux users are completely clueless and don't know what a codec is, what a driver is, or the difference between software and a web page.

I don't think Phoronix is a bad site; you just need to understand what they publish and how they do the benchmarks. They tend to focus on graphics and speed (new drivers, new hardware, boot times), and maybe that's why they put the Slackware beta in there. But Slackware is not that worried about speed: first, because no one who uses it complains that it's slow; second, because it makes no sense to build a whole distro with speed as the main goal. If you (or they) want super speed, you have to sacrifice something, normally stability or security. We just don't do that.
 
1 member found this post helpful.
Old 10-01-2012, 04:45 PM   #55
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by Hyonane
you just need to understand what they publish and how they do the benchmarks.
And exactly because I understand how they (or rather, he) do the benchmarks, I have unsubscribed from Phoronix's RSS feed and double-check any information posted there. They simply aren't able to deliver proper benchmarks.
 
1 member found this post helpful.
Old 10-01-2012, 08:46 PM   #56
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
The problem with Linux benchmarks is simply this:

No two Linux distributions use the same kernel configuration, the same software versions, or the same optimizations for their software. You can't even begin to claim there is a proper way to benchmark two distributions against each other.

There is no accurate way to test one Linux distribution against another, because if you did manage to equalize the software, you'd be left with only one single Linux distribution. I mean seriously, how can you compare X.Org 7.7 against another X.Org 7.7 built with the same CFLAGS and optimization levels? You can't, plain and simple.

Go read the GCC wiki on optimization flags sometime. If I wanted to, I could rebuild my LFS from scratch, but this time retune everything to the highest and most dangerous performance levels, sacrifice all stability, invoke -O3 or even more aggressive CFLAGS, and literally blow every distribution out of the water. But in the end that wouldn't make my system practical, because none of the software would be stable.
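
If you want to see how much the flags alone move the numbers, try something like this (a throwaway FPU loop of my own, nothing to do with any distribution's build system); it's the same source built twice, so the whole difference in runtime comes from the CFLAGS:

Code:
/* flags_sketch.c -- hypothetical toy loop for comparing optimization levels.
 *   gcc -O2 -o bench_O2 flags_sketch.c -lm
 *   gcc -O3 -ffast-math -o bench_O3 flags_sketch.c -lm
 *   time ./bench_O2 ; time ./bench_O3
 * -ffast-math is exactly the "dangerous" kind of speed-up: it drops strict
 * IEEE floating-point semantics in exchange for throughput. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    long i;

    for (i = 1; i <= 50000000L; i++)
        sum += sin((double)i) / sqrt((double)i);  /* FPU-heavy work */

    printf("sum = %f\n", sum);  /* print the result so the loop is not optimized out */
    return 0;
}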
 
1 member found this post helpful.
Old 10-01-2012, 10:51 PM   #57
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Rep: Reputation: 242
I don't fully agree.
It makes sense to see how different distros perform overall with regard to defined tasks.

Some operating systems claim to be optimised for use as a file or media server, so GUI performance is rather irrelevant, but data throughput and network performance matter. A benchmark against other systems, including multi-purpose systems, helps to verify whether the system keeps the promise given by the vendor (or distributor, if you prefer).

Another system might be optimised for scalability in multi-user scenarios or for parallelised tasks. And yet another might be optimised for desktop responsiveness.

What I am trying to say is that benchmarks won't tell you whether one system is generally better than another (misconfigured systems aside), only whether it is more or less suited to a given task or usage scenario.

Of course, other aspects such as security/vulnerability, stability, etc. have to be taken into consideration as well when selecting a system. For a standalone desktop behind an effective firewall in a well-protected network, or without a network connection at all, security may be less relevant. For a backup server or NAS, CPU or graphics performance may matter less than reliability. For a 3D CAD workstation, 3D vector graphics performance is what matters; for climate simulation, RAM and CPU power count more than anything else; and for a gaming PC, framerates are key.

General benchmarks help to find out what a system is good for in its default configuration. And while it is true that you can always change the results by optimising the setup, the default numbers tell you whether you can use it for a given purpose out of the box, without a lot of setup work.

Special-purpose benchmarks can also be done if a system is going to be dedicated to exactly one use case. For example, during an RfP you may ask the competing vendors to optimise their systems for your intended usage scenario and run benchmarks simulating later usage on the different systems. Based on the results (and price, of course) you then select the vendor and the system you like best. But this is completely different from what Phoronix does!

gargamel
 
Old 10-01-2012, 11:06 PM   #58
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
How stable is one distribution going to be if it uses -O3 optimizations across the board compared to another using solid -O2 optimizations? Linux is about stability, not about performance.

Again, speed isn't the issue, and because no two distributions are alike, nothing can be measured accurately. How accurate is a comparison going to be between one system running a 3.2 kernel and another running a 3.5 kernel? How about X.Org 7.6 versus 7.7?

Why not just benchmark individual software packages to show the difference between kernel 3.2.29 and kernel 3.5.4? That would be a more accurate measurement than comparing whole systems with vastly different packages and builds.

You're basically bringing bias into something that shouldn't be biased, trying to influence users' opinions of distributions rather than letting users decide how well founded a distribution is and how well managed and manageable the system is overall.

Because Linux allows the user various levels of customization, there is no accurate way to say which distribution is best for CAD, games, office work, servers, etc. It simply cannot be done without introducing bias; each system can effectively be rebuilt for whatever purpose the user wants.

Benchmarking Linux distributions is what it is... utter and total bullshit.

Benchmarking different package versions of software... now that's a real comparison of abilities.
 
Old 10-01-2012, 11:38 PM   #59
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Rep: Reputation: 242
Again, I disagree in part.

Comparing different versions of the same software package on the same system may make sense, but it can also make sense to find out which optimisation is useful for your task. The same software may perform differently depending not only on the package version but also on the system it runs on, and benchmarks help you select the platform.

As I said (and you repeat): of course, you can turn just about every Linux distro into something optimised for any purpose. But depending on that very purpose it takes a different amount of effort, often more than just installing a different OS or distro.

If you are not into exploring your OS for the sake of it, but just want to get a job done, benchmarks can be useful, and in a professional environment (specialised and general) benchmarks are used to compare different offerings. During RfPs, test installations are also exposed to attacks by the information security experts of the company or governmental organisation, so benchmarks are not everything. But they include load tests and can reveal that some systems are more robust under heavy load than others.

While you as a hobbyist may enjoy recompiling everything with your preferred optimisations, the vast majority will find that too much hassle, and in a professional environment it is simply not acceptable and could even void warranty or vendor support, depending on the contract.

For instance, a home user wants to watch media streams and have a responsive desktop. A distro that is optimised to run on servers may just feel sluggish in such a scenario, and the average Linux user today is not interested in recompiling a kernel, let alone the C libraries.

Your argument that anything is possible with Linux is correct, but irrelevant for most users. While *you* may have the skills, the knowledge and the guts to rebuild your system as a whole, most users simply want a system that satisfies their needs out of the box as far as possible, with optimum performance.

So, if your point is that benchmarks are not suited to revealing the *potential* performance of a system, then you are right.
But potential performance is irrelevant for most users and cannot really be measured. So this argument is true, but pointless.

gargamel
 
Old 10-02-2012, 12:21 AM   #60
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
Yes, but as I said, even if one distribution can do one thing faster, that doesn't mean the overall system is going to be more stable, more reliable, or for that matter more or less user-friendly. Newer and faster versions of software often still have bugs, security risks, and even instabilities with other packages, which may or may not make them better overall in the long run.

Distributions like Arch Linux and Gentoo, which take rolling-release updates straight from the package developers, often end up with tons of patches applied downstream just to quell the torrent of bugs that show up in their packages.

Gentoo and Arch Linux have some of the highest counts of bug and stability patches outside of Red Hat, Debian, and Ubuntu. Why? Because they use packages that haven't been thoroughly tested against the other packages in the system. Some of these distributions carry as many as 20 separate patches for one single piece of software, because every dependency pulled in to crank out that much more speed caused something else in the code or its execution to break.

Slackware has by far one of the lowest patch counts outside of Linux From Scratch. Why? Even in Slackware -current, all packages have to be extensively tested before they even make it out of Patrick's private testing branch. Some packages make it out without patches, but if a package requires too many patches just to become stable, often it isn't worth it. People say Slackware's -current tree is a rolling release... far from it.

Look how long it took to get Xfce updated. Why? Because there were too many instabilities in 4.8 and too many dependencies that required yet more levels of testing. In the end, once 4.10 was released and the dependencies had been tested enough and found stable, it got rolled out.

While it is nice to promote a faster system on paper to make it look appealing to new users, it can just as easily drive them away when they try it out and end up using a broken mess of half-finished software. I've used some of these so-called faster systems like Arch Linux, and I've found them to have great speed but at a severe cost to overall stability. And while Arch does have a testing branch for its software, that branch does NOT have the same level of quality assurance that a distribution like Slackware has, nor do all the packages made for Arch go through the same level of scrutiny. As I said, Arch is fast as lightning, but many times I've found it to be a huge unstable pile of shit. I've found Ubuntu to often be so under-documented that users can get lost trying to figure out the system, even though it looks easy. And I've found that the "hardest" distributions to use, like LFS and Slackware, wind up being the most stable and reliable of all, because enough care went into them to avoid the pitfalls of trying to be the poshest distribution on the block, catering to an audience that is only looking for something easy, when nothing good is fast and easy.

I often associate the terms fast and easy with a slut. And I don't need a slutty Linux distribution.
 
1 member found this post helpful.
  

