Geekbench

Posted 02-05-2014 at 04:04 PM by ttk

Because none of us have infinite time and money to throw at our computer hardware, it's nice to be able to look at a benchmark and gauge whether it's worth spending a little extra cash on a more powerful CPU, or one with a different memory subsystem, or a solid-state disk, etc.

For this, we have system benchmarks.

For a system benchmark to be useful to a person, it must measure the system's capability to perform the kinds of tasks that person needs their system to perform. Because of this, the best benchmarks consist of several components, each exercising the system's ability to solve a certain kind of problem. Thus the prudent seeker can look at the performance numbers for the components most like their expected workload, and ignore the others as irrelevant.

Of this kind of benchmark, SPECcpu is probably the best, particularly for server-type systems. It exercises sets of integer-intensive and floating-point-intensive components based on real-life solutions to popular problems, and it does so in two contexts: serial (running on one core of one processor) and parallel ("rate", running on as many cores and processors as the system has).
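
To make the serial-versus-rate distinction concrete, here's a minimal sketch in Python. SPECcpu itself is a compiled C/C++/Fortran suite, so this is only an illustration of the idea: the same integer-heavy kernel, timed once running alone and once running as one copy per core.

    import multiprocessing
    import time

    def integer_work(n):
        # A toy integer-intensive kernel: sum the divisors of n by trial division.
        total = 0
        for i in range(2, n):
            if n % i == 0:
                total += i
        return total

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    if __name__ == "__main__":
        tasks = [2_000_003] * multiprocessing.cpu_count()

        # Serial: every copy of the workload runs on one core, one after another.
        timed("serial", lambda: [integer_work(n) for n in tasks])

        # "Rate": one copy per core, all running at once.
        with multiprocessing.Pool() as pool:
            timed("rate", lambda: pool.map(integer_work, tasks))

On a multi-core machine the "rate" time should approach the serial time divided by the core count; how far it falls short of that says something about contention in the memory subsystem.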

SPECcpu is wonderful. It is awesome. It has shortcomings:
  • SPECcpu is for vendors to show off their most powerful systems. Because of this, only expensive, high-end systems get their benchmark results published.

  • SPECcpu takes pains to exercise just the CPU and memory subsystems in isolation from I/O devices such as disks. In the real world, disk performance often has an impact on actual system performance, but SPECcpu cannot reflect this influence.

  • SPECcpu's published results generally represent the performance of aggressively optimized systems. Vendors are allowed to run the benchmarks, profile them to see where they are slow, tweak their system, and run again, repeating this cycle as many times as needed. Only the run on the most-optimized configuration gets published. Vendors will recompile their kernel, hand-tweak standard libraries, customize compiler optimization passes, and so on, to make their system look as good as possible.

That may be fine for the enterprise, where multibillion-dollar corporations can throw nontrivial resources at optimizing their production servers, but where does that leave the rest of us?

Small and mid-sized companies, universities, and enthusiastic amateurs typically don't have the time, budget, or expertise to hyper-optimize the most expensive hardware, or to run it from a pure RAMdisk environment. SPECcpu results do not represent the performance such people can expect from their systems.

I would say they need a "benchmark for the rest of us", but that's not quite true. Professional system administrators and engineers are not "the rest of us". Neither is the home enthusiast mining dogecoin on a bookshelf full of second-hand laptops.

We need a benchmark for geeks. Hence, Geekbench.

Like SPECcpu, Geekbench consists of several components, each representing a common type of workload.

Unlike SPECcpu, Geekbench is for benchmarking all kinds of systems, from the high end to the low. It uses whatever operating system, compiler, libraries, etc. are installed on the tested system. It allows the disks and filesystem to impose themselves upon the workload.
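
A component along these lines (sketched here in Python for illustration; the real components differ) shows what it means to let the disk and filesystem show through: the timing deliberately goes through the ordinary filesystem path rather than isolating the CPU.

    import os
    import tempfile
    import time

    def disk_component(size_mb=256, block=1 << 20):
        # Write, sync, and read back a file through the ordinary filesystem
        # path, so the OS, filesystem, and disk all shape the measured time.
        payload = os.urandom(block)
        with tempfile.NamedTemporaryFile(delete=False) as f:
            path = f.name
            start = time.perf_counter()
            for _ in range(size_mb):
                f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the data out to the device
            write_s = time.perf_counter() - start
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block):
                pass
        read_s = time.perf_counter() - start
        os.unlink(path)
        # Caveat: the read-back may be served from the page cache; a real
        # component would use a file larger than RAM or drop caches first.
        return {"write_MBps": size_mb / write_s, "read_MBps": size_mb / read_s}

    if __name__ == "__main__":
        print(disk_component())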

Also unlike SPECcpu, Geekbench automatically uploads the results of a run to a central server, along with a description of the system (including its hardware and software), to be stored in a database. A website provides a public front-end to that database, so people can aggregate, segment, filter, and view benchmark results depending on their needs.
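
In outline, the upload is just a POST of a JSON report describing the machine and its scores. The URL and payload schema below are placeholders for illustration, not the real collector's interface:

    import json
    import os
    import platform
    import urllib.request

    def submit_results(scores, url="https://example.org/geekbench/submit"):
        # POST the scores plus a machine description as one JSON report.
        # The URL and report shape here are invented for this sketch.
        report = {
            "system": {
                "os": platform.platform(),
                "machine": platform.machine(),
                "cpus": os.cpu_count(),
            },
            "scores": scores,
        }
        req = urllib.request.Request(
            url,
            data=json.dumps(report).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

A real collector would also want at least rate limiting and some sanity checks on submissions, but the shape is the same.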

Thus it is hoped that Geekbench will measure the performance of more kinds of systems, with more kinds of subsystems (cpu, memory, disk, filesystem, and system software), and represent enough samples to provide a distribution curve of expected performance.

While it is certainly possible for bad actors to game the system and submit bogus results, the expectation is that many more people will submit good results than bad ones, and the latter can be filtered out as outliers.
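
One simple screen (a sketch of the idea, not what the project implements) is to drop submissions more than a few median absolute deviations from the median:

    import statistics

    def drop_outliers(samples, k=3.5):
        # Keep only results within k scaled MADs of the median -- one simple
        # way bogus submissions could be screened out of the database.
        samples = list(samples)
        med = statistics.median(samples)
        mad = statistics.median(abs(x - med) for x in samples)
        if mad == 0:
            return samples
        # 1.4826 scales the MAD to match a standard deviation for normal data.
        return [x for x in samples if abs(x - med) <= k * 1.4826 * mad]

    print(drop_outliers([98, 101, 100, 99, 103, 9999]))  # drops 9999

Median-based filters like this are robust precisely because a handful of wildly bogus results barely move the median, so honest submissions dominate.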

That's the idea, but in practice I haven't been giving the project the time it requires. Geekbench has gone through a few revisions, using mostly microbenchmarks (which are not really what it needs).

It does automatically upload its results to my server, but I never wrote the web front-end to allow users to actually see the collected results.

Geekbench has languished. I haven't worked on it for years. It's still a good idea that I'd like to see happen, but I don't know when or if I'll get around to getting it in order.