Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
Floating-point calculations; Pi to the Nth digit seems a likely choice.
CPU performance depends on a lot of variables you're not considering here, but that would give you some base numbers on processing capacity.
Actually, it is very difficult, if not impossible, for any user-mode program to measure the "performance" of any CPU, because the process is at the mercy of the operating system. Furthermore, "the presence of the experimenter changes the experiment."
Generally speaking, I suggest that the best way to measure such things is by statistical methods. Run an experiment, under different real-world conditions judged to be representative. Collect several hundred or thousand data-points. Then, offline-analyze them.
One strategy that I find useful is to establish some kind of "definite goal." Such as: "at least 95% of all requests must complete within X milliseconds, and the standard deviation of the completion times must be less than Y." Or: "there must be at least a 98% probability that all requests received within any 3-second sample window will be completed." And so on. Choose goals that are meaningful to the people, companies, or workgroups that are interested in using the computing-machine, and be certain that you can provide results which are relevant to: "I want to use the computing-machine to reliably handle this duty for me."
Take samples in all sorts of representative conditions, taking careful note of the conditions that prevailed when each sample was taken. "Sometimes, the sample has the system all to itself. Sometimes, the system's getting slammed."
My counter to that might be: "well, who cares about MFLOPS [Millions of FLoating-point Operations per Second], anyway?"
This rarefied statistic is only of interest if your hardware environment is so exotic that you can realistically expect your job to predictably, actually have(!) access to the hardware, without any sort of competition, for a significant amount of time.
... and if you do, what you actually wind up measuring is "the overhead of the operating system."
If your hardware environment is realistic, then there are so many other factors ... factors which might well consume milliseconds, and do so "unpredictably(!!)" ... that: "MFLOPS ... just ... don't ... matter!"
Thus, my admonition to use classical statistical measuring techniques. "You are 'taking a sample,' and you are doing it many times, under real world(!) conditions."
Furthermore, "what you are sampling," just like "the result-statistic that you intend to produce," is strictly focused upon "the payoff" of the experiment process that you are observing.
Instead of "fixating on 'MFLOPS, or what-have-you,'" and therefore being totally led around by the nose by their (inevitable ...) unpredictability, you are doing precisely the opposite: you are constructing a test which seeks to extinguish the influence of "that unpredictability, along with every other 'environmental factor.'"
"Just sample the Black Box.™ If you do this a sufficient number of times, you don't have to care how the Black Box works."
(And, by the way, you can also predict just how many samples you need ...)
(And, also 'by the way,' that number is quite small.)
Testing raw CPU power alone is rarely very helpful, since any real-world application of that processing power would depend on many other factors as well, such as memory bandwidth/latency, I/O speed, etc.
For example, many years ago I had a machine that ran process X in Y minutes. A few years later I was tasked with building a replacement machine that would run process X faster. I ran some experiments and built a machine that actually had a _significantly_ slower clock speed (on the order of 20%), with an associated 10-15% reduction in raw CPU performance, but it had a MUCH faster memory interface, and it ended up running process X quite a bit faster than the previous machine. Process X was actually being bottlenecked by the memory bandwidth on the older system, and opening that up sped up the process significantly, even though the CPU was slower.
Similar stories can be told for cache size differences, hard drive random/sequential access differences, etc.
In order for a performance metric to be of _any_ use, it needs to measure what you're actually going to use the computer for, rather than some random, unrelated, number crunching algorithm.
Which short C code would make a good test for CPU performance / benchmarking?
Indeed, despite its deceptive simplicity, writing a reliable CPU-RAM subsystem benchmarker is not trivial; at least, I know of no such open-source tool.
Yet my proposition is to try a Knight's Tour's simplistic main loop. It generates billions of 128-byte lines, each unique of course, which is a nice bonus output for testing special cases of hashing and searching. If you want to play with my KT-dumper, it is attached; its main loop looks like this (MinGW used):