Linux - Kernel: This forum is for all discussion relating to the Linux kernel.
I was wondering... Given the number of different patchsets for low latency, real time, and performance improvements in general, what would be a good way to measure the performance each of these brings to the table? Take the Con Kolivas series of patches, for both the desktop and servers. It's important to note that requirements in these two segments are quite different, as the workloads and system tasks differ as well.
Specifically, what would a good methodology be to test the kernel for different tasks: everyday desktop performance, graphics or media workstation performance, real-time system performance, or, why not, gaming systems? Linux can accommodate all of these tasks, but results will depend heavily on the extra features and configuration options of the kernel. So I was thinking of a more "standard" way to test for performance, and here I mean real-world performance, not synthetic tests only: conditions which push the system and stress the areas under test, like I/O response and throughput (I'm thinking that running several database queries plus table updates and submissions would be a good way to test these). But I'm at a loss for things like low latency for desktop use, as many other variables play a major role: graphics card drivers (proprietary/OSS), motherboard brands and chipsets, memory timings and brand, etc.
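To make the I/O-throughput side of this concrete, here is a minimal sketch of the kind of probe you could run under each kernel: timed sequential writes followed by fsync(). The file name and sizes are arbitrary example values, not a standard benchmark, and the numbers are only meaningful as relative comparisons between kernels on the same box.

```python
# Rough sequential-write throughput probe: write total_mib MiB in
# buf_mib chunks to a scratch file, fsync, report MB/s.
# File name and sizes are arbitrary example values.
import os
import time

def write_throughput_mb_s(path="iotest.tmp", buf_mib=1, total_mib=32):
    """Write total_mib MiB in buf_mib-sized chunks, fsync, return MB/s."""
    buf = b"\xa5" * (buf_mib * 1024 * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    t0 = time.monotonic()
    for _ in range(total_mib // buf_mib):
        os.write(fd, buf)
    os.fsync(fd)                      # include the flush in the timing
    elapsed = time.monotonic() - t0
    os.close(fd)
    os.remove(path)
    return total_mib / elapsed

print(f"sequential write: {write_throughput_mb_s():.1f} MB/s")
```

Running the same probe on a vanilla kernel and on a patched one (ideally several times, averaging) gives at least one repeatable number for the I/O side of the comparison.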
I'm not talking about SiSoft Sandra or synthetic cr*p like that. I'm talking about standard applications and situations which would push the system to a point where the improvements from these additional patches would show (over a vanilla kernel): for instance, a high memory utilization scenario on a laptop to test swap prefetch. That kind of thing.
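A toy memory-pressure generator for that "high memory utilization" scenario might look like the sketch below. The sizes here are deliberately small so the sketch is safe to run; in a real swap-prefetch test you would set total_mib above physical RAM so the kernel is actually forced to swap.

```python
# Toy memory-pressure generator. The small default sizes are only to
# keep this safe to run; size total_mib above physical RAM for a real
# swap-prefetch test.
import time

def apply_memory_pressure(total_mib=64, chunk_mib=8, hold_secs=0.0):
    """Allocate and touch total_mib MiB in chunk_mib pieces, then release."""
    chunks = []
    for _ in range(total_mib // chunk_mib):
        chunk = bytearray(chunk_mib * 1024 * 1024)
        npages = len(chunk[::4096])
        chunk[::4096] = b"\x01" * npages  # touch one byte per 4 KiB page
        chunks.append(chunk)
    time.sleep(hold_secs)             # keep the pages resident for a while
    return len(chunks)

print(apply_memory_pressure(), "chunks allocated and touched")
```

Touching one byte per page matters: an untouched allocation may never be backed by real memory, so it creates no pressure at all.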
We Linuxers are famous for pushing our systems to new limits, and even though the kernel has scaled up rather nicely to these requirements, I was thinking of a way to actually test for this.
By the way, is there any way to test memory throughput other than memtest86+? I ask because sometimes it is useful to test for memory speed: not necessarily memory consistency or faulty memory, but raw throughput.
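For a quick, crude userspace answer (as opposed to a correctness test like memtest86+), you can time raw buffer copies. The buffer size and iteration count below are arbitrary, and the figure includes interpreter and allocator overhead, so treat it as a relative number for comparing configurations, not absolute hardware bandwidth.

```python
# Crude userspace memory-copy bandwidth probe. Results include Python
# overhead; useful only for relative comparisons, not absolute numbers.
import time

def copy_bandwidth_mb_s(buf_mib=64, iters=8):
    src = bytearray(buf_mib * 1024 * 1024)
    t0 = time.monotonic()
    for _ in range(iters):
        dst = bytes(src)              # one full copy of the buffer
    elapsed = time.monotonic() - t0
    return buf_mib * iters / elapsed

print(f"copy bandwidth: {copy_bandwidth_mb_s():.0f} MB/s")
```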
Because Linux runs on so many different architectures (alpha, arm, i386, ia64, m68k, mips, ppc, sh, sparc, et al.) and platforms, there cannot be one single performance measurement. Results will vary among platforms, compilers, processors/SoCs, protocols, etc.
The way I look at it is... just try different options in the kernel and stick with what feels fastest to you. The thing that bugged me was that when I switched from a 2.4.x kernel to the 2.6 kernel, way back when it was in beta, there was a MAJOR difference in the feel of the performance, but now it feels as if that's gone away, and I'm not sure why. I believe what made the most difference for me as the end user was the low-latency work. Anyway, there's no real way to test exactly how much it improves performance, and there are different types of performance, considering computers are used for different things. For example, a server wouldn't run well with the 1000 Hz timer interrupt option, whereas a desktop would "feel" like it was running faster with it (although it's technically not). If it's a desktop system, all that really matters is how the system feels to the user.

Aside from the kernel, what I like to do, and what really does make a difference in performance as far as I can see, is compiling my own stuff with optimizations that fit the way the program works and my system. For example, I compile something like X, which is running almost all the time, at -O3, because I want it to run the code as fast as possible (not to the point of making it unstable, though). On the other hand, I compile Firefox at -Os because I don't care as much about how fast it "runs": it doesn't do that much processing, but it is a pretty big program, so compiling at -Os makes the binaries smaller, meaning the disk has to read less, and it opens much faster, which is really what I care about with Firefox. It's hard to measure what those performance improvements actually are, though.
I suppose you could just take a stopwatch and time how long it takes for Firefox to start up, compile it again with some different options, and see how fast it starts then, but also make sure that it runs fast enough to be usable once open, same with all the other programs on your system. The basic rule for me is that things I open and close a lot I compile at -Os, while things I know are running almost all the time, like X, daemons, fluxbox, gaim, etc., I compile at -O3 so that once they're up and running, they run really fast (but don't start that fast); everything else I tend to compile at -O2. You can't measure all of that with a stopwatch, especially how responsive programs are while they're actually running, but just see if they feel any different.
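The stopwatch part of that can be made repeatable by launching the command a few times and averaging wall-clock time. The /bin/true placeholder below stands in for whatever binary you rebuilt with different flags; note that this measures run-to-exit, so for a long-lived GUI app like Firefox (which stays open) you would still need the manual "window visible" stopwatch, or a mode like firefox's that exits immediately.

```python
# Repeatable startup timing: run a command several times, report the
# average wall-clock time. "/bin/true" is a placeholder command.
import subprocess
import time

def avg_startup_secs(cmd=["/bin/true"], runs=5):
    total = 0.0
    for _ in range(runs):
        t0 = time.monotonic()
        subprocess.run(cmd, check=True)   # block until the process exits
        total += time.monotonic() - t0
    return total / runs

print(f"{avg_startup_secs():.4f} s average over 5 runs")
```

Drop filesystem caches between runs (or compare only warm starts with warm starts) so you're measuring the binary, not the page cache.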
I was thinking more of actual processor time than wall-clock time. I do have one "test" that makes even the most ambitious kernel and auxiliary-program optimizations tremble: Starcraft. As strange as it sounds, it is my preferred benchmark for desktop systems running an optimized kernel, with nice Wine and X optimizations too. It is kind of ridiculous that this one game has the ability to tell me whether I did something wrong just by animating sprites and mixing lots of sounds, but it does. When it runs as it would under Windows® (speedy screen draw, accurate input, etc.), the rest of the system runs very, very smoothly.
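On the processor-time-versus-wall-clock distinction: the two are easy to measure side by side. A sketch, with a sleeping workload and a CPU-bound one as the contrast:

```python
# CPU time vs wall-clock time: process_time() counts CPU seconds this
# process consumed; monotonic() counts elapsed real time. A workload
# that mostly sleeps shows a large gap between the two.
import time

def cpu_vs_wall(work):
    c0, w0 = time.process_time(), time.monotonic()
    work()
    return time.process_time() - c0, time.monotonic() - w0

cpu, wall = cpu_vs_wall(lambda: time.sleep(0.2))    # mostly idle
print(f"sleep:     {cpu:.3f}s CPU vs {wall:.3f}s wall")

cpu, wall = cpu_vs_wall(lambda: sum(range(10**6)))  # mostly CPU-bound
print(f"busy loop: {cpu:.3f}s CPU vs {wall:.3f}s wall")
```

For desktop "feel", wall-clock is what the user experiences, but CPU time is the fairer number when comparing kernels, since it is less polluted by whatever else the machine happens to be doing.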
I was thinking more along the lines of compiling a "list" of demanding applications and some performance numbers to compare against.
I too remember, back when 2.6 was released (or about to be), that performance felt much faster than 2.4, though you could tweak 2.4 to perform almost as "fast" as 2.6. Again, for desktop systems "speed is in the eye of the beholder". However, I wonder what has made some big names in the entertainment and special-effects industry use Linux instead of FreeBSD or other *nix-based OSes, and what their claims in terms of performance are based upon.