Linux - Kernel
This forum is for all discussion relating to the Linux kernel.
This isn't exactly how I'm planning to use it, just an example, but suppose I applied it to a function that I expect to return 0 in almost all cases unless something really goes wrong.
A discussion about these conditional branching mechanisms is interesting. I have read many of the macros to unravel what the intention is, but I still don't see it very clearly. I saw a double exclamation point in front of a conditional, "!!(cond)". I'm not sure of this usage, but curious.
So unlikely is a nice way to cut down the cost of tests whose condition you expect to be false almost all of the time.
[...]
But how efficient is it if I want to test the execution of a certain function?
This depends on how often that code is executed and on how well the compiler can optimize your code.
You'll have to measure.
Quote:
So unlikely is a nice way to cut down the cost of tests whose condition you expect to be false almost all of the time.
I believe it is not the test itself, but the code executed after the test, that can be optimized.
The test itself must be executed regardless of the low probability of a true result. But the compiler can rearrange the code that would execute after a true result, to optimize the typical flow:
1) The simple and obvious optimization is to put that code out of line from the main flow, hopefully in a different cache line. So the code that isn't normally executed won't be read into the cache and won't push more useful code out of the cache.
2) I don't think GCC does this (but I think it ought to): within sections of code whose execution is unlikely, regardless of any overall optimization setting biasing speed over space, there should be a local optimization of space over speed. Simply put, compile the rarely executed code into as small a binary as possible to further minimize its impact on the cache, on virtual memory paging, etc.
Quote:
say I did it like this-
I'm not 100% sure, but I don't think the difference between your two versions of that code would matter at all to the optimizer.
Burying the assignment inside the test makes the code harder to read, harder to maintain and harder to debug. If there is any useful information handed to the optimizer by burying the assignment that way, I expect the optimizer would have found the same information by itself from the more readable form.
The compiler can arrange the code so that the 'likely' case falls through without taking a branch, while the 'unlikely' case takes a branch.
For the 'likely' case, no branch is taken and the following instructions have probably already been fetched by the processor's look-ahead/prefetch. I think the prefetch advantage is in addition to the cache optimization John discussed.
Very nice idea, even if the compiler hasn't taken advantage of it yet.
Quote:
I saw a double exclamation point in front of a conditional, "!!(cond)". I'm not sure of this usage, but curious.
The purpose of this idiom is to convert from an integer to a boolean with a deterministic 'true' value, i.e., any non-zero value results in 1.
The inner "!(cond)" maps any non-zero value to 0 and zero to 1; the outer "!" inverts that again, so the whole expression yields exactly 0 or 1 rather than an arbitrary non-zero value.
Quote:
Burying the assignment inside the test makes the code harder to read, harder to maintain and harder to debug. If there is any useful information handed to the optimizer by burying the assignment that way, I expect the optimizer would have found the same information by itself from the more readable form.
I think for the purposes of my module this is the best solution.
However, I was mostly using the assignment as an example; cases might exist where I think burying it is more readable. Something quick that comes to mind is if(isalpha(c)). I feel like that better describes the operation than assigning the result of isalpha() to a variable and then testing the variable.
Either way, an interesting discussion so far on optimizing.
Quote:
such cases might exist where I think burying it is more readable. Something quick that comes to mind is if(isalpha(c)).
I wasn't objecting to a function call inside a condition. That's ordinary programming. I almost never prefer adding an extra bool variable over directly testing the condition where you care about it.
I objected to unnecessarily burying the assignment operator inside the condition.
I also do bury a lot of assignment operators inside conditions, but only when the flow of code would be broken up if you didn't, for example:
Code:
if ( pnt == 0
|| (x = pnt->bar) != 0 )
{
...
}
Depending on surrounding code and other details I left out, it might be very messy to pull the assignment operator out of the middle of the condition.
Then I try to minimize the inherent unreadability by making it very clear that the '=' wasn't a typo for '==', and by using a line break to highlight the extra operation.
I wasn't objecting to a function call inside a condition. That's ordinary programming. I almost never prefer adding an extra bool variable over directly testing the condition where you care about it.
I objected to unnecessarily burying the assignment operator inside the condition.
Yeah, I figured that was the case; I just thought I'd say something explicitly to keep the discussion going. In the original post I wasn't so concerned with the assignment as with how the optimizer treats the function execution when it is placed inside unlikely().
Quote:
In the original post I wasn't so concerned with the assignment as with how the optimizer treats the function execution when it is placed inside unlikely().
You probably already understand this from what I said before: I'm pretty sure it makes no difference to the optimizer whether the function call is inside the unlikely() or before it.
I tend to think about the optimizer reaction first. After I decide a choice will make no difference to the optimizer, then I try to arrange it for maximum maintainability.