Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I have a situation where a function pointer in a data structure might point to one of two kinds of functions - a function that takes no arguments, or one that takes an integer argument.
I can think of three ways to handle this...
1. Use two function pointer members with different prototypes, one for each type of function. This seems robust (in that compile would probably fail if I referenced the wrong function type), but also a bit ugly and hackish.
2. Give all the functions an integer argument, but do nothing with it unless required. This seems like a terrible hack to me.
3. Use the ellipsis operator, i.e. int (*method)(...); This would be less hackish (assuming I understand it correctly), but it looks like it might introduce runtime bugs if I mess up.
Are there any better ways? If not, which of the above makes the most sense?
Edit: functions that take an argument and never use it don't seem to generate warnings in GCC (with -Wall). This still strikes me as terribly crude though.
Also it looks suboptimal from an efficiency standpoint. My code is not required to perform well, but it should be at least somewhat efficient... And if I have a function taking an integer argument and doing nothing with it, isn't it copying the integer and then throwing it out?
If so, can the compiler optimize out such inefficiencies? Moreover, is it reasonable to depend on the compiler to do that, or should I be handling things in some other, smarter way?
Last edited by Gullible Jones; 05-04-2013 at 09:35 PM.
> I have a situation where a function pointer in a data structure might point to one of two kinds of functions - a function that takes no arguments, or one that takes an integer argument.
Methinks this is the problem: every structure member (including function pointers) should have one well-defined meaning, not 'sometimes this, sometimes that'.
PS: functions may ignore their parameters (for example, argc and argv are quite often ignored in a minimal test program's main function); the gcc option -Wextra warns about that; use __attribute__((unused)) to suppress the warning.
Quote:
Also it looks suboptimal from an efficiency standpoint. My code is not required to perform well, but it should be at least somewhat efficient... And if I have a function taking an integer argument and doing nothing with it, isn't it copying the integer and then throwing it out?
That's really not the kind of thing to worry about in terms of efficiency. Passing an integer will be 2 instructions (push & pop) for a stack-based calling convention (x86), and just a single instruction (setting a register) for a register-based calling convention (x86-64). If you hadn't said your code "is not required to perform well" it might be (probably not) worth worrying about the cost of function pointers vs direct calls, but the gain in maintainability and flexibility almost certainly outweighs the tiny performance hit.
Quote:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil
Regarding the cleanliness of missing/extra arguments, fcntl and open both have an optional third argument. Given the value of the second argument, it only makes sense to look for the third argument in certain cases (e.g. F_SET* and O_CREAT.)
How is it that you can use the same code to call both types of function? How do you have an argument to pass in some cases and not in others, and how do you know when it should be passed to the function? Regardless, you might as well cast them all to void(*)(void) and cast them to the appropriate function type before calling them, since there seems to be other information that allows you to determine the prototype.
The immediate thought that pops into my mind here is: "hey, we've already got high-level programming languages that know all about that sort of thing." You can define multiple methods to an object, which vary according to the parameter-lists that they take, and any impedance-mismatch problems will be dutifully detected at compile time.
If you do need to use function pointers to get at a particular subroutine, I recommend that you either use two separate pointers (to two separate subroutines), or that you devise the routine ... in whatever programming language you are working with ... to accept a self-describing data structure such as a hash-table as its (perhaps, only) actual input parameter.
Really, though, you do want to find a way to resolve the potential issues with doing this sort of thing, at compile-time. Not at runtime. You are definitely "riding the pony bare-back and without spurs or even a hat" if you do this in C. This requirement is actually quite common, and it has been implemented in the design of a number of mainstream languages.
Use function pointers only if you must; they are especially good for making your code unreadable and harder to debug. Worrying about trifling perceived efficiencies is a waste of time in reality. Maintainable, simple code is much, much wiser.
Donald Knuth: “premature optimization is the root of all evil.”
Code:
There is no doubt that the grail of efficiency leads to abuse.
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of
noncritical parts of their programs, and these attempts at efficiency actually have a strong
negative impact when debugging and maintenance are considered. We should forget about
small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Last edited by bigearsbilly; 05-17-2013 at 09:26 AM.