Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
Looking at my program, I've noticed that I'm extensively using a lot of functions to split the program up, and I'm wondering if I'm incurring a performance hit because of the number of functions I'm calling from other functions. Is function call overhead still a problem, or do GCC and other modern compilers alleviate this?
Also, does the number of open file descriptors keep climbing as a program runs, or does the operating system get one back when the stream is closed?
In general, the compiler decides how to reduce function call overhead. You can nudge it with the inline keyword, which asks the compiler to substitute the function body at each call site.
You shouldn't worry about performance unless you know there is an issue. For example, have you profiled your code and found a big bottleneck? Or are you running in an ARM environment where resources are very limited?
File descriptors are tracked by the kernel as files are opened and closed. Could you tell us what you mean by "get one back"?
"Premature optimization is the root of all evil." If the code works elegantly and understandably with all the function calls in, best to keep it that way until you know you've got a problem.
File descriptors are issued on a per-process basis. Whenever you close a file in your program, its descriptor becomes available for use again, so as long as you close files when you're done with them you should be OK. (Unless you are expecting to read from a million different files simultaneously.)
Ahh, OK. I wasn't really worried that my program was slow, considering it's rather basic in its function and nasty in its current form. It's going to be rewritten, but out of passing interest I was just curious whether people really worried about it anymore with C code. As for the file descriptors, yes, thank you, and no, I'm not opening millions of files, just the same one over and over again. (This is part of the nasty part.)