Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
hi all,
I have divided my program into modules. In the main program I use functions that are defined in separate .cpp files (e.g. function1.cpp, function2.cpp, ...), and in main I have, for example, something like this:
Code:
int main() {
    int i = function1();       // use function1 from function1.cpp
    bool is_ok = function2();  // use function2 from function2.cpp
    // ...
    return 0;
}
So how can I tell the main program to call these functions when each one lives in its own .cpp file? I heard this can be done by "making" my program.
Any ideas or suggestions are welcome.
Normally the way one does this is by putting the prototypes for functions in header (.h) files, which are then included (#include "header.h") in any file wanting to call those functions. So you'd have function1.h and function2.h (which contain the prototypes, i.e. "int foo(int bar);") and then function1.cpp and function2.cpp (which have the real definitions, i.e. "int foo(int bar) { return bar; }"). Your function1.cpp and function2.cpp have to #include their respective headers as well. Then in your main file you use the #include directive ("#include "function1.h"" and "#include "function2.h"" on separate lines) and compile all the source (.cpp) files together into one binary. Or, compile each .cpp file individually into an object file, then link all the object files together (again, just using g++ with the file names) into a single binary.
When doing this it may help to remember that g++ processes each source file in a single pass, which means it has to learn about your code in linear order: you have to provide a prototype for a function before you can (safely) use it. With this method, you aren't really telling your program which file a function lives in; you're just telling it how to pass parameters and what to expect back. Which function actually gets executed (or, more correctly, how the function gets executed) is determined at link time, when all dependencies are uniquely and finally resolved.
The process of "making" a program is more general, and usually refers to using a Makefile along with the Make program to generate a binary. A Makefile is like a recipe for building a complex program based on dependencies. More info can be found by doing an internet search for "GNU Make".
Hi adilturbo, is there a reason you want to have the function separated from the main cpp file? For an assignment, fun/tinkering or something? I am just asking since it looks like you just want one function in each file.
hi elyk1212, I'm trying to divide my program into modules because that is what is advised. Why? Suppose you have a big project with hundreds of lines of code: while compiling you get hundreds of errors, so how do you correct them all? But when you have modules the work is easier, I mean debugging and testing a single function. You can easily use a driver for each function, which means writing a short auxiliary program whose purpose is to provide the necessary input for the function, call it, and evaluate the result.
So by using drivers each function can be isolated and studied by itself, and thereby errors can often be spotted quickly.
Divide and conquer is the best solution for complex problems.
hi taylor_venable, I am going to try the ideas you gave me and get back to you.
You can see this sort of thing throughout, say, the Linux source code, where related subroutines are put into separate modules, but they are grouped logically, not "one file per routine."
In the C language, file-boundaries are significant because "private" routines and variables can be declared which are visible only to other routines in the same source-file.
You're right, it is always best (for me too, anyhow) to debug one function and one piece of functionality at a time. However, I usually have the functions I am testing within the same file (if appropriate, e.g. similar functionality). This is especially the case if I am using an OO approach.
There are definitely cases where I could see having many separate files (I do so all the time); however, I was asking what your design/requirement thoughts were, since it appears you have only one function per file. It seems like a unique way to do things. I would likely use the different files to organize related tools, in a sort of hierarchical way. But even if these singular functions are several thousand lines long, debugging should be no problem if they are contained within a file with related functions and utilities. You would just do as you were mentioning, and isolate test cases to each function one at a time. Most compile-time errors are conveniently reported by line number, making their resolution a 'trivial' task (as always, calling any debugging 'trivial' is hopeful at best).
Quote:
You can see this sort of thing throughout, say, the Linux source code, where related subroutines are put into separate modules; but they are grouped logically; not "one file per routine."
In the C language, file-boundaries are significant because "private" routines and variables can be declared which are visible only to other routines in the same source-file.
Yes, yes, my point exactly. I hadn't even thought about the scope of name resolution, but this is also important, as you will be 'extern'ing your brains out for constants and any global symbols you may have. Separate files where 'appropriate'.
It would be very tedious (and a huge pain) to have to type each source file individually. In that case you could use a shell glob (I don't think it's a very good idea, but you could: `g++ *.cpp -o main`), or use Make.
With Make, you have a couple of options: the first is to use a suffix rule like shown at http://www.gnu.org/software/make/man...l#Suffix-Rules (this may not be a good idea because it's rather inflexible) and the second is to simply list each file along with its prerequisites and the compilation rules that apply to it.
This last option is the best, because the Makefile (when carefully and correctly written) will ensure that a minimum number of compilation steps is performed to get the correct result. Plus you always know exactly what is going to be compiled and in what order. The downside is that you have to write a rule for each source file, but the upside is that you only have to write it once.
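A Makefile of that per-file kind might look like this sketch for the layout in this thread (main.cpp, function1.cpp, function2.cpp, plus their headers); the flags are just plausible defaults:

```make
CXX      = g++
CXXFLAGS = -Wall -g
OBJS     = main.o function1.o function2.o

main: $(OBJS)
	$(CXX) $(CXXFLAGS) -o main $(OBJS)

# each object lists its prerequisites, so 'make' recompiles only what changed
main.o: main.cpp function1.h function2.h
	$(CXX) $(CXXFLAGS) -c main.cpp

function1.o: function1.cpp function1.h
	$(CXX) $(CXXFLAGS) -c function1.cpp

function2.o: function2.cpp function2.h
	$(CXX) $(CXXFLAGS) -c function2.cpp

clean:
	rm -f main $(OBJS)
```

Running `make` after touching only function1.cpp would then recompile function1.o and relink, leaving the other objects alone.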