Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I have an app that forks a child process on every connection. The child process lasts a few seconds to a minute and then completes. I am using the wait API to clean up the resources. When I run ps -eH | grep *** it does not show the completed child processes, so I am confident that they are exiting successfully.
Anyway, after heavy usage (20,000 connections) my memory usage climbs to 90% and the OS starts using swap. This is according to the GNOME system monitor. I also monitor the parent process, which is the only process that always stays in memory. The memory for this process never increases.
I was told that once the child process completes, all of its memory is freed. Under that theory, the only way my used memory could steadily climb is if the parent process is leaking memory, which, as I stated above, is not happening.
I am concerned that after a few days I will use up all my swap and my application will crash. Are the child processes truly freeing up memory, or is this just Linux caching stuff until it needs the memory?
Hi,
Since you are using wait(), all the resources should be freed automatically. But have you made sure that your child processes themselves are not leaking memory while they run? Your memory usage will go up while a leaking child is alive. One more thing I would like to add: why don't you go for threading? What you are doing is theoretically correct, but see if you can implement the same with threads.
No, you need to free memory yourself. For example, if you have used malloc() inside the child process, then don't forget to call free() before terminating the program.
Originally posted by Kumar: No, you need to free memory yourself. For example, if you have used malloc() inside the child process, then don't forget to call free() before terminating the program.
No, when a process exits, all of its memory is freed. You can leak kernel resources if you do not handle its exit correctly (I mean you need to call wait() for its pid, or handle the SIGCHLD signal in your parent process); otherwise a record for the exited process remains in the system process table, and the child becomes a zombie.
Last edited by dimm_coder; 10-24-2003 at 06:06 AM.