Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I found so many good tips here that I hope someone will be able to help me with my problem as well.
I am currently programming a suite of daemons in C++ on 64-bit CentOS.
More or less all those daemons are SOAP servers, with the SOAP part handled by gSOAP.
One of the daemons is a monitor sitting on a certain server and more or less just checking if the other daemons are up and running - but also offering a SOAP interface to get the 'current situation' on the machine.
Whenever the detected situation does not correspond to the defined one (i.e. a process that should be running is not), the missing process is started up.
This part, I implemented via a system() call that starts the other process.
Lately I was a little confused because this monitoring process would often not start, since it could not bind its server port, the one the SOAP server listens on.
My confusion grew when netstat told me that the port was assigned to one of the processes that had earlier been started by the monitor.
Even better: if I then kill the process that netstat reports as holding the port, the next netstat shows the next monitor-started process as the one listening on it.
Only once I kill all the processes that were started by the monitor is the port released, so that I am able to launch the monitor again.
Now I wonder: why does this port get 'reassigned' to the children - and most of all: how can I prevent this from happening?
It's not so much that the port gets reassigned to the children; it's that the children have it open as well. I'm guessing you opened the socket and then forked the children. Just close the socket in the child before doing anything else in the child process.
There is no fork in the monitor - I just call the system() function with the command that starts the child, plus an "&" to have it detached from the shell.
Is there a way to get the children started really 'from scratch', i.e. without them 'having the port open as well'?
Just as if they were started from the command line (which I thought would be the case when using system())...
You could use a fork/exec instead of that system() call; then you'd be able to set the behaviour you'd like. Otherwise, I'd guess you could set the close-on-exec flag on the socket (using fcntl) - that would probably work as well.
jiml8 - you are perfectly right, of course: system() IS also a fork.
Well, I have now tried a fork/execv combination anyway, but I'm no further than I was before.
What I did was first fork, then close the server in the child process, then replace the process image via execv.
Unfortunately this didn't work out, as the server terminated in the parent as well (it is implemented as a singleton; I guess that's the reason).
I probably really have to close this socket somehow, either in the child process after the fork, or in the startup of the other programs.
The first seems preferable to me - the second way would require every program that gets started by the monitor to include this closing mechanism.
But I still have a problem understanding how this works. The fork will 'duplicate' the process, apparently including this listening socket.
To close this socket, I need to have its file descriptor - which has also been duplicated and is therefore no longer the original one.
So the question would be: how do I, after the fork, find out which file descriptor I have to close in the child's branch?
Or is there some 'close all' command that would help here?