Your redirection of output will be done in the child processes, yes? In that case, the limit problem is not the one you think it is.
There is no maximum number of file descriptors per "user" (even for root), or per process group, so that's not your limit. (There is a per-process open-file limit, but that's a separate issue.)
The limit problem is the number of processes you can fork. This is not a bug, it's a feature. It's meant to eliminate the possibility that a buggy program can get stuck in an infinite loop containing a fork() spree.
To examine this more closely, try this at the shell prompt:
Code:
ulimit -a
ulimit -u
On my system, this shows the max user processes to be 1024. I don't know whether that counts all the processes a given user may have (no matter how they are spread among parent processes), or how many fork()s may be done, or how many fork()s may be done minus one for the parent process that did them. Unless someone can jump in here with the answer, you get to experiment.
I'm thinking that if you're going to use anywhere near, say, 1024 processes, you may want something better than figuring out how many processes you can have, taking a deep breath, and just doing 'em all. Check that each process starts successfully before starting the next one. That makes for more robust code. It will run a little slower, but the payoff is tremendous in isolating problems, whether they show up in initial debugging or in sudden changes in the availability of system resources.
You might want to consider a sort of handshake system, using file record locking or something, so that each process can tell the parent that it has started OK and has acquired the resources it needs.
Hope this helps a little.