Thread limits in dynamic libraries as opposed to static libraries
As a preface, I'm well aware that I'm attempting to create an unusually large number of threads. I work on a high-performance algorithm that works best with huge numbers of threads running on high-end hardware.
The issue I am seeing has to do with the number of threads I can have open at once in a 32-bit process running on a 64-bit Linux installation. I see the exact same behavior on both Ubuntu 10.10 and Red Hat Enterprise Linux 5.5.
In my testing, I use a function, statically linked into the executable, that creates threads using pthread_create() with a 16k stack size. This test lets me create slightly under 32k threads (the count varies from run to run) before pthread_create() fails with ENOMEM. However, if I copy/paste the exact same function into one of my shared libraries and call it from there, I can only create just under 512 threads (also varies slightly) before it fails with EAGAIN. The EAGAIN failure is very consistent and does not vary significantly with available memory.
Are there any specific limits imposed on threads created from shared libraries? My algorithm tends to peak at about 2-3k threads, so a parameter I could set to work around this limit would be ideal.