-   Linux - Newbie
-   What if we run out of pid's?

antriksh 02-07-2013 11:45 PM

What if we run out of pid's?
Hi.. This is a hypothetical question. As we know, we can create only a limited number of PIDs:


cat /proc/sys/kernel/pid_max
That gives 32768 on my system. So that means we can create a maximum of 32768 processes on my system. My question is: what will happen if all the PIDs have been used and a new process is forked? Will it replace an existing process?

02-08-2013 12:36 AM


1. Your system will be dead by that time! :)
2. Yes, it will reuse the PIDs of processes that were killed/terminated/stopped gracefully.

With general high-level server usage, I have never seen this happen.

Unless you fork infinitely.


shivaa 02-08-2013 12:57 AM

Lots of documentation is available on this topic. PIDs are recycled: whenever a process gets terminated, its PID gets assigned to another process, if one is waiting.
The concept of paging also comes into the picture when talking about PID allocation.

1. Process creation in UNIX
2. Process identifier
3. How are PID's generated?

antriksh 02-08-2013 01:18 AM

No.. I know that PIDs get recycled and that unused PIDs will be reused. But my question, as I said, is hypothetical: how is the kernel designed to handle this situation?
We know of a similar scenario where the system runs out of memory, the OOM killer is invoked by the kernel, and it kills some specific processes based on some algorithm. But do we have any such mechanism for when PIDs are out of stock?

jpollard 02-08-2013 08:21 AM

If you have 32768 active PIDs on any Intel- or ARM-based system, it is hung.

But you CAN change it:

# cd /proc/sys/kernel
# cat pid_max
32768
# echo 50000 >pid_max
# cat pid_max
50000

Which leads me to believe that the /etc/sysctl.conf file can be used to set the value at boot time.
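As a sketch of that boot-time approach, the entry in /etc/sysctl.conf would look like this (the 50000 is just the example value from above, not a recommendation):

```shell
# /etc/sysctl.conf -- raise the PID ceiling at boot
kernel.pid_max = 50000
```

It can be applied without rebooting by running "sysctl -p" (or "sysctl -w kernel.pid_max=50000") as root.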

Should have included a reference:

There it indicates that the maximum value is bounded by the size of the pid_t type (from an include file), which has 2147483647 as its maximum (I think this is the int limit), since pid_t is defined to be an int.

There ARE a few places that are more restrictive: /usr/include/asm/posix_types_32.h defines __kernel_ipc_pid_t as a short (hence the 32768 maximum)... so some problems may exist for processes using shared memory on 32-bit Linux kernels. All 64-bit builds use an int (though I don't know why it isn't an unsigned int). The generic case uses "int" as the base type (the 2147483647 value).

allend 02-09-2013 01:07 AM

The situation of running out of PIDs can be triggered by a fork bomb. This is a good article that also discusses prevention by the use of ulimit.
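As a sketch of the ulimit approach: RLIMIT_NPROC caps how many processes a user may have, so a fork bomb hits the limit instead of exhausting the PID space. The 100 below is purely an illustrative cap, run inside a subshell so it does not stick to your login shell:

```shell
# Cap the number of processes this shell (and its children) may create.
# The subshell keeps the cap from affecting the parent shell.
(
    ulimit -S -u 100   # set the soft limit on user processes
    ulimit -S -u       # print the cap now in effect
)
```

A soft limit can always be lowered by an unprivileged user, which is why it is usable for this kind of self-protection.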

jpollard 02-09-2013 01:17 PM

Good fork bombs are hard to kill (they ignore SIGHUP and all the other signals, except SIGKILL (signal 9)).

But there are ways... The most reliable is to:

1) Increase your priority ("renice -5 $$", where $$ is the shell variable that identifies the running shell itself). This gives the administrator priority over the fork bomb when it comes to running.
2) Reduce the priority of all processes owned by the user that started the bomb. This will take several passes of "renice 20 -u <user>" to accomplish. Then you can do a killall -u <user>.

The best way to prevent them is to impose reasonable limits on user logins (set their login ulimits to 5000; this is more than enough for nearly any normal server, though if you have more than 5000 CPUs and users run large MPI jobs, raise the limit accordingly).
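The login-time limits described above would normally be set in /etc/security/limits.conf. A sketch, using the 5000 figure from the post (the "@users" group name is an assumption; substitute your own domain):

```shell
# /etc/security/limits.conf -- per-login process caps (values are examples)
# <domain>   <type>   <item>   <value>
@users       soft     nproc    5000
@users       hard     nproc    6000
```

The soft limit is what a login session starts with; the hard limit is the ceiling an unprivileged user may raise it to.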
