
hilou 09-18-2016 06:08 AM

how to increase the max open files of nginx master process
 
Hi All,

I know there is a directive called worker_rlimit_nofile which can change the max open files limit of the nginx worker processes, but how do I increase the master's?

# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 773092
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 773092
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

nginx version:
/usr/local/nginx/sbin/nginx -v
nginx version: nginx/1.4.4

OS:
Ubuntu 14.04.3 LTS (Trusty Tahr)

worker config:
worker_rlimit_nofile 20480;

/proc/8389/limits:
Max open files 1024 4096 files

8389 is the master process id.

Nginx is not managed by systemd; it is started via a SysV init script instead.

Thank you in advance
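
For reference, the worker-side settings involved look roughly like this in nginx.conf (a sketch; the worker_processes and worker_connections values are only illustrative, worker_rlimit_nofile 20480 is the value quoted above):

worker_processes 4;

# raises RLIMIT_NOFILE for the worker processes only, not the master
worker_rlimit_nofile 20480;

events {
    # epoll event method on Linux
    use epoll;
    # per-worker connection cap; keep it below worker_rlimit_nofile
    worker_connections 10240;
}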

jpollard 09-18-2016 09:48 AM

You really don't need to.

The task the master process is doing is to receive connection requests and then pass each connection to a worker.

The normal number of requests will be 0, rising to 1 when a connection request comes in. It only takes a few microseconds to pass that connection to a worker. The only other files open would be stdout/stderr, and those would be redirected to a log file.

Additional requests will be queued for processing, but the master process can only handle one at a time.

The reason a worker can have many simultaneous connections is multi-threading. There can be MANY threads active, but there would be only one connection to the client and only a few more used to access data. The aggregate for the workers could be a large number (assuming multi-threading). If each thread is an independent process, then having a huge number of open files would be a bit of a waste.

The general reason for the 1024 open-files limit is that it is also the limit on file descriptors that the select system call can support. epoll can handle more, but usually the only thing that needs that many files open at a time is a database server, not a web server.
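
One way to see the master/worker split in practice is to compare their limits in /proc (a sketch; 8389 is the master PID from the post above, and the worker PIDs will differ per system):

# master: keeps whatever open-files limit it inherited at startup
grep "Max open files" /proc/8389/limits

# workers: child processes of the master; worker_rlimit_nofile applies to these
for pid in $(ps -o pid= --ppid 8389); do
    grep "Max open files" /proc/$pid/limits
done

If worker_rlimit_nofile 20480 is in effect, the workers should report 20480 while the master still shows the 1024/4096 it inherited at startup.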

hilou 09-19-2016 03:45 AM

@jpollard

Thank you for your comment, but I have actually configured my nginx to use epoll, so I just want to know why the master's max open files limit is still 1024 and how I can increase it.

Thanks
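
For anyone landing here with the same question: the master process simply keeps the RLIMIT_NOFILE it inherited from whatever launched it, and worker_rlimit_nofile only touches the workers. With a SysV-style setup, one common approach is to raise the limit inside the init script right before nginx is started. A sketch, assuming a custom /etc/init.d/nginx script (the path and the 65535 value are examples, not taken from this thread):

# /etc/init.d/nginx (excerpt, sketch)
start() {
    # raise the soft and hard open-files limits for this shell;
    # the nginx master and its workers inherit them at startup
    ulimit -n 65535
    /usr/local/nginx/sbin/nginx
}

After a full stop and start (a plain reload keeps the old master process and therefore its old limit), /proc/<new master pid>/limits should show the raised value.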

