How to determine the appropriate value for fs.file-max
Are you sure this isn't a per-user limits issue rather than fs.file-max? I had the same thing with an application: the user was limited to, for example, 1024 open files and needed more because of sockets and so on.
For example, Java applications like Tomcat or JBoss sometimes need more open files; by default every user is limited to 1024, so raising this to, say, 4096 usually does the trick.
Otherwise, I don't think fs.file-max has anything to do with your Asterisk application; the user that runs it needs higher limits.
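To illustrate (a rough sketch using the standard procfs paths, nothing specific to your box):

```shell
# Per-user/per-process soft limit on open file descriptors --
# this is what usually triggers "Too many open files"
ulimit -n

# System-wide ceiling on open file handles; on most systems this is
# already in the hundreds of thousands and is rarely the bottleneck
cat /proc/sys/fs/file-max

# Raise the soft limit for the current shell, up to the hard limit
ulimit -Sn "$(ulimit -Hn)"
```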
Distribution: Centos 7 x86_64 , Rocky Linux 8 (aarch64)
Posts: 196
Original Poster
aoa,
Problem still persists.
I have increased the limit and even rebooted the machine.
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
max nice (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 32758
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 32758
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Entered the following entry
* soft nofile 4096
* hard nofile 4096
in
/etc/security/limits.conf
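One caveat worth checking (a common gotcha, not something confirmed about your setup): limits.conf is applied by PAM at login, so a daemon started from an init script at boot may never pick up the new values even after a reboot. You can see which limit is actually in force for a process:

```shell
# Show the open-file limit in force for a given process.
# /proc/self is the shell running this command; substitute the Asterisk
# PID (e.g. /proc/2620/limits) to inspect the daemon instead.
grep -i "max open files" /proc/self/limits
```

If the daemon's line still says 1024, the limits.conf change never reached it.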
Now the picture is as below.
Asterisk is running as the root user:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COM
2620 root 15 0 106m 37m 4564 S 32 1.9 12:53.70 asterisk
#ls -l /proc/2620/fd/ |wc -l
1023
#lsof -u root |wc -l
4001
Still facing the error:
Feb 19 15:16:46 NOTICE[2624] manager.c: Accept returned -1: Too many open files
Feb 19 15:16:46 NOTICE[2624] manager.c: Accept returned -1: Too many open files
Feb 19 15:16:46 NOTICE[2624] manager.c: Accept returned -1: Too many open files
One thing I suspect is that I have set the open-file limit to 4096,
and the previous command #lsof -u root |wc -l
4000
indicates that about 4000 files are open by the root user.
What maximum limit can I set for ulimit -n? I mean, which parameters (memory, filesystem inodes, etc.) need to be considered before setting its value too HIGH?
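For what it's worth: the kernel sizes fs.file-max from available RAM at boot (roughly one handle per 10 KB of memory), so the system-wide ceiling is usually generous, and the per-user nofile limit is the one to tune. Raising nofile mainly costs a small amount of kernel memory per open file, so values like 65536 are common on modern systems. A quick way to compare allocated handles against the ceiling:

```shell
# file-nr prints three numbers: allocated handles, free handles, and the
# maximum (the third number equals fs.file-max)
cat /proc/sys/fs/file-nr

# Current hard limit on open files for this shell -- the soft limit
# (ulimit -n) can be raised up to this value without root
ulimit -Hn
```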