Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Sometimes when I run bash scripts, they hang for a long time and then output many errors, starting with
/bin/bash: /bin/randomjoke: /bin/bash: bad interpreter: Too many open files in system
/bin/bash: cannot make pipes for command substitution: Too many open files in system
/bin/bash: cannot make pipes for command substitution: Too many open files in system
/bin/bash: cannot make pipes for command substitution: Too many open files in system
and then the script runs quite oddly.
What does "Too many open files in system" mean?
How can I make sure that doesn't happen?
It sounds like a program has spawned too many times. Check whether you have tons and tons of some specific service or program loaded. I have had this happen to me, with Samba (and bad config files!) being the usual culprit. I just ran killall smbd and killall nmbd, and then the resources were freed up.
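If you want to hunt for a runaway service yourself, here is a quick sketch (my own, not something from this thread) that counts running processes per command name; a program that has spawned out of control will float to the top:

```shell
# Count processes per command name and show the five worst offenders.
# "comm=" prints just the command name with no header line.
ps -e -o comm= | sort | uniq -c | sort -rn | head -n 5
```

If one name dominates with hundreds of entries, that's your candidate for killall.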
Great, now I can't even ssh in. And that pc doesn't have a monitor and keyboard, so I'm going to have to reboot.
Is there a way to remove the limit for spawning processes, after I do that?
You're on your own if you try this (I haven't done it) but the maximum number of open files is in the file
/proc/sys/fs/file-max
I believe that you could increase it by:
echo "65536" > /proc/sys/fs/file-max
Only root can do that, and it will not survive a reboot. Put it in one of the startup scripts if you want it to be persistent. But I'd say that's just removing the symptoms of something else that's broken in your system.
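To see where you stand before raising the limit, the kernel also exposes /proc/sys/fs/file-nr, which reports current usage against that ceiling. A small sketch (note these files count open file handles, not processes; the privileged commands are shown commented out since they need root):

```shell
#!/bin/sh
# Current file-handle usage: allocated, allocated-but-unused, and the ceiling.
cat /proc/sys/fs/file-nr
# The ceiling on its own (same value as the third field above):
cat /proc/sys/fs/file-max

# Raising the ceiling needs root and is lost on reboot:
#   echo 65536 > /proc/sys/fs/file-max
# The usual persistent route is a line in /etc/sysctl.conf:
#   fs.file-max = 65536
# reloaded with: sysctl -p
```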
What exactly is wrong?
Is it too many processes, or too many open files?
Or maybe it's something completely different?
Could I cause it to display a warning message when that number is near?
Could it be a bug in Red Hat 7.3?
I get it on both my machines; both run the same things, except one has pppd and pppoe and the other has X. X, I noticed, starts about 30+ processes, while pppd and pppoe are just two processes. Clearly, the X machine has many more processes running.
Besides, file-max is 19659 on one of them, and I definitely don't have 19659 processes on any machine. It's 8192 on the other, which doesn't have more than 50-100.
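On the warning-message question: nothing in this thread provides one, but a minimal watchdog sketch might look like the following (the 90% threshold is my own choice). It reads /proc/sys/fs/file-nr, which holds three numbers: handles allocated, handles allocated but unused, and the ceiling (the same value as file-max):

```shell
#!/bin/sh
# Hypothetical watchdog: warn when more than THRESHOLD percent of the
# system-wide file-handle ceiling is in use.
THRESHOLD=90

# Fields: allocated, allocated-but-unused, maximum.
read -r alloc free max < /proc/sys/fs/file-nr
used=$((alloc - free))
pct=$((used * 100 / max))

if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "WARNING: $used of $max file handles in use (${pct}%)" >&2
fi
```

Run from cron every few minutes, something like this could mail root before the "Too many open files in system" errors start. Note it watches file handles, which is what this error is about, not process counts.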