Linux - Newbie This Linux forum is for members that are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-to's this is the place!


Old 12-25-2015, 05:09 PM   #1
LQ Newbie
Registered: Dec 2015
Posts: 1

Linux: SSH not working after increasing Open Files / File Descriptors (FD)


I am not able to SSH to my EC2 instance (Linux-based) after executing the following command. (Before that, I was able to SSH to the server.)

# vim /etc/sysctl.conf
I updated the file-max number to 4000000:
fs.file-max = 4000000

Then I logged out of my EC2 instance and tried to SSH again, but with no luck.

I have tried ssh with the -v option, and all I get is [debug1: Exit status 254].
Note: This is the only change I have made.
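Before overriding fs.file-max it is worth checking the kernel's auto-computed default and actual usage; a read-only sketch (the /proc files below are standard on Linux):

```shell
# Inspect the kernel's current limit and how many handles are in use
cat /proc/sys/fs/file-max    # current maximum number of file handles
cat /proc/sys/fs/file-nr     # three fields: allocated, unused, maximum

# After editing /etc/sysctl.conf, apply it to the running kernel
# (as root) so a bad value surfaces before you log out:
#   sysctl -p
```

Applying with `sysctl -p` in the same session would have shown the effect of the new value before closing the working SSH connection.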
Old 12-25-2015, 06:11 PM   #2
Senior Member
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,714

How much memory does the system have?

It takes memory to allocate space for 4,000,000 possible files.

My 8GB system only allocates 815,155 possible files. You very possibly overloaded the system, and it can't allocate the memory a process needs to be able to open the necessary files.

You really shouldn't need to change the value.

By the way, the modern kernel already uses a rule of thumb to set file-max based on the amount of memory in the system; from fs/file_table.c in the 2.6 kernel:

 * One file with associated inode and dcache is very roughly 1K.
 * Per default don't use more than 10% of our memory for files.

n = (mempages * (PAGE_SIZE / 1024)) / 10;
files_stat.max_files = max_t(unsigned long, n, NR_FILE);

and files_stat.max_files is the setting of fs.file-max; this ends up being about 100 files for every 1 MB of RAM.
From that estimate, 4,000,000 files would need about 40GB just for the file table (4,000,000 / 100 = 40,000 MB). I would expect the system is hung.
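That arithmetic can be checked directly with shell arithmetic, using the ~100-files-per-MB rule of thumb from the kernel snippet above:

```shell
# Rough memory cost of a given fs.file-max value, assuming roughly
# 100 file slots per MB of RAM (~1K per file, 10% of memory)
file_max=4000000
mem_mb=$((file_max / 100))    # MB of RAM the file table would want
echo "fs.file-max=${file_max} needs roughly ${mem_mb} MB of RAM"
```

For fs.file-max = 4,000,000 that prints 40000 MB, i.e. about 40 GB, far beyond what a small EC2 instance has.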

If you can boot into single user mode you should be able to remove/comment out those lines you put in /etc/sysctl.conf to recover.
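A minimal sketch of that recovery step, assuming the fs.file-max line is the only change made (the function name here is hypothetical, just for illustration):

```shell
# Recovery sketch: comment out the fs.file-max override and re-apply
# the remaining settings. Run as root from single-user/rescue mode.
disable_file_max_override() {
    conf="$1"                              # e.g. /etc/sysctl.conf
    sed -i 's/^fs\.file-max/#&/' "$conf"   # comment the override out
    sysctl -p "$conf" 2>/dev/null || true  # reload what is left
}

# Usage (as root): disable_file_max_override /etc/sysctl.conf
```

On EC2 specifically, if single-user mode is not reachable, the same edit can be made by detaching the root volume and attaching it to a working instance.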

Last edited by jpollard; 12-25-2015 at 06:12 PM.

