Old 12-25-2015, 04:09 PM   #1
it.is.ismail@gmail.com
LQ Newbie
 
Registered: Dec 2015
Posts: 1

Linux: SSH not working after increasing Open Files / File Descriptors (FD)


Hi

I am not able to SSH into my EC2 instance (Linux-based) after executing the following command. (Before that, I was able to SSH to the server.)

# vim /etc/sysctl.conf
I updated the file-max value to 4000000:
fs.file-max = 4000000

Then I exited my EC2 instance and tried to SSH again, but with no luck.

I have tried ssh with the -v option, and all I get is [debug1: Exit status 254].
Note: This is the only change I have made.
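
For reference, a rough sketch of how such a change is usually inspected and applied (assuming standard sysctl/procfs tooling; the 4000000 value is the one from the post above):

# Inspect the current limit
cat /proc/sys/fs/file-max
sysctl fs.file-max

# Show allocated / unused / maximum file handles
cat /proc/sys/fs/file-nr

# Apply changes from /etc/sysctl.conf without rebooting
sysctl -p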
 
Old 12-25-2015, 05:11 PM   #2
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

How much memory does the system have?

It takes memory to allocate space for 4,000,000 possible files.

My 8GB system only allocates 815,155 possible files. You very possibly overloaded the system so that it can't allocate memory for processes to open the necessary files.

You really shouldn't need to change the value.

reference: http://stackoverflow.com/questions/6...g-to-my-own-se
Quote:
By the way, the modern kernel already uses a rule of thumb to set file-max based on the amount of memory in the system; from fs/file_table.c in the 2.6 kernel:

/*
* One file with associated inode and dcache is very roughly 1K.
* Per default don't use more than 10% of our memory for files.
*/

n = (mempages * (PAGE_SIZE / 1024)) / 10;
files_stat.max_files = max_t(unsigned long, n, NR_FILE);

and files_stat.max_files is the setting of fs.file-max; this ends up being about 100 files for every 1MB of RAM.
From that estimate, 4,000,000 files would need about 40GB just for the file table (4,000,000 / 100 = 40,000 MB). I would expect the system is hung.
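
That rule of thumb can be sanity-checked on a running system (a sketch; MemTotal in /proc/meminfo is reported in kB, so dividing by 10 approximates the kernel's default):

# Estimate the default fs.file-max (~10% of RAM at roughly 1K per file)
awk '/MemTotal/ {printf "estimated file-max: %d\n", $2 / 10}' /proc/meminfo

# Compare with the actual value
cat /proc/sys/fs/file-max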

If you can boot into single-user mode, you should be able to remove or comment out the line you added to /etc/sysctl.conf to recover.
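
Something like this should do it once you have a root shell in single-user or rescue mode (a sketch; the sed pattern assumes the line starts with fs.file-max):

# Comment out the offending setting
sed -i 's/^fs\.file-max/#&/' /etc/sysctl.conf

# Reload the remaining settings (or just reboot)
sysctl -p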

Last edited by jpollard; 12-25-2015 at 05:12 PM.
 
  

