LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
Linux - Security This forum is for all security related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.

Old 11-17-2010, 09:51 PM   #1
mckenzig
LQ Newbie
 
Registered: Nov 2010
Location: Australia
Distribution: Fedora
Posts: 3

Rep: Reputation: 0
server "hardening" - users accidentally locking cluster master node


All,

I run a compute cluster with only a few users. Occasionally a user will accidentally run a job on the master node that runs out of RAM, starts swapping, and then hangs the machine up for a while.

In /etc/security/limits.conf I have set memlock to 7.5GB (the master has 8GB RAM), and maybe that is what lets the machine come back rather than hanging completely?

Is this the right setting to physically limit a single user from asking for more RAM than the system has and bringing down the system? Should I set this to 2GB or so, or is there something else I can do?

Any help much appreciated.
 
Old 11-18-2010, 09:13 AM   #2
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora
Posts: 3,935
Blog Entries: 5

Rep: Reputation: Disabled
What OS? (Fedora?) What version?

If you don't want users chewing up more than 2GB RAM, then setting memlock to 7.5GB seems like a bad idea. Maybe you should post your limits.conf.
 
1 member found this post helpful.
Old 11-18-2010, 05:03 PM   #3
mckenzig
LQ Newbie
 
Registered: Nov 2010
Location: Australia
Distribution: Fedora
Posts: 3

Original Poster
Rep: Reputation: 0
Thanks for the reply

The system is running 64-bit Fedora Core 6. (Quite old now, but I leave the OS alone until the hardware is replaced.)

The 7.5GB was there for another reason, but I was asking whether it helps prevent a total meltdown of the system.

My secondary question was whether the same method, but with the 7.5GB reduced to, say, 2GB, would prevent a user from starting an application and then reading in massive data files way beyond the memory on the system (either not understanding what they are doing, or accidentally starting a local job that was intended to run on many cluster nodes).

Here are my current entries in limits.conf

* soft memlock 7500000
* hard memlock 7500000
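For reference, memlock (RLIMIT_MEMLOCK) only caps pages a process explicitly locks with mlock(); ordinary heap allocations are governed by the separate "as" item (RLIMIT_AS, total address space). A sketch of what entries capping each user's address space at roughly 2GB could look like (values are in KB; the 2097152 figure is an illustrative choice, not from this thread):

```
# /etc/security/limits.conf -- sketch, values in KB
# memlock caps mlock()ed pages; "as" caps total virtual memory,
# which is what stops an ordinary runaway allocation.
*    soft    memlock    2097152
*    hard    memlock    2097152
*    soft    as         2097152
*    hard    as         2097152
```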

Thanks again

Greg.
 
Old 11-18-2010, 05:24 PM   #4
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora
Posts: 3,935
Blog Entries: 5

Rep: Reputation: Disabled
Quote:
Originally Posted by mckenzig
The 7.5GB was there for another reason, but I was asking whether it helps prevent a total meltdown of the system.
Probably. Thing is: if there are enough processes eating up memory that are not affected by pam_limits(8) - i.e. daemons and other processes that don't have PAM support baked in or aren't configured correctly - you could still technically bring down your system. It sounds like your main concern is rogue/errant users, though.

Quote:
Originally Posted by mckenzig
My secondary question was whether the same method, but with the 7.5GB reduced to, say, 2GB, would prevent a user from starting an application and then reading in massive data files way beyond the memory on the system (either not understanding what they are doing, or accidentally starting a local job that was intended to run on many cluster nodes).
Sure, assuming we're still talking about your system with 8GB RAM. 2GB would be a more sane restriction, IMO, depending on what the users are legitimately supposed to be doing. (As with all things: test this out to be sure pam_limits(8) is behaving the way you'd expect after making any changes.)
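As a quick way to see this kind of limit in action before touching limits.conf, here is a minimal sketch (not from the thread; Linux-specific, since RLIMIT_AS is not reliably enforced on every platform) that sets RLIMIT_AS, the limit behind the "as" item in limits.conf, in a child Python process and shows an oversized allocation failing:

```python
import subprocess
import sys

# Child process: cap its own address space at 1 GB, then try to grab 2 GB.
child_code = """
import resource
limit = 1024 * 1024 * 1024  # 1 GB address-space cap
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
try:
    buf = bytearray(2 * limit)  # a 2 GB allocation should now fail
    print("allocation succeeded")
except MemoryError:
    print("allocation blocked by RLIMIT_AS")
"""

result = subprocess.run([sys.executable, "-c", child_code],
                        capture_output=True, text=True)
print(result.stdout.strip())  # -> allocation blocked by RLIMIT_AS
```

The same principle applies when pam_limits sets the limit at login instead of the process setting it on itself.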
 
1 member found this post helpful.
Old 11-18-2010, 05:38 PM   #5
mckenzig
LQ Newbie
 
Registered: Nov 2010
Location: Australia
Distribution: Fedora
Posts: 3

Original Poster
Rep: Reputation: 0
Thanks again

My main concern was whether the memlock setting was doing what I thought; from what you say, it seems that it is. Setting an appropriate value is the trick.

Too low could be overly restrictive of legitimate usage, while too high still allows the system to lock up once other accumulated memory usage adds to the troublemaking process.

I think I might go with 4GB next time I can squeeze in a reboot, and do some testing to check that it stops me from using more than that.
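One low-risk check after the reboot, sketched below: log in as a regular user and confirm what pam_limits actually applied to the session (bash's ulimit builtin reports these values in KB):

```shell
# From a fresh login shell, show the limits pam_limits applied.
# ulimit -l = max locked memory (memlock); ulimit -v = max address space (as).
echo "max locked memory (KB): $(ulimit -l)"
echo "max address space (KB): $(ulimit -v)"
```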

Cheers,

Greg.
 
  

