LinuxQuestions.org
Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-to's this is the place!

Old 06-30-2013, 05:53 AM   #1
sunzeal
LQ Newbie
 
Registered: Sep 2012
Posts: 26

Rep: Reputation: Disabled
How to prevent an internal DoS from a for-loop script?


Hi

I was wondering: if I give someone SSH access to a server, and that person writes a simple script like "for i=0 to i=n, echo DOS" and executes it, it will echo indefinitely. What are the solutions to prevent that?

PS: It's a local server that tons of people access; iptables and SELinux are disabled.

At night an rsync job syncs a huge amount of data, so I cannot use a script that kills processes based on load and CPU.
 
Old 06-30-2013, 05:49 PM   #2
gilead
Senior Member
 
Registered: Dec 2005
Location: Brisbane, Australia
Distribution: Slackware64 14.0
Posts: 4,125

Rep: Reputation: 164
You can set limits for users - have you tried experimenting with this? Type "man limits" in a shell and see if it does what you want. For example, you can limit the priority that user tasks run at.
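As a concrete sketch, the limits gilead mentions live in /etc/security/limits.conf, applied by pam_limits at login (see limits.conf(5)). The user/group names and values below are purely illustrative, not tuned recommendations:

```
# /etc/security/limits.conf -- illustrative entries only
# <domain>   <type>  <item>    <value>
alice        hard    nproc     100    # max processes for user alice
@students    hard    nproc     50     # max processes for group "students"
@students    hard    priority  10     # run members' tasks at nice 10
*            hard    cpu       60     # max 60 CPU *minutes* per process
```

Changes take effect at the next login, since PAM applies them when the session starts.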
 
Old 06-30-2013, 11:25 PM   #3
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,683

Rep: Reputation: 1259
Quote:
Originally Posted by sunzeal
Hi

I was wondering: if I give someone SSH access to a server, and that person writes a simple script like "for i=0 to i=n, echo DOS" and executes it, it will echo indefinitely. What are the solutions to prevent that?

PS: It's a local server that tons of people access; iptables and SELinux are disabled.

At night an rsync job syncs a huge amount of data, so I cannot use a script that kills processes based on load and CPU.
Not much of a DoS. All it really ties up is the SSH connection; once the connection is broken, so is the script. The reason it doesn't tie up much is that flushing the I/O buffers to output the lines of "DOS" takes a long time, and a lot of other activity can run during that time.

A real DoS attack is a fork bomb (a loop that puts copies of itself into the background, each running the same loop). It is much harder to kill because new processes can be spawned faster than they can be killed. The cure is proper limits for your system.

Another DoS attack is a program that gradually allocates memory and then fills it with nonzero values. It has to be done relatively slowly to avoid being killed by the kernel's OOM killer, and it is a double whammy when combined with a fork bomb. Again, the cure is proper limits for your system.

Another DoS attack is to use up all the free space in /tmp. Note: on Fedora systems you have to do it in two places. 1: /tmp (a tmpfs mount); this causes problems for everybody, though it doesn't kill the system. 2: /run (also a tmpfs mount); filling this one causes severe problems, as system daemons also use it to record PID files, user authorization keys, and so on. Once /run is full, no user can log in. The really nasty part is that nearly all evidence is destroyed when the system is rebooted. There is no fix for either of these except not using tmpfs; with a real disk you can prevent it by establishing user quotas (tmpfs doesn't support quotas).

Combine the memory eating with the tmpfs space eating and the system will likely deadlock.
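To make the fork-bomb point concrete: the classic bash version is shown below as a comment only, since it should never be run on a shared machine, along with the per-user process limit (RLIMIT_NPROC) that defuses it. Note that root bypasses this limit, which matches what the OP observes later in the thread.

```shell
# The classic bash fork bomb: a function ":" that pipes into itself and
# backgrounds the result. Shown as a comment only; do NOT run it:
#
#   :(){ :|:& };:
#
# The defense is RLIMIT_NPROC ("nproc" in limits.conf, "ulimit -u" in a
# shell). Lowering the soft limit in a throwaway subshell is harmless:
bash -c 'ulimit -S -u 1234; ulimit -S -u'
# prints: 1234
#
# Once an unprivileged user's process count hits that cap, every further
# fork() fails with EAGAIN: the "Resource temporarily unavailable" error.
```

The limit counts all of the user's processes system-wide, not just the current shell's children, which is why a fork bomb stalls out quickly once the cap is reached.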
 
2 members found this post helpful.
Old 07-03-2013, 02:09 AM   #4
sunzeal
LQ Newbie
 
Registered: Sep 2012
Posts: 26

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by jpollard
Not much of a DoS. All it really ties up is the SSH connection; once the connection is broken, so is the script. The reason it doesn't tie up much is that flushing the I/O buffers to output the lines of "DOS" takes a long time, and a lot of other activity can run during that time.

A real DoS attack is a fork bomb (a loop that puts copies of itself into the background, each running the same loop). It is much harder to kill because new processes can be spawned faster than they can be killed. The cure is proper limits for your system.

Another DoS attack is a program that gradually allocates memory and then fills it with nonzero values. It has to be done relatively slowly to avoid being killed by the kernel's OOM killer, and it is a double whammy when combined with a fork bomb. Again, the cure is proper limits for your system.

Another DoS attack is to use up all the free space in /tmp. Note: on Fedora systems you have to do it in two places. 1: /tmp (a tmpfs mount); this causes problems for everybody, though it doesn't kill the system. 2: /run (also a tmpfs mount); filling this one causes severe problems, as system daemons also use it to record PID files, user authorization keys, and so on. Once /run is full, no user can log in. The really nasty part is that nearly all evidence is destroyed when the system is rebooted. There is no fix for either of these except not using tmpfs; with a real disk you can prevent it by establishing user quotas (tmpfs doesn't support quotas).

Combine the memory eating with the tmpfs space eating and the system will likely deadlock.

Thank you so much. I did not know about fork bombs; I researched them and tested one.

I have a few questions:

When I run a fork bomb as an ordinary user, after some time I get the following error:

"Resource temporarily unavailable"

And in top (from the root account), that user uses around 1% of CPU after running the fork bomb.

From the root account, when I run the fork bomb, it crashes the server instantly.

Now the problem is that I have not yet set any limits in /etc/security/limits.conf, yet limits seem to be set automatically for ordinary users, so the fork bomb is not effective.

However, when I try a for loop that echoes 999999 times, the same user was able to use up to 90% of CPU and memory for a few seconds, after which the kernel killed it automatically.

So echo seems much more effective from the user's point of view when the user wants to mount a simple DoS against the server!

I am a newbie, so any help would be appreciated.
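For reference, the loop being described has the shape below, and the resource that bounds it is CPU time (RLIMIT_CPU, the "cpu" item in limits.conf), since a tight shell loop is pure CPU. A tame, bounded version:

```shell
# A 5-iteration version of the echo loop in question:
for i in $(seq 1 5); do echo DOS; done

# RLIMIT_CPU kills a runaway loop with SIGXCPU once it has consumed the
# allowed CPU seconds, no matter how long the loop "wants" to run:
bash -c 'ulimit -t 1; while :; do :; done'
echo "loop exit status: $?"   # nonzero: the shell was killed by a signal
```

Wall-clock time doesn't matter here; only CPU seconds consumed count against the limit, so an idle SSH session is unaffected.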
 
Old 07-03-2013, 02:44 AM   #5
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.9, Centos 7.3
Posts: 17,347

Rep: Reputation: 2365
So you've discovered there are default limits for ordinary users.
What do you need help with specifically?
 
Old 07-03-2013, 03:29 AM   #6
sunzeal
LQ Newbie
 
Registered: Sep 2012
Posts: 26

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by chrism01
So you've discovered there are default limits for ordinary users.
What do you need help with specifically?
When I use the fork bomb, the user only reaches about 0.7% CPU before I get "Resource temporarily unavailable"; however, when I use a bash script (a for loop from 1 to 999999 echoing something), it reaches up to 99% CPU in just a few seconds.

I was wondering how the default limits for ordinary users are set, if a fork bomb cannot get more resources while a simple bash script can easily reach 99% CPU.

And what are the ways to prevent this?

Is /etc/security/limits.conf the best way to limit the processes and CPU that a user or a group can use?
 
Old 07-03-2013, 05:17 AM   #7
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,683

Rep: Reputation: 1259
Quote:
Originally Posted by sunzeal
When I use the fork bomb, the user only reaches about 0.7% CPU before I get "Resource temporarily unavailable"; however, when I use a bash script (a for loop from 1 to 999999 echoing something), it reaches up to 99% CPU in just a few seconds.
That is because the shell performing the loop takes a good bit of CPU time just running the loop; the fork itself takes very little.
Quote:
I was wondering how the default limits for ordinary users are set, if a fork bomb cannot get more resources while a simple bash script can easily reach 99% CPU.
Users get CPU time if they haven't reached their limit OR nothing else needs the CPU.

Forks only get limited when the user's limit is reached OR the process table is full (the resource is "temporarily unavailable" in both cases because it becomes available again as soon as a process terminates).

Quote:
And what are the ways to prevent this ?

Is /etc/security/limits.conf the best way to limit the processes and CPU that a user or a group can use?
For the time being, yes.
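To see what those automatic defaults actually are for a given login, ask the shell itself; the most relevant ones map onto limits.conf items as noted below:

```shell
# Show every soft limit in effect for the current shell:
ulimit -S -a

# The ones most relevant to this thread, with their limits.conf names:
ulimit -S -u   # max user processes    -> "nproc"
ulimit -S -t   # CPU time in seconds   -> "cpu" (limits.conf uses minutes)
ulimit -S -v   # address space in KiB  -> "as"
```

Each prints either a number or "unlimited"; "unlimited" for nproc is rare, since most distros ship a kernel or PAM default there.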

Last edited by jpollard; 07-03-2013 at 05:23 AM.
 
1 member found this post helpful.
Old 07-03-2013, 08:32 PM   #8
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.9, Centos 7.3
Posts: 17,347

Rep: Reputation: 2365
You can use limits as above. If you're really paranoid, also create a chroot jail for them and put them in there as well.
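For SSH users specifically, OpenSSH has a built-in way to build that jail: the ChrootDirectory option in sshd_config (see sshd_config(5)). The group name "jailed" below is a made-up example:

```
# /etc/ssh/sshd_config -- confine members of a hypothetical "jailed" group
Match Group jailed
    ChrootDirectory /srv/jail/%u   # each path component must be root-owned
                                   # and not writable by group or others
    ForceCommand internal-sftp     # sftp-only needs no binaries in the jail;
                                   # a full shell requires a populated chroot
    AllowTcpForwarding no
    X11Forwarding no
```

Note the ownership requirement: sshd refuses the chroot if any component of the path is writable by non-root users.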
 
1 member found this post helpful.
Old 07-05-2013, 12:58 AM   #9
sunzeal
LQ Newbie
 
Registered: Sep 2012
Posts: 26

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by chrism01
You can use limits as above. If you're really paranoid, also create a chroot jail for them and put them in there as well.
Oh, well, yes, it looks like that is the best we can do. On a web server, we could also make use of CloudLinux for better efficiency and control of resources.
 
Old 07-05-2013, 05:55 AM   #10
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,683

Rep: Reputation: 1259
Quote:
Originally Posted by sunzeal
Oh, well, yes, it looks like that is the best we can do. On a web server, we could also make use of CloudLinux for better efficiency and control of resources.
That depends. A CGI can always set its own hard/soft limits. Of course, most CGI applications don't implement that, so the entire server can be given hard/soft limits in the Apache startup script.
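As a sketch of that last point: on Debian-style systems /etc/apache2/envvars is sourced by the Apache startup script, so ulimit calls placed there are inherited by httpd and every CGI it forks. The file path differs on other distros, and the values here are illustrative:

```shell
# Added to /etc/apache2/envvars (or the httpd init script elsewhere):
ulimit -S -t 30       # soft limit: 30 CPU seconds per process
ulimit -H -t 60       # hard limit: a CGI may raise its own soft limit,
                      # but never above this
ulimit -S -v 524288   # soft limit: 512 MiB of address space per process
```

An unprivileged process can lower either limit and raise its soft limit back up to the hard one, but only root can raise a hard limit, which is why setting both in the startup script is an effective fence.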
 
  

