How can I prevent an internal DoS from a for-loop script?
You can set limits for users - have you tried experimenting with this? Type man limits.conf in a shell and see if it does what you want. For example, you can limit the priority that user tasks run at.
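For example, the soft limits a user session runs under can be inspected with the shell's `ulimit` builtin (a front end to `setrlimit(2)`); the values you see are whatever your system's defaults happen to be:

```shell
# Inspect the resource limits in effect for the current shell/user.
# `man limits.conf` documents how pam_limits sets these per user or group.
ulimit -a    # all soft limits (CPU time, max processes, open files, ...)
ulimit -u    # just the max number of processes this user may have
```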
I was wondering: if I give someone SSH access to a server, and that person writes a simple script like "for i=0 to n, echo DOS" and executes it, it will echo indefinitely. What are the solutions to prevent that?
PS: It's a local server that tons of people access; iptables and SELinux are disabled.
At night rsync runs, syncing a huge amount of data, so I cannot use a script that kills processes based on load and CPU.
Not much of a DoS. All it really ties up is the SSH connection; once the connection is broken, so is the script. The reason it doesn't tie up much is that it takes a long time to flush the I/O buffers to output the lines of "DOS". During that time a lot of other activity can be done.
A real DoS attack is a fork bomb (a loop that just puts other processes running the same loop in the background). It is much harder to kill, because new processes can be spawned faster than they can be killed. The cure is proper limits for your system.
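As a sketch only - the commented-out line below is the classic bash fork bomb, and you should not run it outside a throwaway VM - the per-user process limit that defuses it can be set with `ulimit -u` (the value 64 is illustrative):

```shell
# The classic bash fork bomb -- DO NOT run this on a machine you care about:
#   :(){ :|:& };:
# Mitigation sketch: cap RLIMIT_NPROC for a shell and everything it spawns.
# With the cap in place, the bomb dies with "Resource temporarily unavailable"
# instead of taking the machine down.
bash -c 'ulimit -u 64; ulimit -u'   # lower the soft limit, then read it back
```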
Another DoS attack is a program that gradually allocates memory, then fills it with nonzero values. It has to be done relatively slowly to avoid being killed by the kernel OOM killer. It's a double whammy when combined with a fork bomb. Again, the cure is proper limits for your system.
Another DoS attack is to use up all the free space in /tmp. Note: on Fedora systems you have to do it in two places: 1) /tmp (a tmpfs mount) - this causes problems for everybody, though it doesn't kill the system; 2) /run (also a tmpfs mount) - filling this will cause severe problems, as the system daemons also use it to record PID files, user authorization keys, and so on. Once /run is filled, no user can log in. The really nasty part is that nearly all evidence is destroyed when the system gets rebooted. There is no fix for either of these except not using tmpfs; if you use a real disk, you can prevent it by establishing user quotas (tmpfs doesn't support quotas).
Combine the memory eating with the tmpfs space eating and the system will likely deadlock.
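A partial mitigation, if you stay with tmpfs, is to cap the size of the mount so one user cannot consume all of RAM. Something like the following /etc/fstab entry would do it (the size is illustrative, and note that /run is usually mounted by the init system rather than via fstab):

```
# /etc/fstab -- cap the tmpfs /tmp so it cannot eat all memory (size illustrative)
tmpfs   /tmp   tmpfs   size=1G,nosuid,nodev   0 0
```

A size cap does not stop one user from filling /tmp and annoying everyone else - only real quotas do that - but it does keep the filesystem from exhausting system memory.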
Thank you so much - I did not know about fork bombs; I researched them and tested one.
I had a few questions:
When I run a fork bomb as an ordinary user, after some time I get the following error:
"Resource temporarily unavailable"
And in top (from the root account), that user uses around 1% of the CPU (after running the fork bomb).
From the root account, when I run the fork bomb, it crashes the server instantly.
Now the problem is that I have not yet set any limits in /etc/security/limits.conf, yet limits seem to be set automatically for ordinary users, so the fork bomb is not effective.
However, when I try a for loop (echo, 999999 times), the same user was able to use up to 90% of CPU and memory for a few seconds, and after that the kernel kills it automatically.
So echo seems much more effective from the user's point of view when the user wants to send a simple DoS to the server!
So you've discovered there are default limits for ordinary users.
What do you need help with specifically?
When I use the fork bomb, the user only reaches about 0.7% CPU and then I get "Resource temporarily unavailable"; however, when I use a bash script (a for loop from 1 to 999999 that echoes something), it reaches up to 99% CPU in just a few seconds.
I was wondering how the default limits for ordinary users are set, if a fork bomb cannot get more resources while a simple bash script can easily reach 99% CPU.
And what are the ways to prevent this?
Is /etc/security/limits.conf the best way to limit the processes and CPU that a user or a group can use?
Quote:
When I use the fork bomb, the user only reaches about 0.7% CPU and then I get "Resource temporarily unavailable"; however, when I use a bash script (a for loop from 1 to 999999 that echoes something), it reaches up to 99% CPU in just a few seconds.
That is because the shell performing the loop takes a good bit of CPU time just doing the loop. The fork alone takes very little.
Quote:
I was wondering how the default limits for ordinary users are set, if a fork bomb cannot get more resources while a simple bash script can easily reach 99% CPU.
Users get CPU time if they haven't reached their limit OR nothing else needs the CPU.
Forks only get limited when the user's limit is reached OR the process table is full (the resource is "temporarily unavailable" in both cases, because the resource becomes available again as soon as a process terminates).
Quote:
And what are the ways to prevent this?
Is /etc/security/limits.conf the best way to limit the processes and CPU that a user or a group can use?
You can use limits as above. If you're really paranoid, also create a chroot jail for them and put them in there as well.
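To answer the limits.conf question concretely, entries in /etc/security/limits.conf look like the following (the user name, group name, and values here are purely illustrative):

```
# /etc/security/limits.conf -- illustrative entries
# <domain>    <type>   <item>   <value>
alice         hard     nproc    100    # max processes for user alice
@students     hard     nproc    50     # max processes for anyone in group students
@students     hard     cpu      10     # max CPU time per process, in minutes
*             hard     core     0      # no core dumps for anyone
```

Limits set here are applied by pam_limits at login, so they only take effect for sessions started after the change.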
Oh, well, yes - it looks like that is the best we can do. On a web server, we can make use of CloudLinux for better efficiency and control of resources.
Quote:
Oh, well, yes - it looks like that is the best we can do. On a web server, we can make use of CloudLinux for better efficiency and control of resources.
That depends. A CGI can always set its hard/soft limits itself. Of course, most CGI applications don't implement that, so the entire server can be given hard/soft limits in the Apache startup script.