Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Puppy
Puppy This forum is for the discussion of Puppy Linux.

Old 02-18-2016, 08:54 PM   #1
Fixit7
Senior Member
 
Registered: Mar 2014
Location: El Lago, Texas
Distribution: Ubuntu_Mate 16.04
Posts: 1,374

Rep: Reputation: 169
Ulimit to prevent fork bomb


I used ulimit -u 100 to limit processes to 100.

But it does not work: this fork bomb still locks up my system.

Does ulimit not work?


Quote:
#!/bin/bash
#
# Linux Puppy 6.3.0 SiegeWorks 2016 A.P.K.
#
# fork bomb - be careful !!

:(){    # define a function named ":"
:|:&    # the function calls itself twice, piped together, in the background
};:     # close the definition, then call it

Last edited by Fixit7; 02-18-2016 at 08:56 PM.
 
Old 02-20-2016, 04:29 PM   #2
John VV
LQ Muse
 
Registered: Aug 2005
Location: A2 area Mi.
Posts: 17,624

Rep: Reputation: 2651
How did you set that to 100?

As I recall, the built-in default is 1024.
 
Old 02-20-2016, 06:22 PM   #3
Fixit7
Senior Member
 
Registered: Mar 2014
Location: El Lago, Texas
Distribution: Ubuntu_Mate 16.04
Posts: 1,374

Original Poster
Rep: Reputation: 169
ulimit -u 100
 
Old 02-20-2016, 06:30 PM   #4
jamison20000e
Senior Member
 
Registered: Nov 2005
Location: ...uncanny valley... infinity\1975; (randomly born:) Milwaukee, WI, US( + travel,) Earth&Mars (I wish,) END BORDER$!◣◢┌∩┐ Fe26-E,e...
Distribution: any GPL that work on freest-HW; has been KDE, CLI, Novena-SBC but open.. http://goo.gl/NqgqJx &c ;-)
Posts: 4,888
Blog Entries: 2

Rep: Reputation: 1567
Code:
man ulimit
Quote:
DESCRIPTION
Warning: This routine is obsolete. Use getrlimit(2), setrlimit(2), and sysconf(3) instead. For the shell command ulimit(), see bash(1).
http://unix.stackexchange.com/questi...y-error-rhel-6
http://blog.infizeal.com/2013/04/def...fork-bomb.html
 
Old 02-20-2016, 08:29 PM   #5
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Fork bombs are tricky... especially recursive ones.

What "ulimit -u 100" does is limit the PROCESS from creating more than 100 new processes. The problem you are seeing is that each new process also gets the same limit - so each of those can fork up to 100 new processes... (and it repeats.... thus locking up the system).

I think what you want can be done using the nproc quota of /etc/security/limits.conf for the given problem user.

Last edited by jpollard; 02-20-2016 at 08:31 PM.
 
Old 02-20-2016, 09:28 PM   #6
Fixit7
Senior Member
 
Registered: Mar 2014
Location: El Lago, Texas
Distribution: Ubuntu_Mate 16.04
Posts: 1,374

Original Poster
Rep: Reputation: 169
Quote:
Originally Posted by jpollard View Post
Fork bombs are tricky... especially recursive ones.

What "ulimit -u 100" does is limit the PROCESS from creating more than 100 new processes. The problem you are seeing is that each new process also gets the same limit - so each of those can fork up to 100 new processes... (and it repeats.... thus locking up the system).

I think what you want can be done using the nproc quota of /etc/security/limits.conf for the given problem user.
Could you give more details?
 
Old 02-21-2016, 06:25 AM   #7
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by Fixit7 View Post
Could you give more details?
The basic difference is setting hard limits vs setting soft limits. The ulimit command only sets soft limits. You might try using "ulimit -H -u 100".

The following references describe using the /etc/security/limits.conf file, the fork bomb you used, and how to limit it.

http://gerardnico.com/wiki/linux/limits.conf
http://www.cyberciti.biz/faq/underst...ash-fork-bomb/
http://www.cyberciti.biz/tips/linux-...r-process.html
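To illustrate the soft/hard distinction (a sketch in bash; the values 400/600/700 are arbitrary): the soft limit is the one fork() actually enforces and the process may raise it again later, while the hard limit is a ceiling that an unprivileged process can only lower.

```shell
bash -c '
  ulimit -S -u 400   # lower the soft limit (the one actually enforced)
  ulimit -H -u 600   # lower the hard limit: the ceiling for future soft values
  ulimit -S -u 700 2>/dev/null || echo "soft cannot exceed hard"
  ulimit -S -u 600   # raising soft back up to the hard limit is still allowed
  echo "soft=$(ulimit -S -u) hard=$(ulimit -H -u)"
'
# prints: soft cannot exceed hard
#         soft=600 hard=600
```

Note that either kind of limit applies only to that shell and its children; other login sessions are unaffected, which is one reason a `ulimit -u 100` typed in a single terminal protects so little.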
 
Old 02-24-2016, 01:55 AM   #8
Fixit7
Senior Member
 
Registered: Mar 2014
Location: El Lago, Texas
Distribution: Ubuntu_Mate 16.04
Posts: 1,374

Original Poster
Rep: Reputation: 169
It does not work.

It is a shame, as the fork bomb would crash most Linux distros.

Windows has no defense, but I would think that Linux would have one?

Last edited by Fixit7; 02-24-2016 at 01:57 AM.
 
Old 02-24-2016, 06:06 AM   #9
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by Fixit7 View Post
It does not work.

It is a shame, as the fork bomb would crash most Linux distros.

Windows has no defense, but I would think that Linux would have one?
:-)

Are you sure? What was the error message and what was recorded in the system logs?

What fork bombs will do is lock a USER, not lock the system. When the user's quota runs out, the user is locked out. But a different user should be able to log in and use the remaining resources.

The problem is determining what the resources should be. The default Linux limitations are based on a single user on a workstation. I have seen really large servers (512 CPUs, with 512GB of memory) brought to a deadlock by a fork bomb. Simple investigation showed that each user was allowed 74,000 processes...with an unlimited amount of stack space. The system only had 6GB of swap. So yes, it happens.

How to fix it? It is not simple, as it involves a number of different parameters:

how much memory is allowed per process (stack and heap mostly, but sometimes bss limits too).

how many processes at one time (remember, processes fork all the time, and that COULD lead to duplicating the memory allocation each time. The problem is forks don't usually duplicate all the memory. This part is called "over subscribing memory" and can deadlock the system too.)

How many users... and remember to include each of the system services running as a user too. If you don't, you tend to overestimate the amount of available memory.

How much swap space to have. After multiplying the number of allowed processes times the number of users, multiply by the amount of memory allocated... Now you know the absolute memory required without oversubscription. Deciding on how MUCH over subscription to allow is hard. It is usually guessed at. Multi-threaded applications don't duplicate memory, but they can surprise you because each thread gets a separate stack, and that can grow really fast.

It is rather hard to lock a system with too many processes (even with a fork bomb) as what starts running out is the available memory+swap.

I have a dual quad, with 8GB of memory and 16GB swap - but I locked it up running Povray. Not by too many processes, but by using up all of main memory and most of swap.

Causing an out of memory lockup is SUPPOSED to be caught by the OOM signal. Unfortunately, it can't guarantee catching them (unless they are really really obvious). Things that use up memory slowly don't signal what is using too much memory - and the system can run out of memory and deadlock anyway. Paging activity takes time...and if all processes are page faulting at about the same rate, none show up as deserving to be aborted.

The other issue depends on the CPU. The active processes have to be handled by a CPU - so how many CPUs do you need for a given number of active processes? Even 100 active processes can overload a single CPU, so a fork bomb capped at 100 processes can still bog one down. It isn't a problem if you have 512 processors though.

General handling calls for a different method. Does the fork bomb impact a different user? This is one of the things that cgroups were created for, but to use them requires having more than one user. When there is only one user, the only separation is between the system processes and that single user. The only way to actually test this is to attempt to log in as a different user on the system that appears overloaded. This alternate user will get a separate quota. You can think of it as the first user getting half of the system (not accurate, but it is a concept of what happens); the other half belongs to the system services. If the system services aren't using all of their half, then the user gets whatever is left. When a different user logs in, the new allocation becomes 1/3 of the system, the system services are given 1/3, and the first user is cut to 1/3. If the other two users aren't using all of their quota, then the new user gets 1/3 plus whatever the others aren't using...

So you get to check how (and what) the cgroup quota controls are configured to allow.
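As a sketch of that cgroup approach (requires root and a kernel with the "pids" cgroup controller; the mount path follows the common cgroup-v1 layout, and the group name "limited" and the value 100 are placeholders, so details will vary by distro and kernel):

```shell
# create a cgroup that may hold at most 100 tasks
mkdir /sys/fs/cgroup/pids/limited
echo 100 > /sys/fs/cgroup/pids/limited/pids.max
# move the current shell into it; everything it forks is counted
echo $$ > /sys/fs/cgroup/pids/limited/cgroup.procs
# a fork bomb started from this shell now gets EAGAIN once the group
# reaches 100 processes, while the rest of the system keeps running
```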
 
Old 02-24-2016, 07:44 AM   #10
Fixit7
Senior Member
 
Registered: Mar 2014
Location: El Lago, Texas
Distribution: Ubuntu_Mate 16.04
Posts: 1,374

Original Poster
Rep: Reputation: 169
Xorg.0.log did not show anything. And there was no error message.

I have to reboot twice to get back into Puppy.

Code:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 23491
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 23491

On a previous Puppy version, the limits.conf would prevent a fork bomb from crashing the system.

But it no longer works. :-(


Quote:
To limit user process just add user name or group or all users to /etc/security/limits.conf file and impose process limitations.
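A sketch of what such an entry might look like (the user name "guest", the group "users", and the value 200 are placeholders; the columns are domain, type, item, value, and the file only takes effect where pam_limits is applied at login):

```
# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
guest        hard     nproc    200    # hard cap on guest's total processes
@users       hard     nproc    200    # or cap everyone in group "users"
```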
 
Old 02-24-2016, 10:02 AM   #11
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Are you sure you have the PAM limits module (pam_limits)? I did run across some discussion about it not being present on Puppy, but nothing concrete. I guess it is possible it was removed.
 
Old 02-25-2016, 07:04 AM   #12
Fixit7
Senior Member
 
Registered: Mar 2014
Location: El Lago, Texas
Distribution: Ubuntu_Mate 16.04
Posts: 1,374

Original Poster
Rep: Reputation: 169Reputation: 169
I don't think my version of Puppy has a PAM limits module.
 