[SOLVED] How to limit system resources for application?
Slackware: This forum is for the discussion of Slackware Linux.
Hi, sorry for the probably incorrect grammar - yeah, it's painful for native English speakers. Generally, all my web browsers consume an extreme amount of system resources: memory, processor. My feeling is that if I let them, they would fork infinitely. First of all I want to cut down the number of forks: not thousands - ten should be allowed, no more. I think it should be possible: Linux is POSIX, and I simply can't imagine the admin of a mainframe being unable to set constraints on the execution environment - maybe it is per user, not per app. Maybe someone would be kind enough to explain these subtleties to me? I never had occasion to work as a mainframe admin.
These may not be what you are looking for, but you may want to investigate ulimit (bash) and nice. Linux probably has newer ways than these to restrict programs.
For example, type "ulimit -a" to see the current resource limits.
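As a sketch of how ulimit can wrap a single program (the numeric values below are illustrative, not recommendations): limits set in a subshell apply only to that subshell and whatever it launches, so the rest of your session is unaffected.

```shell
#!/bin/bash
# Launch a program with capped resources; the limits apply only to this
# subshell and its children. The numbers are illustrative.
(
    ulimit -u 100       # at most 100 processes/threads for this user
    ulimit -v 1048576   # cap virtual memory at 1 GiB (value is in KiB)
    ulimit -u           # print the effective process limit
    # exec your-browser-here   # hypothetical: replace with the real command
)
```

Note that a soft limit can always be lowered, but only root can raise it past the hard limit, so this kind of wrapper works fine for an ordinary user.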
Last edited by TracyTiger; 02-22-2018 at 10:36 AM.
I would mark this thread as solved, but maybe there are others who want to share their opinion on the subject. There are already 5 thumbs up (excluding myself), so it seems to me the topic is important, though not easy. One question I have always hesitated about is (nearly) real-time scheduling for keyboard input - to make the three-finger salute instant, e.g.
The best way to do this now is probably to use cgroups. You will want to look into cgcreate and cgexec. I am not sure why you are especially worried about forks; unless something is doing a DoS like a fork bomb, the number of children and/or threads a process creates isn't really a significant resource consumer on today's systems, unless the app is doing something entirely unreasonable.
Control groups will give you pretty fine-grained control over the memory and CPU time a process and its children can use. You can limit forking as well using the PID cgroup, but I suspect you are more likely to break your application than meaningfully address your performance issues with PID constraints. Have a look at the man pages for cgexec and cgcreate. You can tune the properties for the group using the /sys sysfs filesystem. You will probably be most interested in the CPU, CPUSET, and MEMORY sets. There are daemons and rules you can use as well, but so far I have found that a little shell wrapper around whatever binary I want to launch works every bit as well.
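As a hedged sketch of what such a wrapper might look like (this assumes the libcgroup tools are installed, a cgroup-v1 style hierarchy mounted under /sys/fs/cgroup, and root privileges; the group name "browser" and all the limit values are arbitrary examples):

```shell
#!/bin/sh
# Create a control group, set limits via sysfs, then launch the app in it.
# Must be run as root; paths assume a cgroup-v1 hierarchy under /sys/fs/cgroup.
cgcreate -g cpu,memory,pids:/browser

# Cap memory at 400 MB for the group.
echo $((400 * 1024 * 1024)) > /sys/fs/cgroup/memory/browser/memory.limit_in_bytes

# Halve the CPU weight relative to the default of 1024.
echo 512 > /sys/fs/cgroup/cpu/browser/cpu.shares

# Allow at most 10 tasks in the group (the "ten forks" idea from the question).
echo 10 > /sys/fs/cgroup/pids/browser/pids.max

# Run the browser inside the group.
cgexec -g cpu,memory,pids:browser seamonkey
```

Be aware that pids.max counts threads as well as processes, so 10 is almost certainly too low for any modern browser; expect to raise it until the application stops failing to start.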
Cynical and pessimistic, I think your web experience will come to a grinding halt when you impose resource limits.
There was a time when an OS had to be fast (and therefore written in C). Nowadays it seems complete OSes are implemented in web pages in a scripting language and have to be executed by the web browser's interpreter.
So we are back to the days of GW-BASIC, but this time with every bit of dynamic graphic eye-puke you can imagine in a web page, so as to bring everything to the speed of running molasses.
For me: I have a Dell Inspiron 5759 with a 6th-generation dual-core i7 processor at 3.1 GHz, and CPU usage regularly peaks at 100% on all cores when web browsing. The insanity of 2018.
So please share your experiences here after you applied restrictions. I hope I am wrong.
Quote:
I am not sure why you are especially worried about forks; unless something is doing a DoS like a fork bomb, the number of children and/or threads a process creates isn't really a significant resource consumer on today's systems, unless the app is doing something entirely unreasonable.
I blame myself for not being precise enough, maybe because I don't quite understand the difference between a child process and a thread. I guess they both look the same in the output of the htop command. But what I can say is that Seamonkey is creating 41 (!) threads, consuming all of the ca. 500 MB of physical memory.
You could try a leaner browser like Palemoon, but do I understand correctly that 512 MB of RAM is all you have? If so, in 2018 that's a fairly severe limitation.
Quote:
Originally Posted by igadoter
I blame myself was not enough precise, maybe because I don't quite well understand difference between child process and thread.
The main difference between a child process and a thread is that a child process runs in its own context. That is, it has its own memory space and cannot crash any other process. A thread is (also) a scheduling entity controlled by the kernel (it can run concurrently with other threads, can be put to sleep), but it shares the memory space of the process that created it. Retrieving a process list of your system (e.g. "ps -eLf", which shows one line per thread) will list threads and processes alike.
Although the difference is large from the point of view of process separation and stability, the function is the same. As a matter of fact, a web browser can equally well create threads or processes.
IIRC Chrome creates a process for each tab page and Google claims better stability.
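To see the distinction on your own system, you can count both for a given process with the standard procps tools (here the shell's own PID stands in for a browser's PID, which is an assumption for illustration):

```shell
#!/bin/bash
# Compare thread count vs child-process count for one PID.
pid=$$   # substitute a browser's PID, e.g. from: pgrep -o seamonkey

# NLWP = "number of light-weight processes", i.e. threads in the process.
threads=$(ps -o nlwp= -p "$pid" | tr -d ' ')

# Children are separate processes whose parent PID is $pid.
children=$(ps --ppid "$pid" -o pid= | wc -l)

echo "PID $pid: $threads thread(s), $children child process(es)"
```

Run against Seamonkey, the first number would be the 41 threads seen in htop, while the second counts genuinely separate processes with their own memory spaces.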
And on topic: you will not be able to run a fully functional heavyweight web browser in 512 MB.
The links I provided in my post have a short introductory section that will give you some explanation of what you're up against when trying to fine-tune the system resources allocated to a program. My advice is to read and learn more before you start to experiment; you might break a program's functionality if you don't understand all the details and implications of your actions.
On your web browser & limited amount of RAM issue, I second what was stated in this thread about the resource-hungry "modern browsers" and would suggest giving Chromium a try: https://en.wikipedia.org/wiki/Chromium_(web_browser) https://slackbuilds.org/repository/1...work/chromium/ http://www.slackware.com/~alien/slackbuilds/chromium/
It comes with Raspbian for the Raspberry Pi ARM boards, and I was able to run it with only one open tab (one page only) on a Raspberry Pi Zero (512 MB RAM). However, if you load a page filled with media content (animations, JavaScript & co.), then you might well go over the available 512 MB and hit swap (actually you don't even have the full 512 MB, because the Linux system / X server eats into it before you launch the browser).