Linux - General: This Linux forum is for general Linux questions and discussion.
If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
I know that it is possible to renice a process so that its CPU priority is reduced and it gets less CPU time when another program (one with a lower nice value, i.e. higher priority) needs it. But is it also possible to limit the percentage of CPU a process can use, no matter what nice value it has?
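For context, the renicing mentioned here looks like this (the PID is a placeholder):

```shell
# start a command at the lowest priority; unprivileged users may only
# raise niceness (lower priority), not reduce it
nice -n 19 sh -c 'echo running at nice 19'

# lower the priority of an already-running process (hypothetical PID)
# renice -n 19 -p 12345
```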
[alex@localhost alex]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4094
virtual memory (kbytes, -v) unlimited
[alex@localhost alex]$ help ulimit
ulimit: ulimit [-SHacdflmnpstuv] [limit]
Ulimit provides control over the resources available to processes
started by the shell, on systems that allow such control. If an
option is given, it is interpreted as follows:
-S use the `soft' resource limit
-H use the `hard' resource limit
-a all current limits are reported
-c the maximum size of core files created
-d the maximum size of a process's data segment
-f the maximum size of files created by the shell
-l the maximum size a process may lock into memory
-m the maximum resident set size
-n the maximum number of open file descriptors
-p the pipe buffer size
-s the maximum stack size
-t the maximum amount of cpu time in seconds
-u the maximum number of user processes
-v the size of virtual memory
If LIMIT is given, it is the new value of the specified resource;
the special LIMIT values `soft', `hard', and `unlimited' stand for
the current soft limit, the current hard limit, and no limit, respectively.
Otherwise, the current value of the specified resource is printed.
If no option is given, then -f is assumed. Values are in 1024-byte
increments, except for -t, which is in seconds, -p, which is in
increments of 512 bytes, and -u, which is an unscaled number of
processes.
That's as close to a man page as I have gotten, but none of those options seem to hint at CPU % limits, except for CPU time, and that's a total amount of time, not a share of the CPU. Or am I wrong? I haven't seen anything on the web (in all the non-useful man pages) about CPU % limitations, just a lot of stuff about core dumps.
Through all my searching and reading I have not been able to confirm that ulimit can limit the share (%) of the CPU a process or user can consume; it only limits the amount of CPU TIME a process can consume. But that is not what I am looking for. I would like to know if anybody knows whether it is possible AT ALL in Linux to allow one process a maximum of x% of my CPU. From the first two links above, it seems like it is NOT possible. Bummer.
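To illustrate the distinction: ulimit -t caps total CPU seconds, after which the process gets SIGXCPU, rather than throttling to a percentage. Setting it inside a subshell keeps the limit from sticking to your login shell:

```shell
# set a 10-second soft CPU-time limit in a subshell and read it back
( ulimit -S -t 10; ulimit -S -t )
```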
I run folding@home and would like it to run continuously, however limited to no more than 60% of my cpu - it already reniced itself to 19, but that still allows it to consume 100% of my cpu when nothing else is using it.
Please correct me if I am wrong with any of this. Thanks.
Seeing as how it's simply an automated re-nice program, I don't believe it has the ability to limit cpu usage to a percentage. Otherwise, you would have come across that option while searching through the nice documentation. I would bet, for something like this, you'd need to incorporate some new process tracking in the kernel's scheduler that is accessible through user space (either system calls or through the /proc filesystem).
One thing I'd like to ask though: why do you want to limit it to a flat 60% max? If the computer isn't doing anything useful, why not let it consume the whole load? If it was nice'ed to 19, then as soon as you move the mouse/type/whatever, the handlers for those events take over and folding@home gets the shaft so to speak. The only reason I could think would possibly be heat generation from the processor, but surely it's not that bad. I'm not saying trying to limit a process's CPU to a flat percentage is necessarily bad, but I just don't see the need for it (just my opinion).
Last edited by Dark_Helmet; 01-14-2005 at 05:26 PM.
OK, I thought it was 'and', but what was written was slightly confusing.
Originally posted by Dark_Helmet One thing I'd like to ask though: why do you want to limit it to a flat 60% max? If the computer isn't doing anything useful, why not let it consume the whole load? If it was nice'ed to 19, then as soon as you move the mouse/type/whatever, the handlers for those events take over and folding@home gets the shaft so to speak. The only reason I could think would possibly be heat generation from the processor, but surely it's not that bad.
Yeah, it's the heat. I have a laptop and it smoked a week or two ago; one of my guesses was overheating (the other, that it was a little too dusty). And after a while my laptop seems to lag to some degree with folding@home left on. Maybe it's the memory usage? Who knows.
Well, a poor man's approach might be to write an application that basically does nothing; it just sits idle. Kick it off along with folding@home, and tweak it until you get approximately 60%.
One idea might be to have a cron job that runs every minute and checks whether a trigger file exists; if it does, it runs an instance of the idle program. After so many seconds the program ends, and folding@home can resume. Nice the idle process to 18 and have it run for 24 seconds (40% of 1 minute). When you want the idle process to run, kick off a shell script that touches the trigger file, and just rm that same file when you want idling to stop. To be honest, though, I'm not sure which system call can be used to idle a process without allowing preemption. Maybe a good question for the programming forum...
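A minimal sketch of that cron idea, assuming a hypothetical trigger file /tmp/throttle-fah and a 24-second busy window (both names are illustrative, not from the thread):

```shell
#!/bin/sh
# Run from cron every minute. If the trigger file exists, busy-loop at
# nice 18 for $BUSY seconds, starving folding@home (which runs at nice 19).
TRIGGER=${TRIGGER:-/tmp/throttle-fah}   # touch to enable, rm to disable
BUSY=${BUSY:-24}                        # 24 of 60 seconds = 40% taken away

if [ -f "$TRIGGER" ]; then
    nice -n 18 sh -c "
        end=\$(( \$(date +%s) + $BUSY ))
        while [ \$(date +%s) -lt \$end ]; do :; done
    "
fi
```

Note the "idle" program here has to busy-loop rather than sleep: a sleeping process yields the CPU, so only an actively competing process at a better nice level actually takes time away from folding@home.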
It has been a few years, but maybe the following solution helps someone else:
PID_FOLDATHOME=12345   # the process id of the process to be
                       # limited to 60% CPU usage over 5 seconds
while true; do
    kill -s SIGSTOP "$PID_FOLDATHOME"
    sleep 2            # stopped for 2 of every 5 seconds
    kill -s SIGCONT "$PID_FOLDATHOME"
    sleep 3            # running for 3 of every 5 seconds
done
This alternates between 0% and up to 100% with a 40/60 pulse, resulting in at most 60% usage in the long run. If your heat sink is very small, your fan controller might still pick up the changes and annoy you with oscillating fan speed. If so, change it to sleep 1 + sleep 1 (for 50%) or sleep 1 + sleep 2 (for 67%); or maybe your sleep command supports fractional durations, in which case you can use sleep 0.4 + sleep 0.6 (don't use very small numbers like 0.04 + 0.06, though; that causes overhead).
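The pulse arithmetic generalizes to any target percentage and period; a quick sketch with illustrative variable names:

```shell
# compute SIGCONT/SIGSTOP durations for a 60% target over a 5-second period
PERIOD=5
TARGET=60
RUN=$(( PERIOD * TARGET / 100 ))   # seconds the process runs
STOP=$(( PERIOD - RUN ))           # seconds it stays stopped
echo "run=${RUN}s stop=${STOP}s"   # → run=3s stop=2s
```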
Last edited by jowagner; 11-13-2009 at 01:36 PM.
Reason: grammar and spelling
Cpulimit is a tool that limits the CPU usage of a process (expressed as a percentage, not as CPU time). It is useful for controlling batch jobs when you don't want them to eat too many CPU cycles. The goal is to prevent a process from running for more than a specified time ratio. It does not change the nice value or other scheduling-priority settings, but the real CPU usage. It is also able to adapt itself to the overall system load, dynamically and quickly.
The control of the CPU usage is done by sending SIGSTOP and SIGCONT POSIX signals to processes.
All the child processes and threads of the specified process share the same percentage of CPU.
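Typical invocations look like the following usage sketch (cpulimit must be installed; the PID and program name are placeholders):

```shell
# limit an existing process, found by PID, to 60% of one CPU
cpulimit -p 12345 -l 60

# or match a running program by its executable name
cpulimit -e fah_client -l 60
```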
Usually, you do want a process to use as much CPU time as is "fairly" available for it ... rather than, say, allow the CPU itself to sit idle. If there's nothing else going on, letting a process have 100% of the otherwise-wasted resource is a Good Thing.
The CPU is usually a "barely used" resource anyway.