
LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Limit CPU Usage per user + logging (https://www.linuxquestions.org/questions/linux-newbie-8/limit-cpu-usage-per-user-logging-724769/)

gerardosan 05-08-2009 07:31 PM

Limit CPU Usage per user + logging
 
I work for a webhosting company, and I'm researching how to stop some users from abusing the server's CPU and RAM. We are doing this with PAM and limits.conf, but we cannot set a CPU time limit under 1 minute; we want around 30 seconds. I tried ulimit, but it isn't easy to set up for a group of users or to generate logs.

And I don't want to limit "as is": I want to warn the user if they use more than 30 seconds of CPU, with a hard limit above that. So I need a tool that writes a log of some kind. Any idea where to start my search?

Regards

chrism01 05-10-2009 08:16 PM

From the man page for ulimit (it's a bash builtin):
Quote:

ulimit [-SHacdefilmnpqrstuvx [limit]]
Provides control over the resources available to the shell and
to processes started by it, on systems that allow such control.
The -H and -S options specify that the hard or soft limit is set
for the given resource. A hard limit cannot be increased once
it is set; a soft limit may be increased up to the value of the
hard limit. If neither -H nor -S is specified, both the soft
and hard limits are set. The value of limit can be a number in
the unit specified for the resource or one of the special values
hard, soft, or unlimited, which stand for the current hard
limit, the current soft limit, and no limit, respectively. If
limit is omitted, the current value of the soft limit of the
resource is printed, unless the -H option is given. When more
than one resource is specified, the limit name and unit are
printed before the value. Other options are interpreted as
follows:
-a All current limits are reported
-c The maximum size of core files created
-d The maximum size of a process’s data segment
-e The maximum scheduling priority ("nice")
-f The maximum size of files written by the shell and its
children
-i The maximum number of pending signals
-l The maximum size that may be locked into memory
-m The maximum resident set size (has no effect on Linux)
-n The maximum number of open file descriptors (most systems
do not allow this value to be set)
-p The pipe size in 512-byte blocks (this may not be set)
-q The maximum number of bytes in POSIX message queues
-r The maximum real-time scheduling priority
-s The maximum stack size
-t The maximum amount of cpu time in seconds
-u The maximum number of processes available to a single
user
-v The maximum amount of virtual memory available to the
shell
-x The maximum number of file locks

If limit is given, it is the new value of the specified resource
(the -a option is display only). If no option is given, then -f
is assumed. Values are in 1024-byte increments, except for -t,
which is in seconds, -p, which is in units of 512-byte blocks,
and -n and -u, which are unscaled values. The return status is
0 unless an invalid option or argument is supplied, or an error
occurs while setting a new limit.
Which says you can have hard AND soft limits, and the cpu limit (-t) is measured in seconds ...
That should get you started...
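The soft/hard split actually maps onto the warn-then-enforce behaviour asked for: when a process exceeds the soft CPU limit, the kernel sends it SIGXCPU (which a shell can trap and log), and at the hard limit it is killed outright. A minimal sketch — the 30/60-second values and the log path are illustrative assumptions:

```shell
#!/bin/bash
# Soft CPU limit triggers SIGXCPU (trap it to warn and log);
# the hard limit kills the process with no further warning.

ulimit -S -t 30    # soft limit: kernel sends SIGXCPU after 30s of CPU time
ulimit -H -t 60    # hard limit: process is killed after 60s of CPU time

# Log a warning instead of dying when the soft limit is hit
trap 'echo "$(date): $USER passed 30s of CPU time" >> /tmp/cpu-warn.log' XCPU

ulimit -S -t    # prints: 30
ulimit -H -t    # prints: 60
```

Note that a non-root shell can lower its hard limit but never raise it again, so the order of operations matters for anything run afterwards.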

gerardosan 05-11-2009 12:40 PM

"I tried ulimit, but it isn't easy to set up for a group of users or to generate logs."

Maybe I need to write a script that applies ulimit, setting the hard and soft limits user by user, and it would have to be re-run after each reboot instead of living in a config file like limits.conf.

That's why I was avoiding ulimit.

If you know of another alternative, it would be appreciated.
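For reference, the limits.conf route can target a whole group at once with the @group syntax, but its cpu item is measured in minutes, which is exactly why a 30-second limit can't be expressed there (the group name below is illustrative):

```
# /etc/security/limits.conf -- "cpu" is in MINUTES here,
# so 1 is the smallest non-zero limit that can be set
@webusers    soft    cpu    1
@webusers    hard    cpu    2
```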

Thanks chrism01

gerardosan 05-11-2009 01:19 PM

And I forgot: ulimit only limits processes started under a shell. The bigger problem is PHP, Perl, Apache, etc.
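That said, Apache itself exposes the same resource limits through config directives; they apply to processes forked by httpd (CGI scripts, suEXEC'd interpreters), not to the server workers themselves. A sketch with illustrative values:

```
# httpd.conf -- limits apply to processes forked by Apache (CGI etc.),
# not to the httpd children themselves; values are illustrative
RLimitCPU 30 60        # soft 30s, hard 60s of CPU per forked process
RLimitNPROC 20         # cap the number of processes spawned via httpd
```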

Regards

Tinkster 05-11-2009 02:46 PM

Sorry, but that's just silly ... how do you expect an apache service to
respond when you "tell it" that it's about to reach its limit of execution
time?

What is your objective on that machine, what are you trying to achieve?



Cheers,
Tink

gerardosan 05-12-2009 10:29 AM

Maybe I did not explain myself well!

We do shared hosting, which consists of many small web sites on the same server.

Statistically, when one client needs more resources the others do not, which is what makes this ecosystem work in theory.

In reality, some of these users upload "heavy visitor" content (sometimes illegal, sometimes not), or one of them simply reaches the front page of Digg and gets thousands of extra visitors. That translates into over-usage of the server's resources, destroying the theoretical ecosystem.

We want to control this over-usage by these clients, so that all users are not affected by a small percentage of big web sites.


If we use limits.conf or ulimit we can limit usage in a shell and in "username-friendly" applications like PHP (with suEXEC). But what about the other apps — Apache, MySQL, PostgreSQL, disk I/O, etc. — where the whole application runs under a single user (such as mysql), or where there is no expedient way to identify the abusing user?

So we are looking for a way to distribute resources wisely and avoid resource peaks, slowing down hungry processes and prioritizing the others.

Resources we want to control:

Apache
PHP/Perl
MySQL
PostgreSQL
Disk I/O

Any ideas?
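One low-tech way to sketch the "slow down hungry processes and log them" idea, independent of which service spawned them, is a cron-run watchdog over ps output. Everything here — the 30-second threshold, the renice value, using accumulated TIME as the abuse signal — is an illustrative assumption, not a vetted policy:

```shell
#!/bin/sh
# Watchdog sketch: report any process whose accumulated CPU time
# exceeds MAX_SECS, then lower its priority instead of killing it.
MAX_SECS=30

ps -eo pid=,user=,time=,comm= | awk -v max="$MAX_SECS" '
{
    # ps TIME looks like [[dd-]hh:]mm:ss -- convert to plain seconds
    n = split($3, t, /[-:]/)
    secs = t[n] + t[n-1] * 60
    if (n >= 3) secs += t[n-2] * 3600
    if (n == 4) secs += t[1] * 86400
    if (secs > max)
        printf "user=%s pid=%s cpu=%ss cmd=%s\n", $2, $1, secs, $4
}' | while read line; do
    echo "$(date '+%F %T') $line"          # redirect to a log file from cron
    pid=$(echo "$line" | sed 's/.*pid=\([0-9]*\).*/\1/')
    renice 10 -p "$pid" >/dev/null 2>&1    # slow the hog down, don't kill it
done
```

Reniced processes still run, so well-behaved sites feel no change during quiet periods; only during contention do the hogs lose out.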

chrism01 05-12-2009 07:38 PM

I think this is the page you're looking for: http://www.uno-code.com/?q=node/64

