Linux - Newbie. This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
I work for a webhosting company, and I'm doing some research into how to stop some users from abusing the server's CPU and RAM. We are doing this with PAM and limits.conf, but we are not able to set a CPU time limit under 1 minute; we want to set it to around 30 seconds. We tried ulimit, but it isn't easy to set up for a group of users, and it doesn't generate logs.
And I don't want to limit "as is": I want to warn the user if they use more than 30 seconds of CPU time, with a hard limit above that. So I need a tool that writes some kind of log. Any idea where to start my search?
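For context, the cpu item in limits.conf is measured in minutes, which is why we can't go below 1 minute there. Our entries look roughly like this (group name and values illustrative):

```
# /etc/security/limits.conf -- the cpu item is in MINUTES,
# so 1 is the smallest non-zero CPU-time limit expressible here
@webusers    soft    cpu    1
@webusers    hard    cpu    2
```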
ulimit [-SHacdefilmnpqrstuvx [limit]]
Provides control over the resources available to the shell and
to processes started by it, on systems that allow such control.
The -H and -S options specify that the hard or soft limit is set
for the given resource. A hard limit cannot be increased once
it is set; a soft limit may be increased up to the value of the
hard limit. If neither -H nor -S is specified, both the soft
and hard limits are set. The value of limit can be a number in
the unit specified for the resource or one of the special values
hard, soft, or unlimited, which stand for the current hard
limit, the current soft limit, and no limit, respectively. If
limit is omitted, the current value of the soft limit of the
resource is printed, unless the -H option is given. When more
than one resource is specified, the limit name and unit are
printed before the value. Other options are interpreted as follows:
-a All current limits are reported
-c The maximum size of core files created
-d The maximum size of a process’s data segment
-e The maximum scheduling priority ("nice")
-f The maximum size of files written by the shell and its children
-i The maximum number of pending signals
-l The maximum size that may be locked into memory
-m The maximum resident set size (has no effect on Linux)
-n The maximum number of open file descriptors (most systems
do not allow this value to be set)
-p The pipe size in 512-byte blocks (this may not be set)
-q The maximum number of bytes in POSIX message queues
-r The maximum real-time scheduling priority
-s The maximum stack size
-t The maximum amount of cpu time in seconds
-u The maximum number of processes available to a single user
-v The maximum amount of virtual memory available to the shell
-x The maximum number of file locks
If limit is given, it is the new value of the specified resource
(the -a option is display only). If no option is given, then -f
is assumed. Values are in 1024-byte increments, except for -t,
which is in seconds, -p, which is in units of 512-byte blocks,
and -n and -u, which are unscaled values. The return status is
0 unless an invalid option or argument is supplied, or an error
occurs while setting a new limit.
Which says you can have hard AND soft limits, and the cpu limit (-t) is measured in seconds ...
That should get you started...
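For example, in bash (the 30/60-second values are just for illustration — a process gets SIGXCPU at the soft limit, so your "warning" hook could live in a handler for that signal):

```shell
#!/bin/bash
# Set a hard CPU-time limit of 60 seconds, then a soft limit of 30.
# The soft limit can be raised again later, but only up to the hard limit.
ulimit -H -t 60   # hard limit: 60 s of CPU time
ulimit -S -t 30   # soft limit: 30 s (SIGXCPU is delivered here first)
echo "soft: $(ulimit -S -t)"
echo "hard: $(ulimit -H -t)"
```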
We do shared hosting, which consists of many small web sites on the same server.
Statistically, when one client needs more resources the others don't, which is what makes this ecosystem work — in theory.
In reality, some of these users upload "heavy visitor" content (sometimes illegal, sometimes not), or one of them simply reaches the front page of Digg and gets thousands of extra visitors; that translates into overuse of the server's resources, destroying the theoretical ecosystem.
We want to control this overuse by these clients, so that all users aren't affected by this small percentage of big web sites.
If we use limits.conf or ulimit, we can set limits in a shell and in "username"-friendly applications like PHP (with suEXEC). But what about the other apps, like Apache, MySQL, PostgreSQL, disk I/O, etc., where the whole application runs under the same user (e.g. mysql), or where there is no expedient way to identify the abusing user?
So we are looking for a way to distribute resources wisely to avoid resource peaks, slowing down hungry processes and prioritizing the others.
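Since ulimit can't cover daemons that run everything under one user, one possible starting point is a periodic monitoring script layered on ps. This is only a report-only sketch — the thresholds, the output format, and where enforcement would happen are all assumptions, not a finished tool:

```shell
#!/bin/sh
# Report-only watchdog sketch: flag processes whose accumulated CPU time
# crosses a soft threshold, and mark those past a hard threshold.
SOFT=${SOFT:-30}   # seconds: warn above this (illustrative)
HARD=${HARD:-60}   # seconds: enforce above this (illustrative)

# Convert ps's cputime format ([dd-]hh:mm:ss) to plain seconds.
to_seconds() {
    echo "$1" | awk -F'[-:]' \
        '{ if (NF == 4) print $1*86400 + $2*3600 + $3*60 + $4;
           else         print $1*3600 + $2*60 + $3 }'
}

ps -eo pid=,user=,cputime=,comm= | while read -r pid user cputime comm; do
    secs=$(to_seconds "$cputime")
    if [ "$secs" -ge "$HARD" ]; then
        # enforcement (kill -XCPU, renice, ...) would go here, per policy
        echo "HARD $user pid=$pid ($comm) cpu=${secs}s"
    elif [ "$secs" -ge "$SOFT" ]; then
        echo "WARN $user pid=$pid ($comm) cpu=${secs}s"
    fi
done
```

Run from cron and redirect stdout to a log file, and you get the warning trail you asked about; the script never touches the flagged processes itself.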