LinuxQuestions.org (/questions/)
-   Linux - Newbie (http://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   how to set memory limit for egrep command (http://www.linuxquestions.org/questions/linux-newbie-8/how-to-set-memory-limit-for-egrep-command-4175411626/)

masuch 06-15-2012 10:43 AM

how to set memory limit for egrep command
 
Hi,

I have a problem with an egrep command that runs inside an application I did not write.

Even though I have 32 GB of RAM, it is apparently not enough for egrep, because it wants more: about 60 GB of virtual memory (so it starts swapping, the computer becomes unusable, no reaction to the keyboard or extremely slow ...).

Is it possible to set a maximum memory limit for the egrep command?
(Bonus: or for any other command run in a terminal, not for GUI applications.)

Thank you very much for any kind of solution. I am willing to try anything.
Kind Regards,
Martin

Skaperen 06-15-2012 03:48 PM

In bash there is the "ulimit -v kkkkkk" command. Execute it in a subshell to affect one command:

( ulimit -v 4194304 && egrep ... )

This example sets the virtual memory limit to 4GB.
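
For repeated use, the subshell trick can be wrapped in a small shell function (the name `limited` is just an illustration, not a standard command):

```shell
# Wrap any command in a subshell with a virtual-memory cap.
# Usage: limited <limit-in-KB> <command> [args...]
# "limited" is an illustrative name, not a standard utility.
limited() {
    kb="$1"; shift
    ( ulimit -v "$kb" && "$@" )
}

# Cap egrep at 4 GB of virtual memory, as in the example above:
# limited 4194304 egrep 'pattern' somefile
```

Because the `ulimit` runs inside the parentheses, the calling shell's own limit is untouched.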

masuch 06-16-2012 02:14 PM

Quote:

Originally Posted by Skaperen (Post 4704306)
In bash there is the "ulimit -v kkkkkk" command. Execute it in a subshell to affect one command:

( ulimit -v 4194304 && egrep ... )

This example sets the virtual memory limit to 4GB.


- Thanks a lot for this very useful hint; I am going to implement it in many scripts run from crontab.
- (Concerning this problem, I have already created a crontab script that runs every 60 seconds and kills any process consuming more than 30 GB of RAM.)
- One thing: is it possible to set such a limit when I have no way to execute the command in a subshell?
(Would it be possible to use /etc/security/limits.conf in some way for this purpose, or something else?)

thank you,
regards,
Martin

masuch 06-16-2012 03:53 PM

Hi,

I have temporarily changed ulimit -v in a terminal to:

virtual memory (kbytes, -v) 1000

and then ran these commands:

$ sudo locate aaaa
Segmentation fault
$ free -m
Segmentation fault
$ ls
bash: /usr/local/bin/ls: Argument list too long

Pretty ugly. What is happening?

chrism01 06-18-2012 09:41 PM

Pretty much what it says ... ~1 MB is not enough for those commands to run with those parameters ... ;)
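
This is also why it is worth applying such a tiny limit only inside a subshell, as suggested earlier in the thread: the command in the subshell may still fail, but the interactive shell keeps its own limit (a sketch):

```shell
# The limit applies only inside the parentheses; the command there
# may fail (as free/locate did above), but the parent shell survives.
( ulimit -v 1000 && free -m ) || echo "free failed under the 1000 KB cap"

# The parent shell's limit is unchanged:
ulimit -v
```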

masuch 06-19-2012 05:16 AM

(
:-) Yes, indeed. (I could not imagine why 50000*1024 would not be enough for some simple commands?
Where is the time when I had to be happy with 64 kbytes for the code segment, stack segment, data segment and extra segment all together :-)
)

(I have just moved all my activities to my cloned OS on a RAID0 (4 partitions) on an SSD RevoDrive; it is extremely fast.)
1. I can already see it helped to decrease the latency when this problem suddenly happened.
2. I have a daemon script that kills processes with extremely high memory consumption.
3. I changed my swap file to 1 GB, moved it to the SSD RevoDrive, and set swappiness=1.
4. More playing with ulimit for my daemons, especially for live rsyncing/cloning of my OS.
5. Set up a 7 GB tmpfs for /tmp in memory.
I hope this will help if it happens again in the future.
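
The watchdog mentioned in point 2 could be sketched roughly like this (the 30 GB threshold is from the post above; everything else, including killing by resident set size as reported by ps, is illustrative):

```shell
#!/bin/sh
# Illustrative memory watchdog: kill any process whose resident set
# size (RSS) exceeds LIMIT_KB. Run from cron, or loop with sleep 60.
LIMIT_KB=$((30 * 1024 * 1024))   # 30 GB, in KB, since ps reports RSS in KB

ps -eo pid=,rss= | while read -r pid rss; do
    if [ "$rss" -gt "$LIMIT_KB" ]; then
        echo "killing PID $pid (RSS ${rss} KB)"
        kill -9 "$pid"
    fi
done
```

In practice one would exclude system processes and one's own PID before using kill -9; this sketch kills anything over the threshold.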

One question: I still do not know which application is starting this extremely memory-hungry egrep; top and htop show just egrep.
I do not know how to find it.

going to mark it as solved.

thank you for your help.

Does something like ulimit exist for swap consumption per process? (ulimit does not have it, if I did not overlook something.)

kind regards,
M.

chrism01 06-19-2012 08:46 PM

As a short-term fix, look up the ulimit options here: http://linux.die.net/man/1/bash, and also /etc/security/limits.conf.
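For the limits.conf route, an entry caps the address space of each process a given user starts, applied by pam_limits at login; it cannot target a single command, though. A sketch (the username and the 32 GB value are illustrative):

```
# /etc/security/limits.conf (illustrative entry; values are in KB)
# Cap each of user martin's processes at ~32 GB of address space:
martin    hard    as    33554432
```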
In the long term, track down the cause; start with top and/or ps and check the owner and the PID and PPID values.
See http://linux.die.net/man/1/pstree
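
For example, once top or htop shows the runaway egrep's PID, ps can reveal its parent; here the current shell's PID ($$) stands in for the real one so the sketch is runnable:

```shell
# Stand-in PID: in the real case, use the egrep PID from top/htop.
pid=$$

# Show the process together with its parent PID:
ps -o pid,ppid,comm -p "$pid"

# Look up what the parent actually is:
ps -o pid,comm,args -p "$(ps -o ppid= -p "$pid")"

# pstree -ps "$pid"   # full ancestry chain with PIDs, if pstree is installed
```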

