[SOLVED] how to set memory limit for egrep command
I have a problem with an egrep command that runs inside an application I did not write.
Even though I have 32 GB of RAM, that is apparently not enough for egrep: it wants about 60 GB of virtual memory, so the machine starts swapping and becomes unusable (no reaction to the keyboard, or extremely slow).
Is it possible to set a maximum memory limit for egrep?
(bonus: or for any other command run in a terminal, not for GUI applications)
Thank you very much for any kind of solution. I am willing to try anything.
Kind Regards,
Martin
In bash there is the "ulimit -v kkkkkk" builtin. Execute it in a subshell so that it affects only the one command:
( ulimit -v 4194304 && egrep ... )
This example sets the virtual memory limit to 4 GB (ulimit -v takes the value in kilobytes, so 4194304 KB = 4 GB).
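A minimal sketch of why the subshell matters: the limit applies only inside the parentheses, so the parent shell (and anything you run afterwards) keeps its original limit.

```shell
# Limit set inside a subshell: only the subshell (and the egrep it
# runs) is capped; the parent shell's limit is untouched.
before=$(ulimit -v)
inner=$( (ulimit -v 1048576; ulimit -v) )   # 1 GB cap inside the ( )
after=$(ulimit -v)
echo "inside subshell: $inner KB, parent before/after: $before/$after"
```

When egrep then tries to allocate past the cap, the allocation fails and it exits with an error instead of driving the machine into swap.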
Thanks a lot for this very useful hint; I am going to use it in many scripts run from crontab.
(Regarding this problem, I have already created a crontab script that runs every 60 seconds and kills any process that consumes more than 30 GB of RAM.)
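Such a watchdog could be sketched roughly like this (a hypothetical dry run, not the actual script; swap echo for `kill -TERM` to really terminate anything):

```shell
# Dry-run sketch: report any process whose resident set size exceeds
# a threshold. ps reports RSS in KB, so 30 GB = 30*1024*1024 KB.
threshold_kb=$((30 * 1024 * 1024))
ps -eo pid=,rss= | while read -r pid rss; do
  if [ "$rss" -gt "$threshold_kb" ]; then
    echo "would kill PID $pid (RSS ${rss} KB)"   # replace with: kill -TERM "$pid"
  fi
done
```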
One more thing: is it possible to set such a limit when I have no way to run the command in a subshell?
(Could /etc/security/limits.conf be used for this purpose in some way, or something else?)
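For the no-subshell case, limits.conf can cap the address space per user or group at login time via pam_limits. A sketch (the username and value are assumptions; "as" is the address-space limit in KB):

```
# /etc/security/limits.conf
# cap the address space of user "martin" at ~48 GB (value in KB)
martin  hard  as  50331648
```

This applies to every process the user starts in a new PAM session, not just egrep, so it is a much blunter tool than the subshell ulimit.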
(:-) Yes indeed. Though I cannot imagine why 50000*1024 would not be enough for some simple commands.
Where is the time when I had to be happy with 64 kbytes for the code segment, stack segment, data segment and extra segment all together :-) )
(I have just moved all my activities to my cloned OS on a RAID0 of 4 partitions on an SSD RevoDrive; it is really extremely fast.)
1. So I can see it helped to decrease the latency when this suddenly happened.
2. I have a daemon script that kills processes with extremely high memory consumption.
3. I changed my swap file to 1 GB, moved it to the SSD RevoDrive, and set swappiness=1.
4. More playing with ulimit for my daemons, especially for live rsyncing/cloning of my OS.
5. Set up a 7 GB tmpfs for /tmp in memory.
Hope this helps in the future if it happens again.
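Steps 3 and 5 could look roughly like this in configuration (a sketch only; sizes match the values above, the file paths are the usual ones):

```
# /etc/fstab: 7 GB tmpfs mounted on /tmp
tmpfs  /tmp  tmpfs  size=7G,mode=1777  0  0

# /etc/sysctl.d/99-swappiness.conf: prefer dropping cache over swapping
vm.swappiness = 1
```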
One open question: I still do not know which application starts this extremely memory-hungry egrep; top and htop show just "egrep".
I do not know how to find it.
Going to mark it as solved.
Thank you for your help.
Does something like ulimit exist for swap consumption per process? (ulimit does not have it, unless I overlooked something.)
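As far as I know ulimit indeed has no per-process swap knob, but on systems with cgroup v2 the memory controller does (memory.swap.max). A root-only sketch; the cgroup name and limits are assumptions:

```shell
# cgroup v2 sketch: cap RAM at 4 GB and forbid swap for one job.
# Needs root and a mounted cgroup-v2 hierarchy, hence the guard.
if [ "$(id -u)" -eq 0 ] && [ -f /sys/fs/cgroup/cgroup.controllers ]; then
  mkdir -p /sys/fs/cgroup/egrepjail
  echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/egrepjail/memory.max
  echo 0 > /sys/fs/cgroup/egrepjail/memory.swap.max
  echo $$ > /sys/fs/cgroup/egrepjail/cgroup.procs
  # any command started from this shell is now capped, e.g.:
  # egrep 'pattern' bigfile
fi
```

When the group hits memory.max and swap is forbidden, the kernel OOM-kills the offender instead of letting it swap the whole machine to death.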
As a short-term fix, look up the ulimit options here: http://linux.die.net/man/1/bash and also /etc/security/limits.conf.
In the long term, track down the cause; start with top and/or ps and check the owner and the PID and PPID values.
See http://linux.die.net/man/1/pstree
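Finding the parent could be sketched like this while the egrep is running (the PPID of the egrep process points at whatever spawned it):

```shell
# For each running egrep, look up its parent via the PPID column.
for pid in $(pgrep -x egrep); do
  ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
  echo "egrep PID $pid was started by:"
  ps -o pid=,comm=,args= -p "$ppid"
done
```

Alternatively, `pstree -p` shows the whole ancestry tree at once, so the egrep appears indented under its parent.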