Hello,
I am looking for a way to limit the memory a Python script and its children can use (in my case, 5 GB).
The script is launched as ./script.sh, which contains various conditions, one of which launches the Python script:
runtime/bin/python3 "$SCRIPT_DIR/$SCRIPT" --dist_type bundle_linux64 "$@"
First I tried:
Code:
prlimit --verbose --memlock=100 ./script.sh
New MEMLOCK limit for pid 1449761: <100:100>
but, predictably, it does nothing:
Code:
me 1449761 0.0 0.0 9720 3344 pts/0 S+ 10:57 0:00 /bin/bash ./script.sh
while the resulting Python process, already eating 3 GB of memory, looks like:
Code:
me 1449763 61.0 7.0 3447140 1149744 pts/0 Dl+ 10:57 9:55 runtime/bin/python3 /home/me/apps/appname/core/start.py --dist_type bundle_linux64
So I am guessing I would have to edit that command inside the bash script to something like:
Code:
prlimit --verbose --memlock=5000000 runtime/bin/python3 "$SCRIPT_DIR/$SCRIPT" --dist_type bundle_linux64 "$@"
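An alternative to wrapping that single command, assuming script.sh is plain bash: set the limit once at the top of the script with the ulimit builtin, which is inherited by every child the shell spawns (a sketch, untested against my real script; note ulimit -v takes KiB, not bytes):

```shell
#!/bin/bash
# Sketch: ulimit -v caps the virtual address space (RLIMIT_AS) of this
# shell and everything it launches; the value is in KiB.
ulimit -v $((5 * 1024 * 1024))   # 5 GiB expressed in KiB

# ...rest of script.sh unchanged...
runtime/bin/python3 "$SCRIPT_DIR/$SCRIPT" --dist_type bundle_linux64 "$@"
```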
prlimit also has other parameters, per its manual page:
Quote:
-l, --memlock[=limits]
Maximum locked-in-memory address space
-v, --as[=limits]
Address space limit.
Which prlimit command would you suggest for a roughly 5 GB memory limit? Or would you suggest a different command to control it? I am OK with the script being killed when it exceeds that memory (I know about the possibility of data loss/damage).
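From the manual excerpt above, my understanding is that --memlock only caps memory locked with mlock(), which a normal Python process barely uses, while --as caps the total address space, so my current guess would be something like the following (a sketch; note that --as counts virtual memory, which can sit well above the RSS figures ps shows):

```shell
# Sketch: cap total address space (RLIMIT_AS) with --as; prlimit takes
# bytes, and children inherit the limit. For the real script this would be:
#   prlimit --as=$((5 * 1024 * 1024 * 1024)) ./script.sh
# Self-contained check that the mechanism works: under a 100 MiB cap,
# a 200 MiB allocation in a child python3 fails.
prlimit --as=$((100 * 1024 * 1024)) \
    python3 -c "bytearray(200 * 1024 * 1024)" 2>/dev/null \
    && echo "allocation succeeded" || echo "allocation blocked"
```

For Python, hitting RLIMIT_AS typically surfaces as a MemoryError rather than a clean kill, so the process may die with a traceback instead of a signal.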
PS: some people also use this script to kill a process when it uses too much memory:
https://github.com/pshved/timeout. My recent attempt with it looks like:
Code:
cd /home/me/apps/appname/
wget https://raw.githubusercontent.com/pshved/timeout/master/timeout
chmod +x timeout
./timeout -m 5000000 ./script.sh
After a few days of runtime I pressed Ctrl+C and saw:
SIGNAL CPU 31130.78 MEM 3533244 MAXMEM 4987268 STALE 40537 MAXMEM_RSS 1970184
(process managers showed the process memory usage staying under 5 GB, and the MAXMEM value above also stayed below the 5000000 limit, so the limit apparently was never hit)
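PPS: another option I have seen mentioned is a cgroup limit, which counts the actual resident memory of the whole process tree rather than per-process virtual memory; on a systemd machine something like this should work (a sketch, untested on my setup):

```shell
# Sketch: run the whole tree in a transient cgroup scope with a hard
# memory ceiling; the kernel OOM-kills the group if it exceeds 5G.
systemd-run --user --scope -p MemoryMax=5G ./script.sh
```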