I've taken a look at the threads on here and found various texts that tell me:
1) limits don't work the way people think they work
2) limits may not have the effect one would expect, unless the code they are supposed to affect is written with limits in mind
3) if they are used at all, they should be set as part of a startup script, rather than by refining the standard ones
Alas, this does not dissuade people from demanding that we tune them. Seeing as there is the potential to learn something interesting, I'm not disinclined to try.
The central theme that keeps coming up is that we want to limit the amount of RAM a process can consume. So I set the RSS to a hard limit of 8MB, load my 64MB file into RAM using a quick Perl script (three lines of code that reinforce, once again, that I need to learn some Perl), and in another shell observe that the script is indeed consuming 13% of my 512MB of RAM: my file plus a slight overhead.
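For what it's worth, the slurp looks roughly like this (the filename and the /tmp path are stand-ins for my real file):

```shell
# Create a 64MB test file, then slurp it whole so its full size lands in RSS.
dd if=/dev/zero of=/tmp/bigfile.dat bs=1M count=64 2>/dev/null

perl -e '
  local $/;                                # undef the record separator: slurp mode
  open(my $fh, "<", "/tmp/bigfile.dat") or die $!;
  my $data = <$fh>;                        # whole 64MB file now held in memory
  print length($data), "\n";               # prints 67108864
'
```

Watching it in top from another shell is where the 13% figure comes from.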
ulimit -a tells me my RSS is fixed at 8192 KB, which is also what I set the data size to in /etc/security/limits.conf:
* hard data 8192
* hard rss 8192
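In a bash session, those two lines translate to the following ulimit calls (same KB units as limits.conf; these only affect the current shell and its children):

```shell
ulimit -d 8192    # max data segment size, in KB (the "data" line)
ulimit -m 8192    # max resident set size, in KB (the "rss" line)
ulimit -d         # prints 8192
```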
If anything this validates point 1 above: it doesn't work the way I, at the very least, think it works. As for point 2, my code definitely was not written with limits in mind, and point 3 is a moot point.
Any ideas how I can set a limit that caps memory consumption at X MB and actually have it honoured?
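One avenue I want to try next: the setrlimit(2) manpage suggests the RSS limit is only honoured by old 2.4.x kernels, whereas the address-space limit (ulimit -v, RLIMIT_AS) is supposed to make allocations fail outright. A quick sketch of that test, with illustrative numbers (cap well above the interpreter's own footprint, allocation well above the cap):

```shell
# Run in a subshell so the limit does not stick to the interactive shell.
(
  ulimit -v 1048576                      # cap virtual address space at 1GB (KB units)
  perl -e 'my $x = "A" x (2 * 1024 * 1024 * 1024)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation failed"          # expect: the 2GB allocation fails
)
```

If that holds up, capping address space rather than RSS may be the closest thing to an enforced memory limit here.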