Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Okay, so I am planning on building a little 3D rendering rig, which I will also use as my main PC. I want to be able to run Blender constantly in the background while still being able to use the PC comfortably. I am planning on getting a triple-core CPU. What I was thinking is that I could restrict Blender to two cores while I use the third core. Is that possible? Also, is there a way to limit how much physical RAM Blender can use, but allow it to use unlimited swap space? Perhaps I can run Blender as a different user...
I don't see the point of restricting Blender to two cores. Wouldn't it be better to have Blender running at a lower priority? When foreground activities have zero active threads, why would you want Blender using two cores and leaving one unused? When foreground activities have more than one active thread, why would you want Blender contending for two of the three cores? Lower priority gives you better behavior in either case compared to limiting it to two cores.
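Setting a lower priority is easy from the shell. A minimal sketch, using `sleep 60` as a stand-in for the actual Blender invocation (the `scene.blend` file name in the comment is illustrative):

```shell
# Start a background job at the lowest CPU priority (nice 19).
# Substitute the real command, e.g.: blender -b scene.blend -a
nice -n 19 sleep 60 &
bg_pid=$!

# Confirm the niceness the scheduler assigned:
ps -o ni= -p "$bg_pid"

# Or lower the priority of a process that is already running:
renice -n 19 -p "$bg_pid"

kill "$bg_pid"
```

With nice 19 the job soaks up idle CPU time but yields almost immediately whenever a normal-priority task wants to run.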
But limiting its physical RAM use would make a lot more sense, if you could do it. Ordinary Linux kernels give you no practical way to do that. I'm not sure what you might be able to do after rebuilding the kernel to include some unusual feature.
ulimit -m is documented as "has no effect on Linux".
If the kernel machinery behind ulimit -m were implemented, that would be nearly what you need, but not quite:
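You can see the gap yourself: the shell accepts and reports the limit, but the kernel does not enforce RLIMIT_RSS behind it. A quick illustration (the value is arbitrary):

```shell
# Set and report a resident-set "limit" of ~100 MB (value in KB):
ulimit -m 102400
ulimit -m        # prints 102400
# ...yet on a current Linux kernel a process can still grow its
# resident set well past this figure; the value is stored but ignored.
```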
The following oversimplified view of page flow attempts to be simple enough to understand, but accurate enough to provide insight.
Most page faults are "soft faults", meaning a page is moved instantly from the cache into the resident set of the current process. Statistically, that causes a different page to be removed from some resident set (usually of the same process) into the cache. If that second page is from a different process, then the resident size of the current process grows and more physical RAM is used by this process. That is what ulimit -m, if it worked, would limit.
Some page faults are "hard faults". That causes a page from disk to be read into a free page and added to this process's resident set. Statistically that displaces a page from some resident set into the cache (as above) and displaces a third page from the cache to the free pool.
If the foreground processes were all waiting for user action, network traffic, or anything else slow while a low-priority job was using CPU time and taking hard page faults, then even if ulimit -m kept the resident size from growing, the cache would fill with pages from that low-priority task, and (without a lot of extra bookkeeping in support of ulimit -m) the other tasks' resident sets would still be pruned, increasing the cache size.
When some high priority task is resumed by user input or network traffic, it would be in an extreme state of memory starvation and would take a while to recover. That is exactly what you want to avoid with ulimit -m, but even if Linux supported ulimit -m, it wouldn't quite do the job.
An effective limit on excess memory use by low priority processes would need to tag pages in the cache with some kind of memory priority. Then when dropping a page from cache to free, it would prefer to drop a low priority page that has been in the cache a short time rather than a high priority page that has been in the cache longer (without memory priority, it drops the page that has been in the cache longer). So far as I know, Linux has no method for memory priority tags on pages.
I think I have found my answer. I could modify /etc/security/limits.conf. That allows setting a maximum amount of memory per user or group. It also limits priorities and nice values. I tried it on my test Linux box (iMac G3) and it worked!
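For reference, each limits.conf line has the form `<domain> <type> <item> <value>`. A hypothetical fragment for a dedicated `render` user (the user name and the numbers are illustrative, not recommendations):

```
# /etc/security/limits.conf
# domain   type   item       value
render     hard   rss        1048576   # max resident set, KB (ignored by modern kernels)
render     hard   as         4194304   # max address space, KB
render     hard   priority   19        # run this user's processes at nice 19
```

These limits are applied by pam_limits at login, so they take effect the next time the user starts a session.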
What I was thinking, is that I can restrict blender to two-cores, while I use the third core.
As other members have pointed out, this really isn't a good idea. You will have the best performance if you just let the kernel's scheduler do the scheduling. The only real reason for binding a process to one core is for applications that were written for one core and then break if they're running on multi-core machines (I hear KOTOR used to be like that).
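For completeness, CPU binding on Linux is done with taskset. A sketch, again using `sleep` as a stand-in for the real program:

```shell
# Pin a new process to CPUs 0 and 1 (CPUs are numbered from 0):
taskset -c 0,1 sleep 30 &
pid=$!

# Show the current affinity of the running process:
taskset -cp "$pid"

# Move it onto CPU 0 alone:
taskset -cp 0 "$pid"

kill "$pid"
```

But as noted above, for this workload a nice value usually serves better than pinning.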
I think I have found my answer. I could modify /etc/security/limits.conf. That allows setting a maximum amount of memory per user or group. It also limits priorities and nice values. I tried it on my test Linux box (iMac G3) and it worked!
I doubt that is an effective answer for the purpose you described.
I believe it limits the same things you can limit with ulimit, and I believe nothing you can limit with ulimit is close to one of the things you ought to limit (physical ram use).
You should be limiting the nice value. That does an important part of what you want to accomplish. For a specific long running process under your control, there are lots of different easy ways to set its nice value.
Don't be confused by the various limits on various types of virtual memory use. For most programs, a limit on a type of virtual memory either has no effect (because the program didn't want that much) or causes the program to crash (because it did want that much). Very few programs have built-in detectors for memory limits that let them switch to slower, less memory-hogging algorithms instead of crashing when memory limited.
You don't want blender to crash if it tries to use too much "memory". You want it to slow down and use less physical ram. You don't care if it uses a lot of virtual memory. You only want to limit its physical ram use.
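The crash-instead-of-slow-down behavior is easy to demonstrate with the virtual-memory limit (ulimit -v), which the kernel does enforce; the sizes below are arbitrary:

```shell
# Give a shell a ~200 MB address-space cap, then try to allocate
# 500 MB inside it; the allocation fails outright instead of
# spilling gracefully to swap:
bash -c 'ulimit -v 204800; python3 -c "x = bytearray(500*1024*1024)"' \
  || echo "allocation failed under the limit"
```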
I also wish there were a practical way to do that, because I occasionally run multi-week computations on intermittently loaded Linux systems. While it is easy to set the nice value so short term work gets the proper CPU priority, that short term work may still get memory starved because there is no practical prioritization of physical ram use. CPU prioritization indirectly creates some physical memory prioritization, so a running high CPU priority job that gets an adequate resident set size can almost always keep that against interference from a lower priority job. But a high priority job starting or resuming from a long stall without an adequate resident set size might or might not be able to take significant ram away from a lower priority memory hogging task.
Try one of my old favourites, cgroups.
Personally I like to limit tasks to a subset of cores/CPUs - enables me to monitor tests unimpeded on other core(s). Memory can also be (separately) managed as well these days - including swap should you feel the need. See ./Documentation/cgroups, or this LWN article.
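On recent kernels the unified cgroup-v2 hierarchy exposes both controls through the filesystem. A minimal sketch (the group name and limits are illustrative, root privileges are required, and the cpuset controller must be enabled in the parent's cgroup.subtree_control; the v1 interface described in ./Documentation/cgroups differs in its file names):

```shell
# Create a group, cap its RAM at 2 GB, and confine it to CPUs 0-1:
mkdir /sys/fs/cgroup/render
echo "2G"  > /sys/fs/cgroup/render/memory.max
echo "0-1" > /sys/fs/cgroup/render/cpuset.cpus

# Move the current shell into the group; children inherit it.
# Then launch the job (substitute the real command):
echo $$ > /sys/fs/cgroup/render/cgroup.procs
sleep 60
```

Unlike ulimit, memory.max is a true physical-RAM cap: when the group exceeds it, the kernel reclaims the group's own pages (pushing them to swap) rather than letting it crowd out other tasks.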
Actually, johnsonfire, setting a RAM limit in /etc/security/limits.conf does limit RAM usage. I tried limiting the amount of RAM a test user could use, so I set it to 2 MB as a test, and the user couldn't even log in; it said "resource unavailable". So yes, /etc/security/limits.conf does work. But you are right that limiting the RAM might not be a good idea anyway, as I don't want Blender to crash. However, in limits.conf you can change a user's overall process priority, which affects how it competes for CPU time (and indirectly for RAM and I/O). This kinda helps make sure I get the RAM I want (which usually isn't that much).
Last edited by Super TWiT; 10-10-2011 at 12:21 PM.