Old 10-06-2011, 09:59 AM   #1
Super TWiT
Member
 
Registered: Oct 2009
Location: Cyberville
Distribution: Debian
Posts: 132

Rep: Reputation: 16
Limit Computing Resources?


Okay, so I am planning on building a little 3D rendering rig, which I will also use as my main PC. I want Blender to be running constantly in the background while still letting me use the machine comfortably. I am planning on getting a triple-core CPU. My thought was that I could restrict Blender to two cores while I use the third core. Is that possible? Also, is there a way to limit how much physical RAM Blender can use, while still allowing it unlimited swap space? Perhaps I could run Blender as a different user...
 
Old 10-06-2011, 10:13 AM   #2
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
I don't see the point of restricting Blender to two cores. Wouldn't it be better to have Blender running at a lower priority? When the foreground activities have zero active threads, why would you want Blender using only two cores and leaving one unused? When the foreground activities have more than one active thread, why would you want Blender contending for two of the three cores? A lower priority gives you better behavior in either case compared to limiting it to two cores.

But limiting its physical RAM use would make a lot more sense, if you could do it. Ordinary Linux kernels give you no practical way to do that. I'm not sure what you might be able to do after rebuilding the kernel to include some unusual feature.
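For the priority part, a one-liner is enough; a minimal sketch, assuming the render is launched from a shell (the .blend path and render options are placeholders):

Code:
# start a background render at the lowest CPU priority (nice 19)
nice -n 19 blender -b ~/projects/scene.blend -a &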
 
Old 10-06-2011, 11:13 AM   #3
Super TWiT
Member
 
Registered: Oct 2009
Location: Cyberville
Distribution: Debian
Posts: 132

Original Poster
Rep: Reputation: 16
Okay, that makes sense. I thought you could limit a user's RAM usage with ulimit. Here's an article on how to do something like this with BSD.
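What I had in mind was something along these lines (just a sketch; the path is a placeholder, and as discussed below the resident-set limit turns out to be ignored on Linux):

Code:
# in the shell that will launch Blender
ulimit -m 2097152     # intended cap on resident set size, in KB (ignored on Linux)
ulimit -v unlimited   # leave virtual memory, and therefore swap, uncapped
blender -b ~/projects/scene.blend -a &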

EDIT
Actually, this looks really useful.

Last edited by Super TWiT; 10-06-2011 at 11:25 AM.
 
Old 10-06-2011, 12:59 PM   #4
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Super TWiT View Post
Actually, this looks really useful.
ulimit -m is documented as "has no effect on Linux"

If the kernel machinery behind ulimit -m were implemented, that would be nearly what you need, but not quite:

The following oversimplified view of page flow attempts to be simple enough to understand, yet accurate enough to provide insight.

Most page faults are "soft faults", meaning a page is moved instantly from the cache into the resident set of the current process. Statistically, that causes a different page to be removed from some resident set (usually of the same process) into the cache. If that second page belongs to a different process, then the resident size of the current process grows and more physical RAM is used by this process. That is what ulimit -m, if it worked, would limit.

Some page faults are "hard faults": a page is read from disk into a free page and added to this process's resident set. Statistically, that displaces a page from some resident set into the cache (as above) and displaces a third page from the cache to the free pool.

If the foreground processes were all waiting for user action, network traffic, or anything else slow, while a low-priority job uses CPU time and takes hard page faults, then even if ulimit -m kept the resident size from growing, the cache would fill with pages from that low-priority task, and (without a lot of extra bookkeeping in support of ulimit -m) the other tasks' resident sets would still be pruned, increasing the cache size.

When some high priority task is resumed by user input or network traffic, it would be in an extreme state of memory starvation and would take a while to recover. That is exactly what you want to avoid with ulimit -m, but even if Linux supported ulimit -m, it wouldn't quite do the job.

An effective limit on excess memory use by low-priority processes would need to tag pages in the cache with some kind of memory priority. Then, when dropping a page from the cache to the free pool, it would prefer to drop a low-priority page that has been in the cache a short time rather than a high-priority page that has been in the cache longer (without memory priority, it simply drops whichever page has been in the cache longer). So far as I know, Linux has no mechanism for memory priority tags on pages.

Last edited by johnsfine; 10-06-2011 at 01:04 PM.
 
Old 10-06-2011, 05:01 PM   #5
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Hi,

I would look at the 'Blender (software)' page on Wikipedia;
Quote:
Blender hardware production requirements:
CPU: 2 GHz, multi-core (64-bit)
RAM: 8-16 GB
Graphics: OpenGL card with 1 GB RAM, ATI FireGL or Nvidia Quadro
Display: 1920×1200 pixels, 24-bit color
Input: three-button mouse and a graphics tablet
The Blender requirements are listed above; see the chart on the Blender Wikipedia page for the full breakdown.
 
Old 10-09-2011, 10:03 AM   #6
Super TWiT
Member
 
Registered: Oct 2009
Location: Cyberville
Distribution: Debian
Posts: 132

Original Poster
Rep: Reputation: 16
I think I have found my answer. I could modify /etc/security/limits.conf. That allows setting a maximum amount of memory per user or group. It can also limit priorities and nice values. I tried it on my test Linux box (an iMac G3) and it worked!
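The entries would look roughly like this (a sketch; "render" is a hypothetical user name and the memory values are in KB):

Code:
# /etc/security/limits.conf
# <domain>   <type>   <item>      <value>
render       hard     rss         2097152    # resident set size cap (KB) -- accepted, but ignored by current kernels
render       hard     as          8388608    # address space cap (KB) -- enforced; allocations beyond it fail
render       -        priority    10         # run this user's processes at nice 10

The priority line is the part that matters most in practice; the rss item is documented in the man page but, as noted below, mainstream kernels don't enforce it.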
 
Old 10-09-2011, 10:55 AM   #7
dugan
LQ Guru
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 11,226

Rep: Reputation: 5320
Quote:
Originally Posted by Super TWiT View Post
What I was thinking, is that I can restrict blender to two-cores, while I use the third core.
As other members have pointed out, this really isn't a good idea. You will get the best performance if you just let the kernel's scheduler do the scheduling. The only real reason to bind a process to one core is for applications that were written for a single core and break when run on multi-core machines (I hear KOTOR used to be like that).

If you still want to do it, you can use taskset.

Here's a tutorial:

http://www.serverwatch.com/tutorials...rity-Tasks.htm

Here's some background information about how it works:

http://www.linuxjournal.com/article/6799
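For example (a sketch; the CPU numbers depend on your machine's topology and the PID is made up):

Code:
# launch Blender pinned to CPUs 0 and 1
taskset -c 0,1 blender -b ~/projects/scene.blend -a &
# or change the affinity of an already-running process
taskset -pc 0,1 12345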

Last edited by dugan; 10-09-2011 at 10:59 AM.
 
Old 10-09-2011, 11:08 AM   #8
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Super TWiT View Post
I think I have found my answer. I could modify /etc/security/limits.conf. That does allow a maximum amount of memory per user, or group. It also limits priorities, and nice values. I tried it on my test linux box (imac g3) and it worked!
I doubt that is an effective answer for the purpose you described.

I believe it limits the same things you can limit with ulimit, and nothing you can limit with ulimit comes close to the one thing you actually ought to limit (physical RAM use).

You should be setting the nice value. That does an important part of what you want to accomplish. For a specific long-running process under your control, there are lots of different easy ways to set its nice value.
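For example, for a render that is already running (the PID is made up), something like:

Code:
# drop an already-running process to the lowest CPU priority
renice -n 19 -p 12345
# optionally also drop its disk I/O priority to the idle class
ionice -c 3 -p 12345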

Don't be confused by the various limits on various types of virtual memory use. For most programs, a limit on a type of virtual memory either has no effect (because the program didn't want that much) or causes the program to crash (because it did want that much). Very few programs have built-in detectors for memory limits that let them switch to slower, less memory-hungry algorithms instead of crashing when the limit is hit.

You don't want Blender to crash if it tries to use too much "memory". You want it to slow down and use less physical RAM. You don't care if it uses a lot of virtual memory; you only want to limit its physical RAM use.

I also wish there were a practical way to do that, because I occasionally run multi-week computations on intermittently loaded Linux systems. While it is easy to set the nice value so short-term work gets the proper CPU priority, that short-term work may still get memory starved, because there is no practical prioritization of physical RAM use. CPU prioritization indirectly creates some physical memory prioritization, so a running high-CPU-priority job that has an adequate resident set size can almost always keep it against interference from a lower-priority job. But a high-priority job starting, or resuming from a long stall, without an adequate resident set size might or might not be able to take significant RAM away from a lower-priority memory-hogging task.

Last edited by johnsfine; 10-09-2011 at 11:19 AM.
 
Old 10-09-2011, 06:40 PM   #9
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,128

Rep: Reputation: 4121
Try one of my old favourites: cgroups.
Personally I like to limit tasks to a subset of cores/CPUs - that lets me monitor tests unimpeded on the other core(s). Memory can also be managed separately these days - including swap, should you feel the need. See ./Documentation/cgroups in the kernel source, or this LWN article.
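A rough sketch using the cgroup (v1) interface - the mount point, group name, and PID are just examples, and many distros mount each controller separately:

Code:
# create a group that owns CPUs 0-1 and at most 2 GB of physical RAM
mount -t cgroup -o cpuset,memory cgroup /sys/fs/cgroup   # skip if already mounted
mkdir /sys/fs/cgroup/render
echo 0-1 > /sys/fs/cgroup/render/cpuset.cpus
echo 0   > /sys/fs/cgroup/render/cpuset.mems
echo 2G  > /sys/fs/cgroup/render/memory.limit_in_bytes
# memory.memsw.limit_in_bytes additionally caps RAM+swap, if swap accounting is enabled
# move an already-running Blender (PID made up) into the group
echo 12345 > /sys/fs/cgroup/render/tasks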
 
Old 10-10-2011, 12:14 PM   #10
Super TWiT
Member
 
Registered: Oct 2009
Location: Cyberville
Distribution: Debian
Posts: 132

Original Poster
Rep: Reputation: 16
Actually, johnsfine, setting a RAM limit in /etc/security/limits.conf does restrict memory usage. I tried limiting the amount of memory a test user could use: I set the limit to 2 MB as a test, and the user couldn't even log in; it said "resource unavailable". So yes, /etc/security/limits.conf does work. But you are right that limiting the RAM might not be a good idea anyway, as I don't want Blender to crash. However, in limits.conf you can also change a user's overall process priority, which governs CPU scheduling and indirectly helps with RAM and I/O contention. That kinda helps make sure I get the RAM I want (which usually isn't much).

Last edited by Super TWiT; 10-10-2011 at 12:21 PM.
 
  

