Why does bash take 64meg vmem on 64bit vs 5meg on 32bit?
I have 2 identical installs: one 64-bit, one 32-bit. Start a simple bash shell and check its memory: the 64-bit system uses 64 MB, the 32-bit system uses 5 MB.
I am wondering if that's just the way it is or if there's a setting for this.
I login with a default bash shell.
echo $$
ps u -p $$
The memory I'm looking at is the VSZ. I'm working on limiting users' virtual memory footprint, and this can add up if your process has many child processes.
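A concrete version of that check (ps reports both VSZ and RSS in KiB):

```shell
# Show the current shell's virtual size (VSZ) and resident size (RSS).
# $$ expands to the PID of the running shell, so no manual copying is needed.
ps -o pid,vsz,rss,comm -p $$
```

VSZ counts every page in the address space, whether or not it is resident; RSS counts only the pages currently in physical memory.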
Oops! I was misinterpreting the output. I did the same comparison you did and I'm seeing similar results.
But notice the RSS isn't much bigger for 64-bit than it is for 32-bit. Only the VSZ is much bigger.
I know this does you no good for purposes of limiting user virtual memory, but that large amount of memory reported for bash isn't real. I'm pretty sure it not only isn't taking up physical memory, it also isn't backed by anything (swap partition or original image, etc.). It is probably "demand zero" memory.
It is frequently useful in programming to allocate big chunks of demand zero memory that probably won't get used. In 64-bit programming there are even stronger reasons to do this and less reason not to.
Demand-zero memory that isn't actually used presents very little cost to the OS. The OS does need to manage the overcommit issue, and that might even be what you're trying to deal with by limiting virtual memory. But that should be dealt with in some other way, rather than by restricting this powerful programming practice.
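For what it's worth, the kernel's overcommit policy is visible and tunable through procfs; these paths are standard on Linux:

```shell
# 0 = heuristic overcommit (the default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# Committed_AS is the total address space handed out so far;
# CommitLimit only constrains anything in strict mode (2).
grep -i commit /proc/meminfo
```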
I don't know any way in Linux to get user memory totals in more meaningful categories. Without more meaningful categories, limiting or charging for memory will be based on unsound metrics and if it changes behavior that won't tend to be in a desirable direction.
Probably larger shared libraries. Have a look at the last line from "pmap -d <pid>".
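For example (the exact totals-line format can vary a little between procps versions):

```shell
# The last line totals the address space: "mapped" is all virtual space,
# "writeable/private" is what actually needs backing store if touched.
pmap -d $$ | tail -n 1
```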
Probably a pointless exercise trying to restrict the virtual size of something like a shell.
Very interesting results (at least for me). But forget the last line. The interesting part is the one line with a really big value for Kbytes.
Now can you tell us what /usr/lib/locale/locale-archive is? (I tried a google search before asking and found that it is a common question, but didn't find the answer). For an example of someone asking this question with detailed supporting info, see: http://lkml.org/lkml/2005/9/30/82
The 64-bit bash has that file mapped. It is most of bash's total virtual size. The 32-bit bash doesn't have it mapped at all.
The 32-bit system has the identical size /usr/lib/locale/locale-archive as the 64-bit system. The 32-bit bash just doesn't have it mapped.
On my system, there is lots of excess memory, so why is this mapped yet nonresident? IIUC, if it had ever been accessed it wouldn't have been paged out yet, so it would still be resident. So I think I can deduce that bash mapped it but never used it.
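One way to check that deduction, assuming the mapping exists on your system: /proc/&lt;pid&gt;/smaps reports per-mapping residency, so a mapped-but-untouched locale-archive should show a large Size: against Rss: 0 kB.

```shell
# Size: is the mapping's contribution to VSZ; Rss: is what is actually resident.
grep -A 3 locale-archive /proc/$$/smaps
```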
Next question: why is it in /usr/lib and not in /usr/lib64? I thought 64-bit stuff went in /usr/lib64.
Quote:
Probably a pointless exercise trying to restrict the virtual size of something like a shell.
I assume the point was to restrict the total virtual size of all concurrent processes of one user. Since the shell's virtual size is so far in excess of the memory resource it actually uses, that distorts the whole process of restricting the total.
If it hasn't been referenced, it hasn't been paged in yet. Anything R/O that is mmap'd (and in storage) will simply be discarded if the storage is required by someone else - no page out required.
This _x64 Ubuntu (Hardy) doesn't have that file mapped, and a background sleep is quite small - about 9 Meg. But again that's virtual, so who cares. The reason I mentioned the last line of the "pmap -d ..." is to emphasize the difference between virtual space and storage that needs to be "backed".
Basically unreferenced virtual space is "cost-free" - there are better things to worry about from a performance/tuning point of view. What constitutes the (total) used memory for a process (in Linux) is highly debatable. The kernel devs seem to have accepted a recent definition - I have my doubts; IMHO it's likely to just cause more confusion.
In a multi-lib implementation, there is no requirement for all modules to be 64-bit; those that need to be are, the others are optional.
Update: Fedora9 _x64 shows this mapping. Bumps the virtual to over 80 Meg.
Quote:
The reason I mentioned the last line of the "pmap -d ..." is to emphasize the difference between virtual space and storage that needs to be "backed".
I don't know what resource the OP cares about that motivates the idea of limiting virtual memory.
Maybe that resource is swap space, in which case that distinction on the bottom line of pmap matters (but still isn't a very accurate measure).
Maybe that resource is page thrashing (the time involved in replacing pages in physical memory with pages on disk). So, IIUC, that distinction on the bottom line of pmap wouldn't matter at all, while total virtual memory would be such an inaccurate measure of the resource he actually cares about that measuring it would be useless.
Quote:
Basically unreferenced virtual space is "cost-free" - there are better things to worry about from a performance/tuning point of view.
I've been saying that from near the start of this thread, and I still would guess that the OP's plan "limiting users virtual memory footprint" is misguided. But we have yet to hear from the OP on what he actually hopes to accomplish by doing that. I can't think of a reason that wouldn't be totally invalidated by the things we discussed above. But what reason has the OP thought of?
I have a grid environment with machines that have 8 cores and 32 GB of memory. Each job slot requires 1 core and 2 GB; any job that goes over its memory basically takes 2 job slots. If I set vmem limits in the grid, it sets the ulimit on the job. It seems ulimits on RSS are not enforced in Linux, so I went after vmem.
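A sketch of what that grid limit amounts to: a vmem limit ends up as a cap on the job's address space (RLIMIT_AS), which ulimit -v exposes in the shell. The 100 MiB figure here is just an illustration:

```shell
# Cap the address space of a subshell (and anything it spawns) at 100 MiB.
# Any allocation that would push total virtual size past the cap fails,
# regardless of how much of that space was ever actually resident.
(
  ulimit -v 102400   # RLIMIT_AS in KiB; affects this subshell only
  ulimit -v          # confirm the cap is in place
)
```

This is exactly why a 64-bit bash with a huge-but-unused mapping eats into such a limit even though it costs almost no real memory.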
I don't know what that means, so I expect I'm missing an important aspect of your problem.
Quote:
any job that goes over its memory basically takes 2 job slots.
Is that primarily a performance issue or primarily a billing issue?
For billing, you probably don't want a job to get more than its share even if (due to light loading elsewhere) getting more than its share wouldn't have negative impact on anything else.
For performance, the only reason to stop a job getting more than its share is that it is so hard to determine whether the extra usage would actually have a negative impact on other jobs.
Quote:
It seems ulimits on RSS are not enforced in Linux
What testing or other investigation did you do to find that fact?
I did some google searches and found that claim backed up only by testing that completely ignores some of the differences between physical and virtual memory. The observation that a limit on RSS fails to limit vm was given as support for the claim that it isn't enforced.
I don't know whether ulimits on RSS are enforced; I've never tried it. Measuring physical memory per job is a tricky question, so enforcing it would be a tricky process and at best approximate. But if it is enforced at all, it might be a better fit for your needs. Don't write it off just on the basis of the confused discussions you might find with a google search similar to the one I just did.
The grid is a Sun-Grid-Engine setup with over 1000 cores running batch jobs, but that is not an issue. You answered my question with the locale-archive being mapped. I guess I have to work on that issue now.
ulimit on RSS should be enforceable (but not against the root user), although I've never used it. Have a look at cgroups - I use it to limit CPUs, but recently the memory controller has made its way into mainline. That way you can enforce (real) memory limits by user, group, or whatever - all driven by echoing values into files in the cgroup filesystem, so it's easily scriptable and updatable as needed.
See ../Documentation/cgroups.txt for the basic idea.
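A minimal sketch, assuming the cgroup v1 memory controller and a mount point of /cgroup (names and paths vary by distro, and all of this needs root):

```shell
# Mount the memory controller and create a group for one job slot.
mount -t cgroup -o memory none /cgroup
mkdir /cgroup/jobslot1
# Cap resident memory at 2 GiB - unlike ulimit -v, this counts real pages,
# so a huge-but-untouched mapping like locale-archive doesn't count against it.
echo $((2 * 1024 * 1024 * 1024)) > /cgroup/jobslot1/memory.limit_in_bytes
# Move the current shell (and its future children) into the group.
echo $$ > /cgroup/jobslot1/tasks
```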