LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   Linux stack (https://www.linuxquestions.org/questions/linux-general-1/linux-stack-4175455812/)

shp47 03-27-2013 12:48 PM

Linux stack
 
Hello,

I have a question about Linux's user-space stack.
When the stack size for user space is left at its default (8M) and I run the
application, I see that only about 1% of the stack is used (information from pmap).
If I change the stack size to 32K (using ulimit), then almost 85% of the
stack is used. So my questions are:

- What do those percentages reflect?
- Will it affect the application's execution?

Thank you in advance.

pan64 03-28-2013 05:56 AM

The usage of the stack changes, so you cannot compare those numbers directly. An app will die if there is no space left for the stack (out of memory).

shp47 03-28-2013 11:13 AM

Quote:

Originally Posted by pan64 (Post 4920434)
The usage of the stack changes, so you cannot compare those numbers directly. An app will die if there is no space left for the stack (out of memory).

Thanks, that's right.
What I did, I just start the program and program will wait for a key input. So in both cases, application is in the same situation, but with different maximum stack size. So the question is

Why %80 of stack is used when the maximum size of it is 32K and in 8M maximum size of the stack, just %1 is used. Does this means that for this app we need at least 28K stack (%80 of 32K), otherwise there is a good chance of stack overflow or crash? And how come when the maximum stack size is set to 8M, kernel gives the app 132K? Where these numbers come from.

Thanks you for reply.

jpollard 04-01-2013 07:58 AM

It is simply how the percentage is calculated: used/maximum × 100. You changed the maximum to a smaller value, so the same usage is a larger percentage of the maximum. For a numerical example: (5/100) × 100 = 5%, while (5/10) × 100 = 50%.

johnsfine 04-01-2013 09:00 AM

Quote:

Originally Posted by shp47 (Post 4920593)
How come, when the maximum stack size is set to 8M, the kernel gives the app 132K? Where do these numbers come from?

Where did you get that 132K from?

The amount of stack used is typically not affected by the max stack usage limit you have set (unless the limit is exceeded).

You have reported 28K of stack used when the limit is 32K, so we would expect the same 28K to be used for any higher stack limit.

shp47 04-01-2013 01:02 PM

Quote:

Originally Posted by johnsfine (Post 4922754)
Where did you get that 132K from?

The amount of stack used is typically not affected by the max stack usage limit you have set (unless the limit is exceeded).

You have reported 28K of stack used when the limit is 32K, so we would expect the same 28K to be used for any higher stack limit.

pmap {pid} gives me that info (stack size): 132K is reported with the 8M maximum and 28K with the 32K maximum.
That's my question too: why does it change when I increase the maximum?

shp47 04-01-2013 01:06 PM

Quote:

Originally Posted by shp47 (Post 4922922)
pmap {pid} gives me that info (stack size): 132K is reported with the 8M maximum and 28K with the 32K maximum.
That's my question too: why does it change when I increase the maximum?


for 8M
401af000 40K r-x-- /usr/lib/libgcc_s.so.1
401b9000 28K ----- /usr/lib/libgcc_s.so.1
401c0000 4K r---- /usr/lib/libgcc_s.so.1
401c1000 4K rw--- /usr/lib/libgcc_s.so.1
401f3000 80K r-x-- /lib/libpthread-2.8.so
40207000 28K ----- /lib/libpthread-2.8.so
4020e000 4K r---- /lib/libpthread-2.8.so
4020f000 4K rw--- /lib/libpthread-2.8.so
40210000 8K rw--- [ anon ]
40212000 1188K r-x-- /lib/libc-2.8.so
4033b000 28K ----- /lib/libc-2.8.so
40342000 8K r---- /lib/libc-2.8.so
40344000 4K rw--- /lib/libc-2.8.so
40345000 12K rw--- [ anon ]
40348000 4K ----- [ anon ]
40349000 8188K rw--- [ anon ]
40b48000 4K ----- [ anon ]
40b49000 8188K rw--- [ anon ]
be9ee000 132K rw--- [ stack ]

For 32K

40131000 40K r-x-- /usr/lib/libgcc_s.so.1
4013b000 28K ----- /usr/lib/libgcc_s.so.1
40142000 4K r---- /usr/lib/libgcc_s.so.1
40143000 4K rw--- /usr/lib/libgcc_s.so.1
4015e000 476K r-x-- /lib/libm-2.8.so
401d5000 28K ----- /lib/libm-2.8.so
401dc000 4K r---- /lib/libm-2.8.so
401dd000 4K rw--- /lib/libm-2.8.so
401de000 4K ----- [ anon ]
401df000 28K rw--- [ anon ]
401e6000 4K ----- [ anon ]
401e7000 28K rw--- [ anon ]
40256000 80K r-x-- /lib/libpthread-2.8.so
4026a000 28K ----- /lib/libpthread-2.8.so
40271000 4K r---- /lib/libpthread-2.8.so
40272000 4K rw--- /lib/libpthread-2.8.so
40273000 8K rw--- [ anon ]
40275000 1188K r-x-- /lib/libc-2.8.so
4039e000 28K ----- /lib/libc-2.8.so
403a5000 8K r---- /lib/libc-2.8.so
403a7000 4K rw--- /lib/libc-2.8.so
403a8000 12K rw--- [ anon ]
bef96000 28K rw--- [ stack ]
ffff0000 4K r-x-- [ anon ]

pan64 04-01-2013 04:20 PM

I assume it is the (default?) reserved stack size, not the amount actually used. In the 8M case a bigger default was used. You could try printing the real stack pointer in your code. But I'm not really sure, just an idea...

jpollard 04-01-2013 05:31 PM

I believe what you are seeing is a mirage. The 132K is memory available for use without requiring a page fault. A smaller maximum also reduces what is available without a page fault.

This minimizes page faults for processes that don't use that much.

BTW, the kernel code that allocates this should be in fs/binfmt_elf_fdpic.c. This file gets the initial stack allocation either from the application's ELF header or from the computed size of the process invoking the new code, so some variation can exist depending on how processes are started or how the invocation is done.

When the allocation is taken from the parent process, how much stack is initially granted depends on the past history of that process (remember, a parent process first does a fork and then an exec, so the stack at the time of the exec depends on the parent process, with a copy-on-write flag). This only concerns the initial allocation of the stack size - actual usage of the stack can vary, and pages are only allocated when actually used. So the initial stack space may easily be a number of pages just to provide a starting point (the absolute minimum is 2 pages, 8K).

This is NOT (I repeat, NOT) to be taken as gospel - I haven't traced the entire execution path from parent to child to see what the stack size will actually be. There are complications because the initial physical allocation of new pages may depend on how the process doing the exec is running...

Perhaps stevea can give more authoritative information.

sundialsvcs 04-03-2013 08:22 AM

Remember that user-space memory is virtual. It does not occupy physical resources until a page-fault occurs. A megabyte-sized stack area does not mean that a megabyte's worth of RAM is being used by it. It simply means that if a process goes into endless-recursion mode it's gonna die when it hits that limit.

This can also impact the design of a program, e.g. with regard to local variables. You can't just allocate a megabyte-sized array as a local variable, because that comes from the stack. But you can allocate a megabyte using "malloc()" or somesuch, and refer to it using "a pointer to a megabyte-sized array," because that space does not come from the stack area.

jpollard 04-03-2013 05:58 PM

Quote:

Originally Posted by sundialsvcs (Post 4924290)
Remember that user-space memory is virtual. It does not occupy physical resources until a page-fault occurs. A megabyte-sized stack area does not mean that a megabyte's worth of RAM is being used by it. It simply means that if a process goes into endless-recursion mode it's gonna die when it hits that limit.

This can also impact the design of a program, e.g. with regard to local variables. You can't just allocate a megabyte-sized array as a local variable, because that comes from the stack. But you can allocate a megabyte using "malloc()" or somesuch, and refer to it using "a pointer to a megabyte-sized array," because that space does not come from the stack area.

It doesn't matter where the MB comes from, only how it is used. A MB allocated on the stack is automatically reclaimed when the function exits; a MB allocated from the heap has to be explicitly deallocated.

Nothing prevents you from allocating a MB on the stack... if that is what you want, and your stack ulimit is not exceeded by actually using it.

johnsfine 04-03-2013 06:15 PM

Quote:

Originally Posted by jpollard (Post 4924626)
Nothing prevents you from allocating a MB on the stack... if that is what you want, and your stack ulimit is not exceeded by actually using it.

I think the point was that your stack ulimit often is exceeded by the large data structures your program wants to use, while the malloc pool is typically much bigger. Most users/programmers either don't know how to adjust the stack limit or don't want to add that extra complexity to their task (including the complexity that arises when user and programmer are not the same person).

So it is typically simpler to just say that really big data structures must be allocated from the heap, not the stack.

jpollard 04-03-2013 07:14 PM

And yet, programming with the stack is simpler.

There is no need to track every use of stack variables, as long as they stay within the context of the stack frame.

With the heap you have to be sure that every allocation has a matching deallocation, in addition to the normal range limits.

Without being careful, you create problems that are really hard to find - like memory leaks.

Been there, done both. It all depends on the problem being solved.

sundialsvcs 04-03-2013 08:31 PM

That's exactly what I meant, Johnsfine ... thank you.

The stack space is generally intended for local variables, which, as you say, are automatically cleaned up in a very convenient way as subroutines enter and exit. It is therefore entirely suitable for data structures of moderate size.

Of course, in languages like C++ or typical scripting languages, you don't have to think about this too much: there are constructors, destructors, and access methods that let you very conveniently work with arbitrary data structures without giving them much thought. And it just so happens that, while the object instance might live on the stack, it automagically allocates and cleans up its memory from the much larger heap.

jpollard 04-04-2013 07:21 PM

Sometimes yes, sometimes no.

It depends entirely on the application and how well it handles exceptions. I have seen some nasty memory leaks from C++-based applications, just as I have from C.

The problem with dynamic memory allocation is that handling it is never simple. It always starts off simple and gets more complex as the project grows.

Eventually you find you need real garbage collection to recover the lost memory.


All times are GMT -5.