Originally Posted by BHABANIPRASADPATI
Wow... nice one.
Did you notice the thread was old, and that
the linked content was effectively very old (nearly obsolete even when it was posted)?
It is even more true now that there is no way of providing memory use statistics that are both simple and correct. But the reasons have evolved.
It is still true that a lot of code is shareable and that there are no good statistics on how much of it is actually shared. When a process uses shareable code, that code is counted in full in that process's memory use statistics, regardless of whether it is also counted in other processes' memory use statistics.
But that has become a very small share of virtual memory use, especially on 64-bit systems.
For example, I just looked at the biggest virtual memory users on one of the shared systems I'm logged into at the moment. The biggest one, at 419MB, is identified in top as /usr/bin/sealer but is actually /usr/bin/python (I assume python rewrites its argv based on which python program it is running).
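For the curious, Linux records the real executable independently of whatever argv claims, so top's COMMAND column can be cross-checked. A minimal Python sketch of that check (Linux-specific; "self" here stands in for any numeric PID you are allowed to inspect):

```python
import os

pid = "self"  # substitute a numeric PID you are allowed to inspect

# argv exactly as the process presents it -- this is what top displays
with open(f"/proc/{pid}/cmdline", "rb") as f:
    argv = [arg.decode() for arg in f.read().split(b"\0") if arg]

# the binary actually mapped into the process, regardless of what argv claims
exe = os.readlink(f"/proc/{pid}/exe")

print(argv, "->", exe)
```

Run against a python process whose argv has been rewritten, the readlink result still points at the python binary.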
I've seen many questions online (without great answers) about what that process is and why it is using so much virtual memory. I'm curious myself but don't have time for a serious investigation.
On my system, I see that only 11MB of that 419MB is resident and only 4.6MB is shareable. The entire moderately loaded multi-user CentOS system is using only 128MB of swap space, which I expect is mainly other tasks, not that 419MB task. But anyway, after including all the .so mappings (which are in that shareable 4.6MB), plus all of the resident memory (which already double counts some of the .so mappings), plus all of the swapped-out memory, most of the 419MB is still unaccounted for.
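That kind of accounting can be redone directly from /proc/&lt;pid&gt;/smaps, which breaks the virtual size down mapping by mapping. A rough Python sketch (Linux-specific; the .so / anonymous / file split is my own categorization for illustration, not anything top itself reports):

```python
import re
from collections import defaultdict

def smaps_totals(pid="self"):
    """Sum Size, Rss, and Swap (in kB) per mapping category from /proc/<pid>/smaps."""
    totals = defaultdict(lambda: defaultdict(int))
    category = "[anon]"
    header = re.compile(r"^[0-9a-f]+-[0-9a-f]+\s")  # start of each mapping entry
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if header.match(line):
                parts = line.split()
                path = parts[5] if len(parts) > 5 else "[anon]"
                if ".so" in path:
                    category = ".so"
                elif path.startswith("["):
                    category = path      # [heap], [stack], [anon], ...
                else:
                    category = "file"
            elif line.startswith(("Size:", "Rss:", "Swap:")):
                field, kb = line.split()[:2]
                totals[category][field.rstrip(":")] += int(kb)
    return {cat: dict(fields) for cat, fields in totals.items()}

for cat, fields in sorted(smaps_totals().items()):
    print(cat, fields)
```

Comparing the Size totals against the Rss and Swap totals per category makes the "unaccounted for" gap visible: it is the virtual size that is neither resident nor swapped.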
In many cases, the large blocks of memory that show up in the virtual size but nowhere else are "demand zero" memory. I haven't taken the trouble to figure out where the invisible majority of the 419MB is in this example.
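Demand-zero memory is easy to demonstrate: an anonymous mapping counts in full against the virtual size the moment it is created, but costs almost no resident memory until its pages are actually touched. A small Python sketch of this, reading /proc/self/status (Linux-specific):

```python
import mmap

def vm_kb():
    """Return (VmSize, VmRSS) in kB for the current process."""
    stats = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, kb = line.split()[:2]
                stats[key.rstrip(":")] = int(kb)
    return stats["VmSize"], stats["VmRSS"]

size_before, rss_before = vm_kb()
region = mmap.mmap(-1, 256 * 1024 * 1024)  # 256 MB anonymous, demand-zero mapping
size_after, rss_after = vm_kb()

# The full 256 MB appears in VmSize immediately...
print("VmSize grew by", size_after - size_before, "kB")
# ...but VmRSS barely moves until the pages are actually written
print("VmRSS grew by", rss_after - rss_before, "kB")
```

Writing into the region afterwards (e.g. one byte per page) would make VmRSS climb, page by page, toward the full 256 MB.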
Maybe long ago, the level of actual sharing of .so mappings was a significant fraction of the cause of confusing memory statistics. Maybe long ago, influencing that level of sharing by sticking to one version/type of desktop software, as suggested in that article, was a significant factor in improving memory efficiency.
But now, physical and virtual memory sizes have both grown so much faster than .so mapping sizes that those things really don't matter.