Quote:
Originally Posted by aral
1. What is the difference between the access time for a process to access a shared memory segment and the access time for a process to access its own local memory on a Linux system? As far as I know the access time for a process to a shared memory segment is milliseconds, is that right? If yes, then what's the access time for a process to access its own local memory?
2. Is it faster, at run time, to use the simple POSIX shared memory functions (like shm_open) wrapped in a class instead of using a heavyweight shared memory library like Boost? If yes, is the difference considerable or not?
The "access time" of memory depends upon
locality of reference, which is the base assumption that drives the virtual memory system,
viz: "The
next memory address requested by this program is probably close to one of the
recent memory addresses requested by this program." The virtual memory system tries to identify the current
working set of pages needed by this process, keeping recently-used pages instantly available and allowing older pages to gradually disappear from the process's working-set.
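To see the working-set idea concretely, here's a minimal Linux-specific sketch (the setup is mine, not something from your question): it creates an anonymous mapping, touches only the first few pages, and then asks mincore(2) which pages the kernel currently has resident.
Code:
// Snapshot of which pages of a mapping are resident right now.
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t page = sysconf(_SC_PAGESIZE);
    const size_t len  = 64 * page;

    // Anonymous demand-zero mapping: no page is resident until touched.
    char *buf = static_cast<char *>(
        mmap(nullptr, len, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    // Touch only the first 8 pages; the other 56 stay untouched.
    for (size_t i = 0; i < 8 * page; i += page)
        buf[i] = 1;

    std::vector<unsigned char> vec(len / page);
    if (mincore(buf, len, vec.data()) != 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (unsigned char v : vec)
        resident += (v & 1);        // low bit set = page is in core
    printf("%zu of %zu pages resident\n", resident, vec.size());

    munmap(buf, len);
    return 0;
}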
What you are trying to avoid in all cases is a page fault, where the page you need isn't in real memory and the virtual memory system has to go out to disk (or wherever) and get it. That, by the way, is where your "milliseconds" figure comes from: an access to a resident page costs nanoseconds, while a major fault that has to hit the disk can cost milliseconds.
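You can watch faults happen from inside the process using the real getrusage(2) counters (ru_minflt / ru_majflt). A hedged sketch: touching fresh pages of an anonymous mapping costs one minor fault per page (a 4 KiB page size is assumed here for simplicity).
Code:
// Count page faults before and after touching 16 MiB of fresh pages.
#include <sys/resource.h>
#include <sys/mman.h>
#include <cstdio>

static void report(const char *label) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("%s: minor=%ld major=%ld\n", label, ru.ru_minflt, ru.ru_majflt);
}

int main() {
    report("before");

    const size_t len = 16 * 1024 * 1024;   // 16 MiB, demand-zero
    char *buf = static_cast<char *>(
        mmap(nullptr, len, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (buf == MAP_FAILED) return 1;

    // First touch of each page triggers a (minor) fault.
    for (size_t i = 0; i < len; i += 4096)
        buf[i] = 1;

    report("after touching every page");
    munmap(buf, len);
    return 0;
}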
It really doesn't matter how the page came to be part of your working set: once mapped into your address space, shared memory and non-shared memory are paged the same way and perform the same way.
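Which also answers your second question: a thin class around the plain POSIX calls is perfectly fine, and there is no per-access speed difference between it and a library like Boost, because after the mmap both are just memory. Here's a rough RAII sketch; the class name and error handling are mine, not any standard API (and older glibc needs -lrt to link shm_open).
Code:
// Minimal RAII wrapper around shm_open + ftruncate + mmap.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <stdexcept>

class ShmRegion {
public:
    ShmRegion(const char *name, size_t size) : size_(size) {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) throw std::runtime_error("shm_open failed");
        if (ftruncate(fd, size) != 0) {
            close(fd);
            throw std::runtime_error("ftruncate failed");
        }
        addr_ = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
        close(fd);                  // the mapping keeps the segment alive
        if (addr_ == MAP_FAILED) throw std::runtime_error("mmap failed");
    }
    ~ShmRegion() { munmap(addr_, size_); }   // segment persists until shm_unlink
    void  *data() const { return addr_; }
    size_t size() const { return size_; }
private:
    void  *addr_;
    size_t size_;
};

// Usage: ShmRegion region("/my_segment", 4096);
// Another process opening "/my_segment" sees the same bytes.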
The working-set phenomenon favors memory access-methods that tend to exhibit locality-of-reference. For example, singly-linked lists aren't very efficient: each node can live on a different page, so you might conceivably page-fault on every single step. A tree or hash-table structure that kept its own small pool of MRU (most-recently-used) nodes would be much more efficient, because it could zero in on the desired data with a minimum of scattered memory references.
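To make that concrete, here's a quick illustrative sketch (not a careful benchmark): summing the same N integers through a contiguous std::vector versus a std::list, where every node is a separate heap allocation. The vector walk stays within a few pages at a time; the list walk jumps all over the heap.
Code:
// Same work, different locality: contiguous vector vs. node-per-element list.
#include <chrono>
#include <cstdio>
#include <list>
#include <numeric>
#include <vector>

// Sum a container and report how long the traversal took.
template <typename C>
static void time_sum(const char *label, const C &c) {
    auto t0 = std::chrono::steady_clock::now();
    long long sum = std::accumulate(c.begin(), c.end(), 0LL);
    auto t1 = std::chrono::steady_clock::now();
    long long us = std::chrono::duration_cast<
        std::chrono::microseconds>(t1 - t0).count();
    printf("%-6s sum=%lld in %lld us\n", label, sum, us);
}

int main() {
    const size_t n = 2000000;
    std::vector<int> vec(n, 1);   // one contiguous block: good locality
    std::list<int>   lst(n, 1);   // one heap allocation per node: poor locality
    time_sum("vector", vec);
    time_sum("list",   lst);
    return 0;
}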
In designing your system, I simply recommend simplicity and clarity over excessive obsession with "speed." After all, if you write something "very clever" and then spend three or four hours debugging it ... that's 14.4 billion microseconds of potential computer time (and valuable hair!) wasted.