Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
I need to write a complex data processing system on Linux 2.6.xx that
consists of several processes. Two of them use a small shared memory region to exchange sampled data at a rate of 50 msec.
The shared segment is opened with mmap, backed by a file on a compact flash card.
From the mmap man page I read that data written by process A is seen by process B "immediately", which appears to be true,
and also that the underlying file (the one referenced by mmap) is only written back when one of the processes unmaps the shared region or calls msync.
But if I look at the backing file with ls -l,
I see that its timestamp always shows the current time, so it seems the file is being updated continuously,
which would be a disaster given the CF card and the required sampling rate.
Is there any possibility to prevent (disallow) the system from flushing the dirty pages of the shared segment back to the file?
Any help appreciated.
Thanks in advance.
Regarding the item in bold: why do you think so? I don't think the file in question needs to belong to the CF filesystem; /dev/shm* certainly doesn't belong to your CF filesystem. You need 'man shm_open'.
I don't think /dev/shm* belongs to your CF filesystem.
That's correct, it doesn't.
Quote:
Originally Posted by Sergei Steshenko
man shm_open
That is one way to share memory.
Quote:
Originally Posted by kam2630
The shared segment will be opened with mmap referenced to a file on a compact flash card.
This is a completely different way.
If there is no real reason to use a CF card, he can either do what he's doing now but using /dev/zero, or he can write completely different code to use shm_open() and friends.
Unfortunately, I have no answer to the original question, but I wanted to clarify the issue of CF flash card versus shm_open().
The only other thing I would suggest is monitoring I/O statistics (perhaps through /proc or something similar); that might be the way to measure the actual hardware throughput. I don't know the details, and I'm hoping that someone else will step forward with an answer.
If you've got a time-intensive requirement for updating data, it puzzles me why you are posting the data to a flash card. That's dependent upon a "physical" I/O operation, to a capacious-but-slow device. (Even if it appears to be "RAM," you know that in order to actually get to it you must perform USB I/O operations... and you know that the cycle-time of the memory on that card is relatively slow.)
I suggest that you simply map a shared memory segment... in RAM. If you need to post the data to a flash-card, dedicate a separate thread to the task of flushing the data to the card.
As I envision it, this separate thread would periodically grab the data from the shared-memory (using a mutex, of course, to assure data integrity), and it would move that "snapshot" off to another memory area for its own use. Having done so, it would release the mutex and then push the "snapshot" out to the card. (This "two-step" arrangement avoids having the mutex remain locked while the I/O is taking place.)
In this way, the data on the card would be reliably kept up-to-date, but none of the data-collecting processes would ever have to wait on it. System performance will remain stable and predictable. The card will not overheat.
Last edited by sundialsvcs; 03-25-2009 at 09:32 AM.