How would you design a concurrency system for Linux shared memory?
How would you design a concurrency system for Linux shared memory, given that:
-there are thousands of readers and writers
-because of the large number of readers and writers, they should not be allowed to obtain locks that block a large number of threads.
The mere statement that "there are thousands of readers" does not mean that any of them will be reading or writing data at the same moment. (In fact, on a typical system with one TCP/IP interface card, only one I/O operation can physically be in progress at any instant.) Therefore, a single thread running a simple select() loop can efficiently service pretty much any number of connections. (Sometimes you dedicate a single thread to handling new connection requests.) These are, by definition, "I/O-bound" operations.
The receiving threads can examine enough of the incoming data to, by some appropriate means, determine that the incoming data is valid and to associate it with some in-progress unit of work. (Often there is a central shared "tote board," protected of course by a semaphore, that manages the instantaneous status of the various units of work for all to see.) The thread then posts the unit of work to some suitable FIFO queue or queues, for consumption by a "thread pool" or a "process pool" of workers whose job it is to listen for work on one or several queues and to service the work-requests found there, placing work on some other queue or queues as necessary. A small set of "writer" processes send the completed work back to the remote clients. (Notice how the worker processes, although fed by incoming TCP/IP and producing outgoing TCP/IP, do not initiate that network I/O themselves. This is in part to give them a very consistent workload pattern as perceived by the operating system dispatcher, among other reasons.)
If all of what I am describing sounds like "a well-known task," you are entirely correct. There are, in fact, entire "application server" systems (e.g. JBoss, or IBM's legendary and still very reliable CICS system), and "workflow management" systems, which provide this plumbing for you. In the (for example) Perl programming world, there are systems like POE. And so on. Apparently thorny issues such as "how do I handle shared memory on thus-and-such system" are, believe it or not, abstracted away! (And very thoroughly solved.)
The admonitions to avoid "a large number of threads" and "widespread locks" and so on are sage advice. Heed them well. But also bear in mind that this is a very well-trod path with many highly successful high-volume systems to be found there. You don't have to invent something new.
So... "How would I design a...?" So to speak, "I wouldn't." Because, so to speak, I wouldn't have to. Not from scratch, anyway.
Last edited by sundialsvcs; 08-12-2012 at 09:30 PM.
Some people may interpret the goal of what you are asking as applying largely to users external to the machine on which shared access is to be implemented, where access might be via TCP/IP. They might interpret "concurrent" as "simultaneous", and "locks" as a literal term.
So perhaps you could give us more details about what you are trying to accomplish, especially the limitations, and how you are using terms. For example, are you using "concurrent" in the sense of multiple operations "in progress" over the same general time period, or to mean "simultaneous", taking place in the same instant? Even with a single CPU and a single-threaded controller for some resource, multiple operations can be in progress at the same time, but in such a case usually only one operation will be physically executing in any single instant. With multiple CPUs, multi-threaded controllers, or "dual-ported" or "multi-ported" memory, multiple operations can be physically executing simultaneously, in a single instant.
If the question is for a school thesis, then maybe it could be generally phrased as, "What's a good design for a shared-memory concurrency system?" On the other hand, if you are doing research to implement a special application for a professional project and just want very little delay in accessing shared memory, you might be able to use a so-called "real-time" form of Linux.
But in any case, the idea that readers and writers of shared memory "should not be allowed to obtain locks and block large number of threads", may be something of a problem. Are you using the term lock to make a distinction such as, a mutex lock is not OK, but a semaphore could be OK? Or are you instead using the term lock just as a general term to mean that one reader or writer of shared memory, cannot prevent access by another reader or writer, even temporarily? Or to mean something else altogether?
Usually there would have to be some way of coordinating shared access to a single physical resource, like a single collection of memory locations. So it's one thing if the requirement is only that threads not block, that they keep running at the OS level while waiting to access shared memory, as long as they have something else to do while waiting. If that's what you need, you might want to take a look at the Linux man page for pthread_mutex_trylock.
But if you truly have a requirement for no delays in accessing shared memory, that's a different problem.