Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I have a program with 2 open sockets and multiple threads that either read, write, or read and write to these sockets. I suspect I need mutexes to provide mutual exclusion. Do I need to use a mutex on a thread that only reads from a socket, or only on threads that write to the socket?
I suspect I'd need to use a mutex in each case, but thought I'd ask here first. Thanks for any help.
Hi,
I think you will need mutexes for both, because multiple threads are going to read from the same socket and write to the same socket; both paths access the socket concurrently, so you will have to protect both reads and writes.
Hi, thanks for your reply. I made a mistake in my original post: only one thread will ever read from the socket, but multiple threads will write to it. I believe I only need a mutex around the write calls. If that's not correct, I'd appreciate any responses. Thanks.
Hi,
In that case you only need to implement a mutex for writing to the socket. Your assumption is correct: a mutex is only needed where multiple threads access the same code or resource concurrently; otherwise it is not needed.
Based on what you've said, I think you'll probably have to synchronize against all three threads. If this were a shared memory object, of course, you'd just need to synchronize the writer: all readers could run concurrently...
BTW:
You had an earlier post about using 300+ threads, and someone else mentioned the idea of "thread pooling". If your application uses many threads (say, in excess of a dozen or two), or constantly creates and destroys threads, a thread pool can be extremely beneficial. And it's often no big deal to implement your own thread queue: you don't necessarily need a framework or an API; you can often just "roll your own".
I like to have one thread be the broker for inbound requests, and another (one) thread be the broker for outbound. The inbound-thread reads a block, decides if it represents a new request or some information pertaining to an existing one, and places it on the appropriate queue .. then waits again. The outbound-thread simply waits for something to send, and sends it.
New requests are sent to a thread which is basically the master scheduler: it can refuse the request, place it on a deferred list, or place it on the active work list.
There is a pool of worker threads. Each one waits for a request it can handle (or contribute to), selects it, works on it, and places it either on a completed-work list or back on the active-work-in-progress heap.
At any time, there may be many more requests outstanding than there are threads working on them. In fact, this is expected. So we keep running statistics that let us measure how long each request is having to wait, what backlogs are building up on the various queues and so-on.
What this gives you is an architecture that responds gracefully under load. The system does not demand more than the hardware can deliver. As load increases, service times do decline but they do not "hit the wall and die."
For large systems, what I've basically described is called a "transaction processing monitor." Commercial and open-source examples are numerous; one of the first, CICS, was designed in the 1960s by IBM.