[pthread] Is it necessary to lock the mutex BEFORE using the condition variable?
Is the following reasoning correct? If not, then please correct it.
Quote:
The short answer is 'no'. The less short answer is 'like many things, it depends'.
The long answer, though is filled with considerations about requirements. Do you _need_ to keep in sync on shared 'x'? If so, you'll need some synchronization primitive around it, although where that goes, what it is, and how you use it are all dependent on the access scheme you decide on (look into lock free data structures if you want some more information on synchronization without using pthread mutexes). If you don't really require the synchronization, you don't need to use a sync primitive. If you do require sync, you do need one. It's as simple as that. If your condition needs to have a synchronized "test-and-set" - you need a synchronized test and set. |
If I understand correctly, what you are asking is this:
- 3 threads are in the wait state, waiting until the condition becomes true
- thread #4 doesn't wait, but checks the condition and accesses the file if the condition is true.

Well, in that case, thread 4 must invalidate the condition before writing. If, in the meantime, threads 1-3 have already started, thread 4 will not be able to invalidate the condition, and hence will wait until it can. Maybe the error in your reasoning is that you forget that you must not just wait until a condition becomes true; you must also take a mutex before you access a shared resource. The good news is that testing and setting a mutex is atomic: it is not possible that you test a mutex, get a valid return value, and then get interrupted by another task before you can set the mutex. Maybe this helps: https://computing.llnl.gov/tutorials...#MutexOverview

jlinkels
Quote:
I also meant the same. Quote:
Quote:
The only place where there is no mutex is the point from which we "send" the signal. When a thread enters the critical region and reaches the wait API (starts waiting), the wait API unlocks the mutex again. So the fourth thread gets into the critical region, the condition becomes true, so the 4th thread doesn't wait and straight away starts writing to the shared x. Now the signal is sent and the wait API frees the 3 sleeping threads, and there is a race condition between those 3 threads and the 4th one. Is all this correct?
Quote:
Mutexes are NOT multi-threading.
I don't really understand your code, but I can tell it is wrong. Either don't use synchronisation at all, or use it correctly:
Code:
if (condition) {
Agreed.
You cannot "know if a condition is true-or-false" if the variable you are testing is designed to be protected by a mutex: you do not know whether the value is stable... you are "racing" on that variable.

Having said that... sometimes, for efficiency's sake (i.e. to minimize the number of trips through "atomic" mutex code), you do elect to accept a certain amount of racing. Kind of like the person who zips past a line of patiently-waiting traffic on the highway and darts on ahead. Sure, such a thing might :mad: royally piss-off :mad: you or me, but a computer's case might be different.

There are two different purposes for mutual exclusion:
In the second case, I use mutual-exclusion primitives to allow a process to wait until "it might have something to do" (that is, to make it wait when it "definitely does not have anything to do"). The process then wakes up, sees if it has anything to do, does everything it finds, and goes back to sleep. What are the odds that it will instantly wake up again (because of some signal posted while it was working through its list), check, and find that this time it has nothing more to do? Very likely... and who cares? We don't care about that. What we do care about is indefinite postponement: the process having work to do but sleeping on the job. So, don't worry if it wakes up a little too often; do worry that it might not wake up often enough, and see that it does not gobble up CPU time in the act of doing nothing.
I have the idea you are mixing up conditions, wait-for conditions and mutexes. You should not do that; instead, you should program threads as straightforwardly as possible. That means:
- use mutexes around accesses to shared resources, e.g. variables, files, records etc.
- use wait conditions if you want to wait for something to happen before you can proceed.

You can mix the two — using a wait condition to wait for some flag to set or unset while you access a shared resource — but that is usually incorrect and can lead to the most difficult, unpredictable and random errors. If more than one thread can access a resource, and the resource is being accessed and correctly protected by a mutex, the next task will sleep until the mutex becomes available again, so you don't have to wait for a condition. If you have designed your program such that a race condition can arise when some task doesn't wait and proceeds directly to processing, your design is flawed.

jlinkels
Quote:
1) The only way
2) The most efficient way.

Again, look at things like tcmalloc that provide guaranteed, synchronized, thread-safe access to shared data structures without using mutexes.
Quote:
Quote:
This all comes with the caveat that you need to know the design of the system end-to-end. So in the generic case, where one must code to all platforms and all permutations of memory model, I'll concede that mutexes are the only way to do test-and-sets. Even then, for open source, one can rely on gcc's compare-and-exchange intrinsics; even Microsoft includes InterlockedExchange. I've come from enough projects where people over-use mutexes and break performance requirements that I try to get people to think about what they need the mutex to protect, rather than the blanket statement of "oh, it's shared, so mutex it."
Quote:
The mutex functions in the pthread library are not the only way. But a mutex mechanism as such — locking access to a certain resource before actually accessing it — is fundamental to multitasking/multithreading operating systems. This was discussed by Edsger Dijkstra in 1964 and is still valid. There are various ways to implement the mechanism, varying from very low-level, down at the bytes, to high-level abstractions. Since the pthread library comes with overhead, it is not necessarily the most efficient in execution time. There is always a trade-off between execution time and abstraction.
Quote:
According to some, one context of execution is better than using threads at all. But that is not the discussion. Quote:
Quote:
jlinkels |
Well, the OP mentioned shared files, so the serialization cannot be substituted with 'TS' (test-and-set), 'CS' (compare-and-swap) or 'CMPXCHG' instructions, but it is true that in some cases they can be very useful.
We've hijacked Anish's thread for long enough, so I'm only going to post one more time to this.
The notion that orgcandman is referring to is based on the fact that the actual probability of interference with any particular queue operation is very slight indeed... "slight" enough to be dealt with using what is basically a busy-wait loop. The approach is taken in a well-understood edge case in which so many queue operations are performed per second that the overhead of system calls is undesirable and only "slightly" necessary.
Microprocessors do implement "atomic" memory operations, e.g. the LOCK prefix of the x86 architecture, and libraries expose subroutines that employ them. The concept and the implementation are quite similar to, but not exactly like, those of a "spin lock." In the old IBM mainframe architectures these instructions were called compare and swap, and I mention that because I think the name is quite descriptive: in one atomic operation, compare the value in a memory location to what is expected and, if it matches, replace it with something else (otherwise don't). The instruction is executed in such a way that, even on a multi-CPU system, the instruction... which IIRC is non-privileged... will execute correctly on any of them. The notion of using this to maintain a shared queue is so straightforward that many programming textbooks include it... yet, at one time, IBM actually patented it. :rolleyes:
This is my understanding now:
It is necessary to lock the mutex *before* attempting to make the threads wait. Example scenario:
- Thread A wants to do something once a variable `count` is non-zero, in `functionA`.
- Thread B will signal when it increments the variable `count` (setting `count` to something other than zero), in `functionB`.

Since one mutex cannot be locked by more than one thread at a time, it is possible that ThreadB increments `count` and sends the signal /before/ ThreadA waits on the condition variable. ThreadA, unaware that the signal has already been sent, would then wait on the condition variable forever. A `pthread_cond_signal()` doesn't 'persist': if no threads are waiting on the condition variable declared by `pthread_cond_t`, the signal is lost and cannot be recovered. So the mutex lock, when acquired by ThreadA, prevents ThreadB from incrementing `count` (and sending the signal) until ThreadA waits on the condition variable (which then automatically releases the lock for ThreadB to acquire).