LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   Can two threads safely work on a linked list without locking? (https://www.linuxquestions.org/questions/programming-9/can-two-threads-safely-work-on-a-linked-list-without-locking-843309/)

yaplej 11-09-2010 03:46 PM

Can two threads safely work on a linked list without locking?
 
Hey,

I have a linked list that two threads work on simultaneously. The first thread adds elements to the end of the linked list while the second removes them from the front. Can this be done without a lock on the linked list head when elements are being added/removed?

I think this lock is causing a performance hit to my application. If there isn't any safe way to do it without the lock, then that's fine, but I thought I would check.

The first thread uses this function to add elements to the list. Full source here.
http://opennop.svn.sourceforge.net/v...h?&view=markup

Code:

/* Lets add the new packet to the queue. */
pthread_mutex_lock(&workers[queuenum].queue.lock); // Grab lock on queue.
               
if (workers[queuenum].queue.qlen == 0){ // Check if any packets are in the queue.
        workers[queuenum].queue.next = newpacket; // Queue next will point to the new packet.
        workers[queuenum].queue.prev = newpacket; // Queue prev will point to the new packet.
}
else{
        newpacket->prev = workers[queuenum].queue.prev; // Packet prev will point at the last packet in the queue.
        newpacket->prev->next = newpacket;
        workers[queuenum].queue.prev = newpacket; // Make this new packet the last packet in the queue.
}
workers[queuenum].queue.qlen += 1; // Need to increase the packet count in this queue.
pthread_cond_signal(&workers[queuenum].signal);
pthread_mutex_unlock(&workers[queuenum].queue.lock); // Lose lock on queue.

After looking at it again, maybe reversing the order of these last two lines might help a little too, but removing the lock, if I can, would be best.
Code:

pthread_cond_signal(&workers[queuenum].signal);
pthread_mutex_unlock(&workers[queuenum].queue.lock); // Lose lock on queue.

Here is the important section for the second thread. The full source is here. http://opennop.svn.sourceforge.net/v...h?&view=markup

Code:

/* Lets get the next packet from the queue. */
pthread_mutex_lock(&me->queue.lock);  // Grab lock on the queue.

if (me->queue.qlen == 0){ // If there is no work wait for some.

        pthread_cond_wait(&me->signal, &me->queue.lock);
}

if (me->queue.next != NULL){ // Make sure there is work.

        if (DEBUG_WORKER == TRUE){

                        sprintf(message, "Worker: Queue has %d packets!\n", me->queue.qlen);

                        logger(message);

        }
        thispacket = me->queue.next; // Get the next packet in the queue.
        me->queue.next = thispacket->next; // Move the next packet forward in the queue.
        me->queue.qlen -= 1; // Need to decrease the packet count on this queue.
        thispacket->next = NULL;
        thispacket->prev = NULL;
}
pthread_mutex_unlock(&me->queue.lock);  // Lose lock on the queue.

Thanks!

JohnGraham 11-09-2010 06:02 PM

Quote:

Originally Posted by yaplej (Post 4154049)
I think this lock is causing a performance hit to my application.

Best check this out before you waste your time optimising something that's actually not that much of a performance hit.

One primitive way of telling if it is would be to use pthread_mutex_trylock() instead of the blocking ..._lock() call, put it in a simple loop, and see how many times you have to trylock() before you get it. If it's not very often, look elsewhere.
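
Something like this untested sketch (reusing the lock and the logger() from your own snippets) would give you a rough number. It spins, so it's only good for a quick measurement, not production code:
Code:

unsigned long tries = 1; /* how many trylock() attempts before we got the lock */

while (pthread_mutex_trylock(&workers[queuenum].queue.lock) != 0)
        tries++;

/* ... add the packet to the queue exactly as before ... */

pthread_cond_signal(&workers[queuenum].signal);
pthread_mutex_unlock(&workers[queuenum].queue.lock);

if (tries > 1){ /* lock was contended this time around */
        sprintf(message, "Add: needed %lu trylock attempts\n", tries);
        logger(message);
}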

Or, of course, you could use a profiler...

wje_lq 11-09-2010 07:30 PM

Before spending too much time on the performance question, it might be good to focus on the original question: is it safe? Well, no. No, it isn't.

Even the act of incrementing or decrementing the number of items in the list isn't safe without a lock.
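
If you want to see why, here's a little stand-alone example (mine, not from your code): qlen += 1 is really a load, an add and a store, and two threads doing it at once will step on each other. Build it with gcc -pthread and the total will almost always come up short of 2000000:
Code:

#include <pthread.h>
#include <stdio.h>

/* volatile only stops the compiler from collapsing the loop;
   it does NOT make the increment atomic. */
static volatile long count = 0;

static void *bump(void *arg)
{
        long i;

        for (i = 0; i < 1000000; i++)
                count += 1;     /* unlocked load / add / store */
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("count = %ld (expected 2000000)\n", count);
        return 0;
}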

yaplej 11-09-2010 07:40 PM

That's what I thought, but I wanted to make sure. There might be some improvement from changing the order of those two lines, so a sleeping thread is not woken up while there is still a lock on the queue. That would only help if the thread were actually sleeping, though.

rupertwh 11-09-2010 08:26 PM

Quote:

Originally Posted by yaplej (Post 4154188)
There might be some improvement by changing the order of those two lines ...

No, that would be deadly. You must have the mutex locked while signaling the condition.

But what you do inside the respective locks might be optimized/reduced. From the excerpts you posted,
  1. qlen seems superfluous. Why would you care about the length of the queue?
  2. same with the prev links. You seem to be using them only as a pointer to the last packet anyway, so why not have just one 'last' pointer in the queue?

So, not knowing the rest of your code, the following (untested!) code might be slightly more efficient:
Code:

/* add packet to the queue. */
pthread_mutex_lock(&workers[queuenum].queue.lock);
               
if (workers[queuenum].queue.next == NULL)
        workers[queuenum].queue.next = newpacket;
else
        workers[queuenum].queue.last->next = newpacket;

workers[queuenum].queue.last = newpacket;

pthread_cond_signal(&workers[queuenum].signal);
pthread_mutex_unlock(&workers[queuenum].queue.lock);



/* remove packet from queue. */
pthread_mutex_lock(&me->queue.lock); 

while (me->queue.next == NULL)  /* 'while', not 'if': re-check after waking (spurious wakeups) */
    pthread_cond_wait(&me->signal, &me->queue.lock);

thispacket = me->queue.next;
me->queue.next = thispacket->next;
if (me->queue.next == NULL)     /* queue is now empty, so clear the tail pointer too */
    me->queue.last = NULL;

pthread_mutex_unlock(&me->queue.lock);


yaplej 11-09-2010 10:54 PM

Thanks for the ideas. You're right, I don't really care about the qlen or about being able to traverse the list in reverse order, so removing those might let me decrease the time the thread holds the lock. The part that probably takes the longest is incrementing/decrementing the qlen counter anyway, so removing that might improve the speed quite a bit.

JohnGraham 11-10-2010 04:55 AM

Quote:

Originally Posted by yaplej (Post 4154336)
Thanks for the ideas. You're right, I don't really care about the qlen or about being able to traverse the list in reverse order, so removing those might let me decrease the time the thread holds the lock. The part that probably takes the longest is incrementing/decrementing the qlen counter anyway, so removing that might improve the speed quite a bit.

It really sounds like you're optimising prematurely. It's nice to think that we can suss out which bits of code are taking up the most time "by instinct" and know what to optimise, but in practice programmers (even the best ones) are generally quite bad at "feeling out" bottlenecks by instinct alone.

Use a profiler or some other method to see if there's actually a bottleneck where you think there is. Otherwise, you'll just be using up your precious time to deliberately obfuscate your code, and that would just be silly-silly-silly.

wje_lq 11-10-2010 09:10 AM

A more whimsical approach would be these optimization rules:
  1. Do not optimize.
  2. Do not optimize yet (experts only).

rupertwh 11-11-2010 06:18 AM

Quote:

Originally Posted by wje_lq (Post 4154803)
  1. Do not optimize.
  2. Do not optimize yet (experts only).

I wouldn't agree with that, but I suspect that's more down to a difference in the definition of "optimize" than to a disagreement about what is good or bad coding practice. If by "optimize" you mean "neat" tricks, uglification or anything else that makes the code harder to read, I'm with you.

Having said that, back to the original question: Yes, there is a (somewhat ugly) way to work on a linked list without locking. Does it make sense? Probably not. But it can be done.

Well, not completely without locking. The writer still has to acquire the mutex for every change, but the reader can safely remove packets from the queue without locking, if
  1. it's a simple singly linked list,
  2. no size-of-list counter has to be adjusted,
  3. there are at least two packets in the queue. Removing the last packet still requires a lock.
This requires reading/writing of pointers to be atomic. Which they probably are -- from the glibc manual:
Quote:

In practice, you can assume that int and other integer types no longer than int are atomic. You can also assume that pointer types are atomic; that is very convenient. Both of these are true on all of the machines that the GNU C library supports, and on all POSIX systems we know of.
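
For what it's worth, here is a very rough (untested!) sketch of what the reader side of that would look like, sticking to the names from the earlier snippets and assuming a single reader thread:
Code:

/* With two or more packets in the queue the writer only ever touches the
   far end of the list (last and last->next), so popping the head cannot
   collide with it and needs no lock -- provided pointer loads/stores are
   atomic, as the glibc manual quoted above suggests. */
if (me->queue.next != NULL && me->queue.next->next != NULL) {
    thispacket = me->queue.next;
    me->queue.next = thispacket->next;
}
else {
    /* Zero or one packet left: reader and writer could now touch the same
       node and the head/last pointers, so fall back to the mutex. */
    pthread_mutex_lock(&me->queue.lock);

    while (me->queue.next == NULL)
        pthread_cond_wait(&me->signal, &me->queue.lock);

    thispacket = me->queue.next;
    me->queue.next = thispacket->next;
    if (me->queue.next == NULL)
        me->queue.last = NULL;

    pthread_mutex_unlock(&me->queue.lock);
}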

But I would strongly recommend against doing that as
  1. it's unlikely to deliver a performance gain that would be the damn good reason you'd need to excuse such uglification
  2. having a size-counter (contrary to what I said in an earlier post) is probably much more valuable as it allows you to keep memory allocation in check

