LinuxQuestions.org > Forums > Non-*NIX Forums > Programming
Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Old 04-16-2010, 07:10 PM   #1
yaplej
Member
 
Registered: Apr 2009
Distribution: CentOS, Ubuntu, openSuSE
Posts: 165
Blog Entries: 1

Rep: Reputation: 22
C Improve function efficiency


Hi all,

I have this function that runs pretty much forever, until the application is shut down, and once it starts running the CPU usage goes through the roof. Is there any way to prevent this? It basically loops waiting for work: work is assigned to me->queue and qlen is increased, and when the loop sees qlen greater than 0 it picks up the next piece of data and processes it.

It works great except the high CPU usage.

Code:
void *packetsqueezer_function (void *dummyPtr)
{
	struct worker *me;
	struct packet *thispacket;
	struct iphdr *iph;
	me = dummyPtr;
	thispacket = NULL; // Initialize pointer for the queued packet.
	iph = NULL; // Initialize pointer for the IP header.
	
	while (me->state >=0) {
		
		/* Let's get the next packet from the queue. */
		pthread_mutex_lock(&me->queue.lock);  // Grab lock on the queue.
		if (me->queue.qlen > 0){ // Only get a packet if there is one.
			printf("Queue has %d packets!\n", me->queue.qlen);
			thispacket = me->queue.next; // Get the next packet in the queue.
			me->queue.next = thispacket->next; // Move the next packet forward in the queue.
			me->queue.qlen -= 1; // Need to decrease the packet count on this queue.
			thispacket->next = NULL;
			thispacket->prev = NULL;
		}
		pthread_mutex_unlock(&me->queue.lock);  // Lose lock on the queue.
		
		if (thispacket != NULL){ // If a packet was taken from the queue.
			iph = (struct iphdr *)thispacket->data;
			printf("ip_length=%u\n",ntohs(iph->tot_len));
			nfq_set_verdict(thispacket->hq, thispacket->id, NF_ACCEPT, ntohs(iph->tot_len), thispacket->data);
			free(thispacket);
			thispacket = NULL;
			
		}
		//sleep(1); 
	}
	return 0;
}
 
Old 04-16-2010, 07:35 PM   #2
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
The obvious crude solution is to put the sleep(1) back in. If you do that, you should at least make it
else sleep(1);
rather than just sleep(1);, so the sleep only happens when the queue is empty and work can't back up.

The sleep does an imperfect job of eliminating the wasted CPU time and introduces some latency that might be annoying, so it might be worth the effort to solve the problem "correctly". But it solves most of the problem with no significant programming effort, so maybe that is preferable to solving it "correctly".

I don't recall enough about the primitives you can block on to give detailed advice on coding it "correctly". That would also probably require looking at the producer side of the queue to know what changes are needed there. The crude solution can be done within just the code you quoted.

Last edited by johnsfine; 04-16-2010 at 07:38 PM.
 
Old 04-16-2010, 09:01 PM   #3
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by johnsfine View Post
The obvious crude solution is to put the sleep(1) back in. [...]
Maybe

man 2 sched_yield

?
 
Old 04-16-2010, 10:04 PM   #4
yaplej
Member
 
Registered: Apr 2009
Distribution: CentOS, Ubuntu, openSuSE
Posts: 165

Original Poster
Blog Entries: 1

Rep: Reputation: 22
I am trying to figure out why this line of code does not cause high CPU usage.

while ((rv = recv(fd, buf, sizeof(buf), 0)) && rv >= 0)
https://git.netfilter.org/cgi-bin/gi...b;hb=HEAD#l118

Last edited by yaplej; 04-16-2010 at 10:58 PM.
 
Old 04-17-2010, 07:03 AM   #5
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by yaplej View Post
I am trying to figure out why this line of code does not cause high CPU usage.

while ((rv = recv(fd, buf, sizeof(buf), 0)) && rv >= 0)
https://git.netfilter.org/cgi-bin/gi...b;hb=HEAD#l118
I didn't add the security exception needed to follow your link, so I'm just looking at the snippet you quoted.

I assume fd is the read end of a pipe or socket. So the question is what the recv call does when it is empty:
Return 0, causing the quoted loop to consume lots of CPU time?
Block inside recv, consuming no excess CPU time?
Return a negative error code, so that whether excess CPU time is consumed depends on what is done outside the quoted while?

I've used pipes in too many systems in too many ways to remember which is the typical Linux behavior, or which choices made when creating or opening the descriptor determine the answer. The documentation for whatever functions you used to create and open it should give you that answer.
 
Old 04-17-2010, 04:49 PM   #6
yaplej
Member
 
Registered: Apr 2009
Distribution: CentOS, Ubuntu, openSuSE
Posts: 165

Original Poster
Blog Entries: 1

Rep: Reputation: 22
I think I'll add another if (me->queue.qlen > 0){ check before the mutex lock, with an else { sched_yield(); }. That should slow it down when the queue is empty. I don't know how efficient it is, but it's got to be better than what it's doing now.
 
Old 04-17-2010, 06:34 PM   #7
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
For this purpose, sched_yield will make barely any difference.

Is sleep(1) really too much latency?
 
Old 04-17-2010, 08:39 PM   #8
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,784

Rep: Reputation: 2083
Quote:
I don't happen to recall enough about programming things you can wait for to give you detailed advice on how to code it "correctly".
The "correct" way would be to wait on a condition variable that the source side signals when adding an item.

Waiting and Signaling on Condition Variables
 
Old 04-17-2010, 10:39 PM   #9
yaplej
Member
 
Registered: Apr 2009
Distribution: CentOS, Ubuntu, openSuSE
Posts: 165

Original Poster
Blog Entries: 1

Rep: Reputation: 22
That is exactly what I am looking for. I'll post back how it works.
 
Old 04-18-2010, 07:02 PM   #10
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
Quote:
Originally Posted by ntubski View Post
The "correct" way would be to wait on a condition variable that the source side signals when adding an item.

Waiting and Signaling on Condition Variables
True for multiple threads.

For multiple processes you would have to use a signal (see man 7 signal): the process originating the work would signal the process that handles it whenever there is work to be done.

edit:

Never mind. I reread the first post and saw that the work arrives at a location internal to the application. Hence multiple threads and POSIX condition signaling are definitely the way to go.

Last edited by jiml8; 04-18-2010 at 07:03 PM.
 
Old 04-19-2010, 12:43 PM   #11
yaplej
Member
 
Registered: Apr 2009
Distribution: CentOS, Ubuntu, openSuSE
Posts: 165

Original Poster
Blog Entries: 1

Rep: Reputation: 22
Once I made the thread wait when there is no work, the high CPU usage was fixed. Everything runs much smoother now.

Thanks everyone.


edit:

Here is the code in case anyone wants to see it.

Code:
void *worker_function (void *dummyPtr)
{
	struct worker *me;
	struct packet *thispacket;
	struct iphdr *iph;
	me = dummyPtr;
	thispacket = NULL; // Initialize pointer for the queued packet.
	iph = NULL; // Initialize pointer for the IP header.
	
	while (me->state >=0) {
		
		/* Let's get the next packet from the queue. */
		pthread_mutex_lock(&me->queue.lock);  // Grab lock on the queue.
		while (me->queue.qlen == 0){ // If there is no work, wait for some. A while
			// (not if) re-checks the condition, guarding against spurious wakeups.
			pthread_cond_wait(&me->signal, &me->queue.lock);
		}
		
		if (me->queue.next != NULL){ // Make sure there is work.
			printf("Queue has %d packets!\n", me->queue.qlen);
			thispacket = me->queue.next; // Get the next packet in the queue.
			me->queue.next = thispacket->next; // Move the next packet forward in the queue.
			me->queue.qlen -= 1; // Need to decrease the packet count on this queue.
			thispacket->next = NULL;
			thispacket->prev = NULL;
		}
		
		pthread_mutex_unlock(&me->queue.lock);  // Lose lock on the queue.
		
		if (thispacket != NULL){ // If a packet was taken from the queue.
			iph = (struct iphdr *)thispacket->data;
			printf("ip_length=%u\n",ntohs(iph->tot_len));
			nfq_set_verdict(thispacket->hq, thispacket->id, NF_ACCEPT, ntohs(iph->tot_len), thispacket->data);
			free(thispacket);
			thispacket = NULL;
			
		}
	}
	return 0;
}

Last edited by yaplej; 04-19-2010 at 01:26 PM.
 
  

