Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I have a Zabbix monitoring server, and I am executing the following line each minute on each client:
Code:
wc -l /proc/net/ip_conntrack
This is a piece of cake on most servers, but on one server it takes about 13 seconds, as it has about 40,000 (legit) connections open.
I just want to count the lines to get the number of connections. "wc -l" does more than I want it to, and I'm hoping a program that only counts lines, and does nothing else, can do it faster.
I am not good with C. I used it 20 years ago to manipulate strings when things became too slow in the native language (Clipper). I did write some nice things in C at the time, so it somewhat puzzles me that I can't do it now (maybe I should try harder).
I tried to alter this program, which does a bit more than just count lines, but didn't succeed (how embarrassing).
Could someone take a look at it?
I was thinking of calling it 'lc', and it should only return the number of '\n' characters.
Hopefully the binary is faster than 13 seconds...
Well I am not sure of the speed comparison, but how does something like this compare:
Code:
grep -c . /proc/net/ip_conntrack
The upside is that blank lines will be skipped; the downside is that a line containing only whitespace will still be counted. There are ways to counter this, but of course they may slow it down.
I did manage to alter that C program and was able to compile an 'lc', but was disappointed to see it perform this badly. I also benchmarked the 'grep'; it performed the same as wc -l.
Here's my patched 'wc' http://pastebin.com/KUM0EwnN
I compiled it with 'gcc lc.c -o lc'
Code:
# time ./lc /proc/net/ip_conntrack
38504 /proc/net/ip_conntrack
real 0m41.469s
user 0m0.152s
sys 0m38.682s
# time wc -l /proc/net/ip_conntrack
38059 /proc/net/ip_conntrack
real 0m10.162s
user 0m0.008s
sys 0m9.889s
# time grep -c . /proc/net/ip_conntrack
38192
real 0m10.115s
user 0m0.016s
sys 0m9.925s
wc has been developed over the years by clever people.
All the bugs were killed years ago.
I'm not suggesting at all that 'wc' has bugs or is written inefficiently.
wc can do much more than just count lines, and I assume it has some code in it for counting words instead of lines, and that it may execute a bit of code to test something that never changes.
For someone using C on a daily basis it should be a piece of cake to rewrite 'lc'.
BTW, the "wc" which I patched to make "lc" is not the one that is widely used. Maybe I should get hold of that one and try to modify it (take code out) to speed it up.
I think the key is to read a big chunk of data on each call to the library function (getc, fread), put it in a piece of static memory, and count the number of '\n' characters.
About improving code....
I can remember (20 years ago) speeding up a soundex() function for Clipper. They gave an example in assembly. I used my own algorithm for it and did it in C. Mine was 1000 times faster.
"cat" is super fast and awk will receive just one line to output only the number....anyway, is just a crazy idea to test.
Last edited by marozsas; 01-25-2011 at 10:34 AM.
Reason: I removed my test with /var/log/messages because I borked on copy-paste. sorry for that. never mind...
I just downloaded coreutils and took a look at the source of 'wc'.
They already have a separate loop for just counting lines, so I don't think it can be optimized that easily.
Maybe someone can still see some possibilities?
Can't it use a static piece of memory (a buffer) which is then parsed and counted?
memchr is a library function; does a library function have to be used at all?
Code:
/* Use a separate loop when counting only lines or lines and bytes --
   but not chars or words.  */
while ((bytes_read = safe_read (fd, buf, BUFFER_SIZE)) > 0)
  {
    char *p = buf;
    if (bytes_read == SAFE_READ_ERROR)
      {
        error (0, errno, "%s", file);
        ok = false;
        break;
      }
    while ((p = memchr (p, '\n', (buf + bytes_read) - p)))
      {
        ++p;
        ++lines;
      }
    bytes += bytes_read;
  }
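On the memchr question above: glibc's memchr scans a word (and on many targets a SIMD vector) at a time, so letting it find each '\n' is normally faster than testing every byte yourself, not slower. Here is the same counting loop pulled out into a standalone helper (my sketch, not coreutils code):

```c
#include <stddef.h>
#include <string.h>

/* Count '\n' bytes in buf[0..len), the way coreutils wc does it:
 * let memchr do the scanning instead of testing each byte ourselves. */
static size_t count_lines_memchr(const char *buf, size_t len)
{
    size_t lines = 0;
    const char *p = buf;
    const char *end = buf + len;

    while ((p = memchr(p, '\n', (size_t)(end - p))) != NULL) {
        ++p;          /* step past the newline we just found */
        ++lines;
    }
    return lines;
}
```

So dropping memchr in favour of a hand-written loop is more likely to hurt than help; the function exists precisely so the library can use the fastest scan the hardware offers.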
"cat" is super fast and awk will receive just one line to output only the number....anyway, is just a crazy idea to test.
I tested it, but "wc -l" is much faster...
Code:
# time cat -n /test.pl | tail -n1
1933524 14467 addresses are on the whitelist
real 0m0.318s
user 0m0.240s
sys 0m0.040s
# time wc -l /test.pl
1933523 /test.pl
real 0m0.093s
user 0m0.076s
sys 0m0.016s
I think you're right.
But isn't there also room for speed improvement by parsing the buffer in plain C (without calling a function in the C library)?
If parsing time is 0.152s that's the room for speed improvement...
Code:
# time /usr/bin/wc -l /1.3GB.txt
32128160 /1.3GB.txt
real 0m49.020s
user 0m0.588s
sys 0m4.624s
I downloaded coreutils and compiled that wc; it turned out to be slightly faster than the one that came with Ubuntu 10.04 LTS. I don't know why; it may even be due to the difference in version.
Code:
# gcc lc.c -o lc -O3
# time ./lc /1.3GB.txt
32128160 /1.3GB.txt
real 0m47.802s
user 0m15.565s
sys 0m2.012s
# time ./lc /1.3GB.txt
32128160 /1.3GB.txt
real 0m45.938s
user 0m15.649s
sys 0m2.008s
(your) lc compiled with default options
Code:
# gcc lc.c -o lc
# time ./lc /1.3GB.txt
32128160 /1.3GB.txt
real 0m54.932s
user 0m19.857s
sys 0m1.864s
But you're using the library function 'getc', which is possibly a slow library function (according to that webpage). It may of course be a problem with the implementation on his machine...
In the old days, when I was still writing C for fast functions (in comparison with native Clipper), I never used any library functions; afaik this was not possible. There were some functions meant for parameter passing and allocating memory, and I parsed the buffers in native C. I released these functions into the public domain, but this was before the Internet became popular; they were uploaded to my brother's BBS, which was part of FidoNet. I couldn't find any of my sources on the Internet...
Don't you think it's worthwhile to change the source of coreutils wc.c instead of the other one, which uses getc? wc.c uses another library function (memchr). That isn't needed, is it? Or doesn't it give you a speed improvement?
PS: I googled my name in combination with Clipper and did find these files (how funny):
http://members.fortunecity.com/userg...tml/summer.htm
Trudf.zip
Set of MS C source UDFs (w/OBJs) that total numeric elements of an array, test null expressions, pad character strings, SOUNDEX(), & strip non-alphanumeric characters from strings - by J van Melis
That's more than 20 years ago.
I wish I could get hold of that file....
One idea is, use the size of the file 'stat -c %s' as long as the file has a constant number of characters per line ... other than that you cannot get any faster than wc -l.
You can't use that trick for /proc/net/ip_conntrack.
It's only a pseudo-file, and 'stat -c %s' returns 0 (as I found out a while ago in another situation).
But I already made progress by modifying the buffer size in "wc.c" (didn't you see the results I posted?)
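For completeness, the 'stat -c %s' idea expressed in C (my sketch, with hypothetical names; it only works for fixed-width records, and as noted above, /proc pseudo-files report a size of 0, so it cannot replace counting for ip_conntrack):

```c
#include <sys/stat.h>

/* Estimate the line count as file_size / bytes_per_line.
 * Only correct when every line has the same known width.
 * /proc files report st_size == 0, so this returns 0 for them. */
static long count_lines_by_size(const char *path, long bytes_per_line)
{
    struct stat st;

    if (bytes_per_line <= 0 || stat(path, &st) != 0)
        return -1;                     /* bad argument or stat failure */
    return (long)(st.st_size / bytes_per_line);
}
```

On a regular file with fixed-width lines this is O(1): one stat syscall instead of reading the whole file.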
I'm currently in the process of obtaining my 25-year-old C sources.
These sources don't contain calls to library functions.
Hopefully it will all come back to me, and I may even pick up programming in C again.
I still think/hope there's some room for improvement.
I'll keep you posted (even if I don't succeed).