Create an error table? Finding strings and counting... in bash
I have a script that I wrote that searches an error log file for known errors, counts them, and then displays statistics at the end. However, it runs slow as molasses. I use grep and two loops to go through everything.
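Roughly, that pattern looks like the hypothetical sketch below (not the actual script, which isn't shown; the second error string is a placeholder). Forking a grep for every line/error pair is what makes it crawl. Code:
#!/bin/sh
# Hypothetical sketch of the slow pattern described: two loops and one
# grep per line/error pair, so a big log forks thousands of grep processes.
while read -r line; do
    for err in "No valid sum" "another known error"; do
        echo "$line" | grep -q "$err" && echo "$err"
    done
done < logfile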
Here is an example of the file: Code:
04/02/08:20:16:57 - y:\logs: 04/02/08 20:16:57.300 - No valid sum
Here is my script (only the opening of its okerror() function survives here): Code:
okerror(
And here is the output it prints: Code:
OK Errors: Thanks, Eric |
I think a perl/python hash-table-based solution would probably be faster, but this might be fast enough: I got through 35,000 lines in 0.7 seconds (just your sample file duplicated). The only thing that annoys me is the need for a temp file; if only tee could send a copy to another process...
I assumed that the "-" is a delimiter; if it shows up in the error messages or the times/locations, this won't work. Code:
#!/bin/sh |
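Only the first line of that script survived above. A minimal sketch consistent with the cut line quoted in the next post (the counts temp-file name comes from that quote; the summary/total step is an assumption based on the temp-file remark): Code:
#!/bin/sh
# Pull out the third "-"-delimited field (the error message), group
# identical messages with sort, and count each distinct one with uniq -c.
cut -d- -f3 logfile | sort | uniq -c > counts

# The temp file lets the counts be read twice: once for the per-error
# table, once to total them up (assumed summary step).
cat counts
awk '{sum += $1} END {print "Total errors:", sum}' counts
rm -f counts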
WOW! This is super-slick.
Can you explain what this line does a little? Code:
cut -d- -f3 logfile | sort | uniq -c > counts
cut extracts the third "-"-delimited field from each line of the logfile (the error message after the second dash), sort groups identical messages together, and uniq -c counts each distinct one, writing the results to counts. Very nice. It helps to know about these GNU utilities. So much for the crap I wrote. Thanks! |
You can always run each part of the pipeline separately to see what it does: Code:
~/tmp$ cut -d- -f3 logfile |
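For example, with a logfile holding two copies of the sample line from the first post, the stages would look roughly like this (hypothetical session; note that cut keeps the field's leading space): Code:
~/tmp$ cut -d- -f3 logfile
 No valid sum
 No valid sum
~/tmp$ cut -d- -f3 logfile | sort | uniq -c
      2  No valid sum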