In my experience, tools like Nagios excel as operational monitors.
If you want to do more intensive analysis of log-file data, I suggest approaching the task as a true statistics project. SO carried this interesting and detailed forum post on Logfile analysis in 'R'. Many more such articles await your search.
In many cases, I have been most successful by attaching application-specific instrumentation to a process, usually arranging for it to write to a pipe that is quickly pumped by another process into a set of static files. For example, a workflow-management system might write "event" records at key points in the flow. First-stage analysis tools then assimilate these records into "wide" records that capture all of the salient data about each work-unit.

Subsequent analysis is based on random samples taken from this dataset, and it is geared toward testing some specific hypothesis or objective. (For instance: "all class-B jobs should complete in less than four seconds, 95% of the time, and with a standard deviation of no more than 2 seconds." Pass/Fail: did this occur?)
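To make that pass/fail check concrete, here is a minimal sketch in R. The file name workunits.tsv and the column names job_class and elapsed_sec are assumptions for illustration, not taken from any particular system.

```r
# Minimal sketch: test whether class-B jobs meet the stated objective,
# working from a random sample of the "wide" work-unit records.
wide <- read.delim("workunits.tsv")            # hypothetical export of wide records

classB <- subset(wide, job_class == "B")
idx    <- sample(nrow(classB), size = min(1000, nrow(classB)))  # random sample
s      <- classB[idx, ]

p95 <- unname(quantile(s$elapsed_sec, 0.95))   # 95th-percentile completion time
sdv <- sd(s$elapsed_sec)                       # spread of completion times

pass <- (p95 < 4) && (sdv <= 2)
cat(sprintf("95th pct: %.2fs  sd: %.2fs  => %s\n",
            p95, sdv, if (pass) "PASS" else "FAIL"))
```

The point is not the particular thresholds but the shape of the test: a narrow, pre-stated question answered from a sample, rather than open-ended browsing of the raw logs.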
Although "R" has modest data-capacity relative to some other tools, the fact that it is a true
programming language gives it powerful flexibility for such investigations.
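Because the accumulated event files can be far larger than what R holds comfortably, one hedged way to stay within its capacity is to sample lines while streaming the file, before any analysis begins. Again, the file name and the 1% sampling rate are assumptions.

```r
# Sketch: stream a large flat file of wide records and keep roughly 1% of the
# lines, so the working set handed to R stays small.
con  <- file("workunits.tsv", open = "r")
hdr  <- readLines(con, n = 1)                  # keep the header row
keep <- character(0)
repeat {
  chunk <- readLines(con, n = 10000)           # read in chunks, not all at once
  if (length(chunk) == 0) break
  keep <- c(keep, chunk[runif(length(chunk)) < 0.01])
}
close(con)

sampled <- read.delim(text = paste(c(hdr, keep), collapse = "\n"))
```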