LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   OpenSource Log Analyzers ? Suggestions? (https://www.linuxquestions.org/questions/linux-general-1/opensource-log-analyzers-suggestions-4175619835/)

ranmanh 12-18-2017 01:25 PM

OpenSource Log Analyzers ? Suggestions?
 
Hi

I have a number of Linux servers running syslog-ng, sending all their logs to an rsyslog server where all the log data is stored.

Now I am looking into some sort of scalable OpenSource solution which can capture all that data for analysis.

For alerting purposes I have thought of Sensu, as it's scalable and I have some previous experience with it, but I am still not sure about the middle man.

I have found Graylog, but I am not sure whether it would be a good solution or not.
Note: I do not want to touch the rsyslog server; at most I would set up an agent that sends the logs on to the log-analyzer server.
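For reference, forwarding from an agent without touching the central rsyslog server could look roughly like this in syslog-ng (the hostname, port, and source name below are placeholders, not details from this thread):

```
# /etc/syslog-ng/conf.d/forward.conf -- hypothetical agent-side forwarding
# "loganalyzer.example.com" and port 5140 are placeholder values.
destination d_loganalyzer {
    syslog("loganalyzer.example.com" transport("tcp") port(5140));
};
log {
    source(s_src);              # the default local source from syslog-ng.conf
    destination(d_loganalyzer); # ship a copy to the analyzer
};
```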

Based on your experience, are there any recommendations I can follow in my research?

Many thanks

frankbell 12-18-2017 08:09 PM

Nagios comes to mind. Log analysis is one of the services it offers.

A web search for open source log analyzers turns up a number of articles.

Habitual 12-19-2017 10:40 AM

ELK.

ranmanh 01-04-2018 02:16 AM

Hi Guys,

Very much appreciated for your suggestions.

I had a look at everything and spent some "quality time" with those + others.

In the end I have decided to go for Graylog2.
Main Reasons:

* I need filtering + some alerting. Graylog2 has both.
* As Habitual suggested: ELK. Graylog2 has the main ELK components + the alerting, and I do not have to deal with licensing for security + other features. Nevertheless ElasticSearch is running underneath, so it's the same principle.
* After a bit of a fight, I got GROK splitting the syslog lines into meaningful chunks so I can filter and alert down to the string level. (It wasn't too hard there...)
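For anyone following the same path, a grok extractor for plain syslog lines can be fairly short. A sketch using the standard base grok patterns (the field names `source_host` and `msg` are just illustrative choices):

```
# Hypothetical grok pattern for a classic syslog line such as:
#   Dec 18 13:25:01 web01 sshd[1042]: Accepted publickey for deploy
%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:source_host} %{SYSLOGPROG}: %{GREEDYDATA:msg}
```

`%{SYSLOGPROG}` expands into the program name and PID fields, so the pattern yields separate timestamp, host, program, and message fields to filter and alert on.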


I am with frankbell on Nagios. That's the most common and probably the most down-to-earth one. Even using Graylog2, I am thinking about how to also use Nagios to fill in all those missing pieces of information I still need.

Many thanks to everyone; I wanted to get back to you with the outcome of my research and all your suggestions.

sundialsvcs 01-04-2018 10:22 AM

In my experience, tools like Nagios excel at being operational monitors.

If you want to do more intensive analysis of log-file data, I suggest that the best way to approach the task is to do it as a true statistics project. SO carried this interesting and detailed forum-post on Logfile analysis in 'R'. Many more such articles await your search.

In many cases, I have been most successful by attaching application-specific instrumentation to a process, usually arranging for it to write to a pipe that is quickly pumped by another process into a set of static files. For example, a workflow-management system might write "event" records at key points in the flow. First-stage analysis tools then assimilate these records into "wide" records that capture all of the salient data about each work-unit. Subsequent analysis is based on random samples taken from this dataset, and it is geared toward "testing some specific hypothesis, or objective." (For instance: "all class-B jobs should complete in less than four seconds, 95% of the time, and with a standard deviation of no more than 2." Pass/Fail: did this occur?)
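That pass/fail idea can be sketched in a few lines of Python (the function name, thresholds, and sample durations below are invented for illustration, not taken from any real workflow system):

```python
import statistics

def class_b_check(durations, limit=4.0, quantile=0.95, max_sd=2.0):
    """Pass/fail test: did at least `quantile` of the sampled jobs finish
    in under `limit` seconds, with a standard deviation of at most `max_sd`?"""
    within = sum(d < limit for d in durations) / len(durations)
    sd = statistics.stdev(durations)
    return within >= quantile and sd <= max_sd

# Invented random sample of class-B job durations (seconds):
sample = [1.2, 2.8, 3.1, 0.9, 3.7, 2.2, 1.8, 3.9, 2.5, 3.3]
print(class_b_check(sample))  # → True
```

In practice the sample would be drawn from the "wide" per-work-unit records described above, and each hypothesis would get its own small check like this one.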

Although "R" has modest data-capacity relative to some other tools, the fact that it is a true programming language gives it powerful flexibility for such investigations.


All times are GMT -5.