The simple answer is "Yes, it can be done".
But that doesn't mean it is necessarily easy to do.
Part of the problem is that messages can come in FAST... and the data has to be read and stored at least as fast as it arrives. That is where the database becomes a problem - databases don't handle bursts of high-traffic inserts well.
I would suggest starting by processing records that rsyslog has already written to disk, then adding them to the database. This lets you use the disk file as a LARGE input cache that catches the data. If you use a method similar to "tail -f", you can read and process the data as fast as you are able (and even catch up to rsyslog's production rate) without losing data.
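A minimal sketch of the "tail -f" approach in Python (the file name, polling interval, and line handling are my assumptions, not anything rsyslog-specific):

```python
import itertools
import os
import tempfile
import time

def follow(path, poll_interval=0.5):
    """Yield lines as they are appended to `path`, like `tail -f`."""
    with open(path, "r") as f:
        while True:
            pos = f.tell()
            line = f.readline()
            if not line:
                # No new data yet; wait for rsyslog to write more.
                time.sleep(poll_interval)
            elif not line.endswith("\n"):
                # Partial write in progress: rewind and retry later.
                f.seek(pos)
                time.sleep(poll_interval)
            else:
                yield line.rstrip("\n")

# Demo with a temporary file standing in for the rsyslog output file.
tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".log")
tmp.write("line one\nline two\n")
tmp.close()
first_two = list(itertools.islice(follow(tmp.name), 2))
os.unlink(tmp.name)
```

In a real consumer, the `yield` side would batch lines and insert them into the database at its own pace - the file absorbs any burst.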
When this works, you can even add the ability to periodically direct rsyslog to start a new file; the database process can then finish the current file and start processing the new one. The old file can be archived (at first) and later deleted, freeing the disk space for new messages.
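The rotation step might be sketched like this - drain each completed file oldest-first, then rename it so a later cleanup pass can delete it (the directory layout and the ".done" suffix are assumptions for illustration):

```python
import os
import tempfile

def drain_rotated(log_dir, handle_line):
    """Process every completed (rotated) log file oldest-first, then mark it
    archived so its disk space can eventually be reclaimed."""
    processed = []
    for name in sorted(os.listdir(log_dir)):
        if not name.endswith(".log"):
            continue
        path = os.path.join(log_dir, name)
        with open(path) as f:
            for line in f:
                handle_line(line.rstrip("\n"))
        # Archive first; a separate cleanup pass can delete the .done files.
        os.rename(path, path + ".done")
        processed.append(name)
    return processed

# Demo: two rotated files, drained in order.
d = tempfile.mkdtemp()
for name, text in [("00.log", "old msg\n"), ("01.log", "newer msg\n")]:
    with open(os.path.join(d, name), "w") as f:
        f.write(text)
seen = []
done = drain_rotated(d, seen.append)
```

Sorting by name works here because rsyslog can be told to use sortable (e.g. timestamped) file names when it starts a new file.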
If you can ALWAYS keep up with the data (even to the point of implementing your own caching file if necessary), you can modify the rsyslog configuration to pass the data directly to your process instead of writing it to disk...
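One way to do that last step is rsyslog's omprog output module, which spawns your program and feeds it messages on stdin. A rough config fragment (the file name and binary path are placeholders):

```
# /etc/rsyslog.d/50-db-ingest.conf  (illustrative; binary path is a placeholder)
module(load="omprog")
action(type="omprog"
       binary="/usr/local/bin/db-ingest"
       template="RSYSLOG_TraditionalFileFormat")
```

Your program then reads one message per line from stdin - the same loop you already debugged against the disk file.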
Last edited by jpollard; 05-31-2016 at 06:46 AM.