Ah, OK. I'd say implement preventive measures first. Keeping everything up to date and only exposing what's absolutely necessary, and this goes for the hard outside as well as the chewy inside, minimizes your attack surface (see your distribution's security documentation, GNU/Tiger, OVAL tools, msec, debsecan, etc, etc).

Having baseline data, meaning package management or integrity verification configuration, binaries and databases regularly verified and backed up to a known-safe remote location, ensures you have a sound basis to, ahh, base audits on (AIDE, Samhain, md5deep, ausearch / aureport).

Having proper access controls (audit rules, password strength and aging, PAM, firewall, SSH pubkey auth, sudo, Rootsh, fail2ban, mod_security, reverse proxying, Snort or another IDS, whatever other service-specific software, etc, etc) will, together with any active auditing in place, generate enough log entries to keep you informed in advance (Logwatch, SEC, OSSEC, petit, etc, etc).

Finally, actually testing the outside surface (nmap, OpenVAS, etc, etc) lets you verify that your measures make sense and actually work.
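On the access-control side, SSH key-only auth is usually the highest-value single change. A sketch of the relevant /etc/ssh/sshd_config lines (usernames are placeholders; test from a second session before closing the one you changed it in):

```
# /etc/ssh/sshd_config (fragment)
PermitRootLogin no          # no direct root logins
PubkeyAuthentication yes
PasswordAuthentication no   # keys only; no brute-forceable passwords
AllowUsers admin deploy     # placeholder usernames; restrict who may log in
MaxAuthTries 3
```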
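To make the "only expose what's necessary" point concrete, here's a minimal sketch that flags listening ports not on an allowlist. The allowlist and port list are made-up examples; in real use you'd feed it the output of something like `ss -tln` or `netstat -tln`:

```shell
# Sketch: flag listening ports that aren't on an allowlist.
# Ports below are assumptions for illustration only.
allowed_ports="22
80
443"

check_exposure() {
    # $1: newline-separated list of listening ports;
    # prints any port that is not in $allowed_ports
    printf '%s\n' "$allowed_ports" | sort > /tmp/allowed.$$
    printf '%s\n' "$1" | sort > /tmp/actual.$$
    comm -13 /tmp/allowed.$$ /tmp/actual.$$
    rm -f /tmp/allowed.$$ /tmp/actual.$$
}

# Example: 8080 is listening but not on the allowlist
check_exposure "22
443
8080"    # prints: 8080
```

Run it from cron and mail yourself the output; an empty report is the good case.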
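For the baseline idea, a crude stand-in for AIDE/Samhain/md5deep using plain sha256sum shows the workflow. Paths under /tmp are illustrative only; a real baseline belongs on a known-safe remote host, ideally read-only media:

```shell
# Sketch: minimal integrity baseline with sha256sum.
# /tmp paths are for illustration; store real baselines off-host.
mkdir -p /tmp/demo
printf 'hello\n' > /tmp/demo/motd
baseline=/tmp/baseline.sha256

# 1. Record the baseline
sha256sum /tmp/demo/motd > "$baseline"

# 2. Verify: exit status 0 means nothing changed
sha256sum --check --quiet "$baseline" && echo "baseline OK"

# 3. Simulate tampering and re-verify: non-zero exit, file is reported
printf 'owned\n' >> /tmp/demo/motd
sha256sum --check --quiet "$baseline" || echo "MODIFIED: investigate"
```

The dedicated tools add what this lacks: metadata checks (perms, inodes, mtimes), policy-driven file selection and tamper-resistant databases.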
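And for the "enough log entries to be informed" part, a deliberately crude stand-in for Logwatch/SEC/OSSEC: summarize failed logins per source address. The log lines below are synthetic; normally you'd read /var/log/auth.log or the journal:

```shell
# Sketch: failed-login summary per source IP (synthetic sample data).
log='Jan 10 sshd[101]: Failed password for root from 203.0.113.5
Jan 10 sshd[102]: Failed password for admin from 203.0.113.5
Jan 10 sshd[103]: Accepted publickey for deploy from 198.51.100.7'

# Count failed password attempts per source address (last field)
printf '%s\n' "$log" | awk '/Failed password/ { count[$NF]++ }
    END { for (ip in count) print ip, count[ip] }'
# prints: 203.0.113.5 2
```

The real tools add thresholds, correlation over time and alerting, but the principle is the same: reduce the log stream to something a human will actually read.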
I'm sorry if you've heard or read all of that before. But tools are just tools and reporting is just reporting. Sometimes things are obvious, and sometimes it's just experience correlating data that gives you a hunch or shows a lead. If you'd like to see the other side of it all, I invite you to search the Linux Security forum for compromise threads and determine for yourself what you would have to do to solve a case like, say, this, this, this, this or this.