Quote:
Originally Posted by naruponk
Any advice ?
First of all, and in addition to the advice offered already, you should install the latest release of the distribution, which in your case is CentOS 5.5, and keep that installation up to date. Secondly, you should understand that
securing your server and auditing security is a continuous process. So running tools just once, expecting tools to do all the work for you, or relying on any one particular tool is simply the wrong approach. More important still, you should understand that
running tools is not a substitute for conceptual and practical knowledge. Furthermore, the security posture of your machine, setting attributes and auditing events
are all in essence binary: on or off, yes or no. There is no "if", "maybe" or "thinking": something is secure or it is not, something is vulnerable or it is not, integrity is maintained or it is not. So replies to a security-related question from people who just tell you "not to worry" (without having been shown evidence or having given you things to check) may be dismissed without further thought. Finally, and related to the previous point, you should understand that
what you don't know about you cannot respond to. Most attacks are preceded by reconnaissance and, when logging is configured properly, access details and errors will show up in your logs. People often forget that checking logs regularly may help you adjust access controls (automagically even) and avoid or contain a breach of security.
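To make the log-checking point concrete, here is a minimal sketch of the kind of thing a log summarizer does: counting failed SSH logins per source IP. The log excerpt is sample data for illustration; on CentOS the real log is /var/log/secure, and in practice you would feed the result to a denylist or firewall rule rather than just print it.

```shell
# Sample data standing in for /var/log/secure (illustrative only).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jan 10 03:12:01 host sshd[123]: Failed password for root from 203.0.113.5 port 4321 ssh2
Jan 10 03:12:04 host sshd[124]: Failed password for invalid user admin from 203.0.113.5 port 4322 ssh2
Jan 10 03:15:40 host sshd[130]: Accepted publickey for alice from 198.51.100.7 port 5555 ssh2
EOF
# Count failed SSH login attempts per source IP, busiest first.
awk '/sshd.*Failed password/ {for (i=1;i<=NF;i++) if ($i=="from") print $(i+1)}' "$LOG" \
  | sort | uniq -c | sort -rn
rm -f "$LOG"
```

Tools like Logwatch and fail2ban do essentially this, continuously and far more thoroughly.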
So. With the listing of installed software at hand, your first port of call should be your distribution's documentation with respect to distribution, kernel, subsystem, service and other software settings related to user management, component configuration and access controls. To see what I mean, ask yourself for instance:
- if the MAC layer RHEL / CentOS provides (SELinux) really needs to be disabled?
- which software is not required for (headless?) server operation?
- if you really require this application instead of a more secure, better performing alternative with fewer features?
- if you really require all these applications to run on one machine instead of spreading them out over machines for security and performance reasons?
- which system accounts need to be enabled and do they require a shell?
- which human users need accounts on the system (as opposed to virtual user accounts)?
- who has access to services and how would you check that?
- if you should bring a developed product from the development area to the production machine without sanity checks, performance tests and guarantees?
- how you would be informed of changes in the system's state, service and performance behaviour, and unauthorized traffic and access?
- what you need to do to verify the integrity of the file system and trust the results to be unambiguous?
- what you would do if you suspect a (D)DoS?
- what you would do if you suspect a breach of security?
- how you ensure that what you back up is what you need to restore state?
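Two of the questions above (which accounts need a shell, who has access to services) can be answered in seconds from the command line. A quick sketch, using the usual CentOS tools and paths; adjust to your system:

```shell
# Accounts whose shell is not nologin/false -- each one is a login
# path you should be able to justify:
awk -F: '$7 !~ /(nologin|false)$/ {print $1, $7}' /etc/passwd

# Listening TCP/UDP sockets and the processes behind them (run as
# root; newer systems ship ss instead of netstat):
netstat -tulpn 2>/dev/null || ss -tulpn
```

Anything in either listing you cannot explain is something to lock down or remove.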
Practically speaking, the initial security posture of your machine starts before installing the OS, by determining what the purpose of the machine will be (what services it should support), where in the network it will reside (behind a router or not, in a DMZ or not), who may access services (say, access restrictions on /admin paths) and in what way (behind an IDS, behind an application firewall, behind a reverse proxy, et cetera). (And if you are, or will be, providing services in a professional way then you should also give thought to procedures like using staging machines and to contingency planning.)

Security is about you being in, and retaining, control throughout the service life of the machine: from the base installation phase on, access to the machine should be restricted, no services should be exposed until secured, and logging should be enabled to ensure access controls function properly. When the initial installation is finished, create a baseline file system backup, create a file system integrity checker baseline database, store backups off-site, choose a configuration method (anything that lets you track system changes and restore versions of configuration files) and then harden the initial setup.

One tool to use locally during installation phase checking is GNU/Tiger. Even with the bare minimum of configuration it can provide you with quick wins in terms of things to check. Another indispensable tool is Logwatch, which summarizes system and service logs so you can investigate issues and adjust controls. (Check out the
LQ FAQ: Security references post #1?) With the base installation configured properly and hardened, and your backup, update and auditing processes in place, services can be added in a way you can control and audit; only now does providing them in a reliable and more secure way make sense.
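For the integrity checker baseline mentioned above you would normally deploy Aide, Samhain or tripwire, but the principle fits in a few lines. A minimal sketch using sha256sum (the directory list and baseline path are illustrative only):

```shell
# Record checksums of system binaries and configuration right after
# the initial install; keep a copy of the baseline off the machine.
BASELINE=/root/baseline.sha256
find /etc /bin /sbin /usr/bin /usr/sbin -type f -print0 2>/dev/null \
  | xargs -0 sha256sum > "$BASELINE"

# Later: any changed or missing file is reported; silence means the
# recorded files still match.
sha256sum -c "$BASELINE" --quiet
```

The point is the baseline must be created while you can still trust the system; a baseline taken after a compromise only certifies the attacker's changes.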
Without being able to vouch for the integrity of the system all of the time, checking the web stack makes no sense. And, just like the base installation, web stack components come with extensive documentation. Plus there are tons and tons of docs and checklists (CERT, SANS, CIS, PCI-DSS) out there to check your components against
well before running any tools. Please see the (partial) overview at
LQ FAQ: Security references post #6: "Securing networked services", which covers web server, database, PHP and tools. Note that tools may have different scopes and purposes, ranging from:
- single-purpose network port scanners like nmap or hping2, to
- locally run vulnerability checkers like GNU/Tiger or LSAT, to
- web application vulnerability scanners like w3af, WebScarab, Wapiti or Websecurify, to
- remotely run vulnerability checkers like Nessus, OpenVAS, SARA or SAINT, to
- complete auditing and penetration testing frameworks like the Metasploit Project.
But like I said before, you should know what you need to test for:
running tools is not a substitute for conceptual and practical knowledge.
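As a small taste of that point: even before reaching for nmap you can probe a single TCP port with nothing but Bash's /dev/tcp redirection. The host and port below are examples only, and this tells you far less than a real scanner would (nmap with service detection would be something like `nmap -sV 192.0.2.10`), which is exactly why knowing what you are testing for matters more than the tool:

```shell
# Probe one TCP port using Bash's built-in /dev/tcp device
# (illustrative only; use nmap for anything beyond a quick check).
host=192.0.2.10; port=80
if timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
  echo "$host:$port open"
else
  echo "$host:$port closed or filtered"
fi
```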
HTH