I was wondering.
]# pkill -9 -f "cat.*curiosity" ;-p
People who host websites surely do not audit all the code that they host.
Since even the maintainers of PHP or of PHP-based apps don't (in full), we can't expect hosters to either. Hosters usually don't have the right toolkit, programming knowledge, sense of security or time (==money). Hell, some don't even read READMEs.
But what if they host some poorly written websites that leave backdoors open? How do they ensure that the whole server isn't compromised because of one poorly written website?
0. a packet-scrubbing network device capable of detecting anomalies and scans (Snort, Prelude, dedicated hardware),
1. a reverse proxy scrubbing requests (Apache),
2. a sensor (or mod_security) or an iptables ruleset tripping over repeated bad requests (see the sketch after this list),
3. real-time integrity checking (Samhain),
4. real-time log alerting,
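
To make item 2 concrete, here is a minimal iptables sketch, assuming a 2.6 kernel with the 'recent' match available; the 60-second window and 20-hit threshold are made-up numbers you'd tune to your own traffic:

    # remember every source that opens a new connection to the web server
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
      -m recent --set --name HTTP
    # drop sources that opened more than 20 new connections in the last 60 seconds
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
      -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP

This only trips over connection floods, not over the content of the requests; for bad request patterns you still want mod_security in front of the app.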
If a cracker gains access through one website, what's stopping them from compromising the whole computer?
5. virtualisation (Xen, Qemu, UML and the Other One),
6. running services as lesser-privileged users,
7. chrooting running services (sketch after this list),
8. strict role separation (SELinux, grsecurity's RBAC, RSBAC),
9. keeping a small footprint and keeping up to date with security fixes,
10. properly configuring just about everything,
11. proper access restrictions,
12. running a minimal Apache, Hardened-PHP and a secured MySQL (sketch after this list),
??. I'm sure I forgot something, you name it:
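
A quick sketch for items 6 and 7 with Apache itself as the example: the User/Group directives drop the worker processes to an unprivileged account, and mod_security 1.x can chroot the whole server after start-up. The account name and jail path are placeholders, and populating the jail with everything Apache/PHP needs is the real work:

    # httpd.conf
    # run the worker processes as a dedicated, unprivileged account (item 6)
    User  apache
    Group apache
    # have mod_security chroot the server once it has bound its ports (item 7)
    SecChrootDir /chroot/apache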
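
And for the Hardened-PHP side of item 12, a rough sketch of keeping each site's PHP inside its own tree so a backdoor dropped into one vhost can't wander into the next one; the hostname and paths are examples, and disable_functions has to live in php.ini because PHP won't take it per-vhost:

    # httpd.conf -- one jail per site
    <VirtualHost *:80>
        ServerName www.example.com
        DocumentRoot /var/www/example.com/htdocs
        # PHP may only open files under this site's own tree plus /tmp
        php_admin_value open_basedir "/var/www/example.com/htdocs:/tmp"
    </VirtualHost>

    # php.ini -- server-wide
    ; take away the shell-escape functions a backdoor would reach for
    disable_functions = exec,passthru,shell_exec,system,popen,proc_open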