Linux - Security: This forum is for all security-related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.
i have a question, more of a software-management question.
i have been tasked with writing a hardening document for Apache. my doc includes how to install Apache; the install will also include mod_security2 for WAF capability.
starting from a minimally hardened install of the OS, i chose to install items from source instead of using the yum repo from RH. i found that the repos seem to carry software that is one or more revisions behind the latest stable releases from the actual software vendors (apache, pcre, apr, apr-util, etc).
so, that said, updating/patching any of these compiled-from-source items now becomes a different task; you can't simply run yum update [package], etc.
there are approx 50 Apache servers.
so, would you rely on the older versions of software from the yum repo, or compile the latest stable releases? how hard is it to patch/update packages that come into the system via the compile method?
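For reference, the compile-from-source route being described usually looks something like the following. This is only a sketch: the version numbers, install prefix, and tarball names are illustrative assumptions, not details from the thread.

```shell
# Illustrative sketch only -- versions and paths are assumptions.
# Unpack httpd, then place the APR and APR-util sources under srclib/
# so that --with-included-apr can find and build them together.
tar xzf httpd-2.4.NN.tar.gz
cd httpd-2.4.NN
# --enable-so is required so mod_security2 can later load as a DSO module
./configure --prefix=/opt/apache --enable-so --with-included-apr
make && make install

# mod_security2 builds against the installed httpd via its apxs tool
cd ../modsecurity-2.NN
./configure --with-apxs=/opt/apache/bin/apxs
make && make install
```

Every step above is something yum would otherwise do for you, which is exactly the maintenance trade-off the question is about.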
It is an important point: you need the security fixes, and you might, in general, want the other new features. Of course, there are probably people in other departments who feel that they need that other stuff. In an ideal world, the mess of paperwork that is a quality system would stop those people from committing to the use of software that creates quality headaches for everyone else. But it probably doesn't.
Quote:
Originally Posted by Linux_Kidd
so, that said, updating/patching any of these compiled-from-source items now becomes a different task; you can't simply run yum update [package], etc.
Wellll, not quite. It is possible to build an RPM package yourself, and to define your own repo to keep it in. Whether it is worth attacking the problem this way is quite a different question, but with around 50 machines it has to be a possibility worth considering.
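Building your own RPM and serving it from a local repo might look roughly like this. Treat it as a hedged sketch: the spec file, hostnames, and paths are hypothetical, and a real rollout would also want GPG signing rather than `gpgcheck=0`.

```shell
# Hypothetical example: spec file, repo path, and hostname are illustrative.
# Build binary and source RPMs from your own spec file.
rpmbuild -ba ~/rpmbuild/SPECS/httpd.spec

# Publish the result as a yum repository.
mkdir -p /var/www/repo/x86_64
cp ~/rpmbuild/RPMS/x86_64/httpd-*.rpm /var/www/repo/x86_64/
createrepo /var/www/repo/x86_64    # generates the repodata/ metadata yum reads

# Each of the ~50 servers then gets a repo file, e.g. /etc/yum.repos.d/local.repo:
#   [local-httpd]
#   name=Locally built httpd
#   baseurl=http://repohost.example.com/repo/x86_64
#   gpgcheck=0
```

Once the servers point at that repo, `yum update httpd` works again even though the bits were built in-house, which recovers most of the management convenience of the vendor repo.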
It strikes me that if this is an organised Quality System, the questions that are likely to cause you grief are:
If you build yourself, what procedure is there that ensures that you always know quickly when there is something new to build (you are going around the normal notification system, so that's a point that your system has to address)?
If you build yourself, people are likely to poke into whether your build has introduced any bugs, and will get worried about things like suppressed error messages, whether the right options were built in, and whether the documentation is good enough for anyone other than you to follow it perfectly, etc, etc. It should be possible to cover this with documentation (even though that's more documentation than you wanted to create).
If you just take RH's build of packages, do you know how rapidly security fixes reach you (there is a delay through RH, a delay while you do a test install, and a delay while you roll out), and do you know that everything that needs fixing gets fixed?
I always feel that you are entitled to go for the system that presents you with the lowest integrated level of pain. But when you are assessing the pain that any particular set of options presents, you need to be careful not to overlook any particular and acute source of it (whether the pain comes from quality-system audits or from heading off exploits, it is still pain), and to do that you need to anticipate where the unexpected pains will come from. That's the clever part: identifying all the sources of pain and their relative magnitudes.
My guess is that going with the Red Hat builds is probably the lowest-pain route (different orgs will come up with different answers on that one, because some will give the web devs the freedom to demand the whizz-bang features); after all, you are paying RH to take care of this. But if you are going to choose to look after it yourself, make sure that there isn't some hidden feature of the system you adopt that will cause you pain forever.
"Hardening Apache" refers to more than protecting the software itself or making sure that the software is up to date. It has much more to do with making sure that Apache always does exactly the right thing ... and that content which Apache serves cannot be tampered with. To that end, you can't assume that someone who is "tampering with Apache" is coming in through Apache, because the intruder probably isn't.
(An amazing number of systems whose owners should know better have unprotected FTP, or have files owned by an "ftpusers" group. Or they store sensitive information on public shared servers because they're cheap. Or they have anything whatsoever to do with "convenient" management tools like Plesk, giving those tools nearly super-human powers, again "for convenience.")
You need to start by identifying what are the attack-vectors that you must guard against; then, for each, how you intend to guard against them. You have to "know your enemy."
There are really only a few ways that a system can be compromised, and most of those rely on lazy owners. If you, so to speak, simply lock your front doors and the windows next to them, that "pizza delivery man" will carry his "pizza box" to the next house.
Last edited by sundialsvcs; 05-09-2012 at 07:41 AM.
"Hardening Apache" refers to more than protecting the software itself or making sure that the software is up to date. It has much more to do with making sure that Apache always does exactly the right thing ... and that content which Apache serves cannot be tampered with. To that end, you can't assume that someone who is "tampering with Apache" is coming in through Apache, because the intruder probably isn't.
(An amazing number of systems whose owners should know better have unprotected FTP or have files owned by an "ftpusers" group. Or store sensitive information on public shared-servers because they're cheap. Or, they have anything whatsoever to do with "convenient" management tools like Plesk, giving those tools nearly super-human powers again "for convenience.")
You need to start by identifying what are the attack-vectors that you must guard against; then, for each, how you intend to guard against them. You have to "know your enemy."
There are really only a few ways that a system can be compromised, and most of those rely on lazy owners. If you, so to speak, simply lock your front doors and the windows next to them, that "pizza delivery man" will carry his "pizza box" to the next house.
this Apache sits atop a hardened OS which has been supplied by me.
this question really revolves around managing a compiled Apache vs using the yum repo, weighing the pros and cons of each.
i believe the compiled version gives the best control of the app itself, but doing a "yum --security update httpd" is certainly much better than having to ./configure some src for a new .so file, etc.
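For the record, the security-only update path mentioned above depends on the yum-security plugin being installed, and the exact flag placement matters. A hedged sketch of the usual invocations on RHEL 5/6 era systems:

```shell
# Assumes the yum-security plugin (package yum-plugin-security) is installed.
yum --security check-update       # list pending security updates only
yum --security update httpd       # apply only security-relevant updates for httpd
yum update-minimal --security     # patch to the smallest version that fixes the advisory
```

The `update-minimal` form is the closest analogue to a targeted source patch: it moves the package only as far as the security fix requires, rather than to the latest available revision.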