I've seen and heard a number of people mention that you shouldn't be root when you do the ./configure and make steps when compiling source files. My question is, why suggest not being root during make?
I understand you need to be root for "make install" to place a new program in the system, but that's the only difference I see. If it's a security risk (overwriting existing files, a bad makefile that goes haywire, etc), you run the same risk when running "make install". I mean, a makefile can execute any command on the system regardless of the target being made. So if someone wanted to be malicious, they'd put the bad code in the install portion.
Am I missing something? It just seems like people are advocating system security, but then offering advice equivalent to "Hide the key to your front door in an unexpected place, but put the key to the backdoor under the welcome mat".
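The point that a makefile can run anything can be shown with a toy example. This is a hypothetical sketch (the file names and echoed messages are made up): the install target is an ordinary recipe, so whatever privileges you run "make install" with are the privileges its commands get.

```shell
# Write a tiny, hypothetical Makefile to a scratch file.
# (printf is used so the recipe lines get the literal tabs make requires.)
printf 'all:\n\t@echo building\n\ninstall:\n\t@echo any command runs here\n' > /tmp/demo.mk

# The build step runs its recipe...
BUILD_OUT=$(make -f /tmp/demo.mk all)
echo "$BUILD_OUT"

# ...and the install target is just another recipe: it runs arbitrary
# commands with whatever privileges you gave make install.
INSTALL_OUT=$(make -f /tmp/demo.mk install)
echo "$INSTALL_OUT"

rm /tmp/demo.mk
```

If someone malicious replaced "echo" with something destructive in the install target, being careful during plain "make" would not have helped.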
Yeah, I understand the idea that less time as root means less time to do something stupid.
I'm not casual when it comes to root access. I only use it when I think I need to. In this case, I become root to configure, make, and make install, and then exit root access; I have a clear focus on what I intend to accomplish. I've read all the documentation, etc. So to me, that seems reasonable. It just bugged me when I tried to install RCS on an LFS system. It griped that I shouldn't be root when I do a make, and then quit... not cool. Not cool at all.
Well, assuming you watch your make output, you could see any "oddities" and not follow up with a heart-pounding, gut-wrenching make install that puts your system six feet under.
If you don't watch the make output, or blindly run ./configure, make, and make install for everything as root, then you are at no greater risk, IMHO. But you are at a MUCH greater risk than the population that uses non-privileged access to:

./configure (checking the output)
make (checking the output, ensuring nothing 'odd' is going on, possibly even running a one-time check on the executable to make sure everything is flavored right)

and privileged access to:

make install

But that's only if you watch your stuff. Also, by running ./configure and make as a non-privileged user, you ensure that everything make and configure use is accessible to that user. If a user cannot read /bin (or /usr/bin), they will not be able to run make because GCC is out of reach. If a user does not have the ability to use a particular lib, then configure will fail. It's good to know these things before you slam 'em in and go.
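That accessibility check can be done up front, too. A hedged sketch of the kind of pre-flight checks a non-privileged user might run before ./configure and make (the directories checked are just the conventional ones; adjust for your system):

```shell
# Is the compiler reachable from this account's PATH?
if command -v gcc >/dev/null 2>&1; then
    GCC_STATUS="gcc found: $(command -v gcc)"
else
    GCC_STATUS="gcc missing or not on PATH"
fi
echo "$GCC_STATUS"

# Can this user actually read the standard binary directories?
for dir in /bin /usr/bin; do
    [ -r "$dir" ] && echo "$dir is readable" || echo "$dir is NOT readable"
done
```

If any of these fail as a normal user, configure and make would have failed anyway, and it's better to find out before touching anything as root.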
As a side note, yes, I do have a good bit of programming experience and I do monitor config and make output before issuing the install command.
I would find the library/support-executable a much more compelling reason than security. In fact, I will probably follow the non-root make for that reason alone. While it may not prove the new install will work, at least it's a minimal pre-requisite check.
Why isn't the security bit reason enough on its own? Since you follow the make output, you could surely see whether or not odd activity was being built, and therefore you would not install it. Worse, if the make were somehow construed to include a 'make install' and this 'odd' activity was present, then as root you'd be installing it without any chance of the failsafes kicking in.
To me, security is a moot point between the "make" and "make install". Like I said earlier, "make install" can do anything and everything "make" does. If someone wanted to be naughty, they would simply put the bad code within the install target of the makefile. That is to say, the install target could recompile (quietly) any files compiled previously, or add in new ones.
The only gain in security is protecting your system from your own typing mistakes. And like I said, I've read the installation documentation beforehand. I know what I need to type to configure, compile, and install the software. So I'm not in danger of typing "rm -rf / *"... there's no command to install software that even remotely resembles that. In fact, I can't think of any typing mistakes that would transform "make" into a command causing catastrophic results.
Last edited by Dark_Helmet; 07-20-2003 at 01:35 AM.
Distribution: Slackware, (Non-Linux: Solaris 7,8,9; OSX; BeOS)
IMO, if you are actually that concerned about security (read malware or trojan horses), MasterC, then you need to be reading the Makefile and the source files before you even start to build the program.
If you are concerned about your mistyping, then you try not to log in as root. That's why you do a 'sudo make install' instead of becoming root. It's one command, you think about it before you type it, and typically you have to type in your password. As an added bonus, you don't ever really become root.
Occasionally a make uninstall might. If you've installed something to satisfy a dependency, and later remove it with a make uninstall, the dependency it satisfied will also be removed, causing *sometimes* unexplainable problems.
Oh, and the * after your rm -rf / is not necessary; -r removes the directories and all the files within them, and since it's the root directory, that'll be everything.
And as a completely OT side note:
Your posts are 404
I personally use the non-priv to configure and make, and then su to root to make install. Most of my arguments here are simply for retort, discussion, and "playing the devil's advocate", or disagreeing simply to prove a point. I use non-priv not for security, but rather out of habit from the days when I knew nothing and did what I was told.
Yeah, the asterisk is unnecessary... I'd seen that used as an example to a new user who wanted to stay logged in as root all the time. Or maybe it was "rm -rf . /"... don't remember because I wasn't that new user...
I've been noticing that more and more software for download has md5 sums or pgp/gpg signatures to download as well. That tells me the developers are becoming more concerned about preventing malicious code from getting into their downloads. That made me start thinking about security in general too, and how "make" itself may have to be rewritten to conform to a greater level of security. In the world of security, you're either trusted or not; there's no middle ground, and with make straddling the non-privileged and privileged domains, that presents a security concern.
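For what it's worth, the checksum half of that is already easy to do by hand. A self-contained sketch using a throwaway file (with a real download you'd fetch the developer's published .md5 file instead of generating your own, and gpg --verify works similarly for signatures):

```shell
# Stand-in for a downloaded source tarball.
echo "pretend this is a source tarball" > /tmp/demo.tar.gz

# Stand-in for the checksum file the developer would publish alongside it.
md5sum /tmp/demo.tar.gz > /tmp/demo.tar.gz.md5

# -c re-hashes the file and compares it to the published sum;
# it prints "<file>: OK" on a match and fails otherwise.
CHECK_RESULT=$(md5sum -c /tmp/demo.tar.gz.md5)
echo "$CHECK_RESULT"

rm /tmp/demo.tar.gz /tmp/demo.tar.gz.md5
```

Of course, a matching checksum only proves the download is what the developer published, not that what the developer published is trustworthy.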
I don't know how to bridge the gap. Perhaps it will boil down to "make" creating a list of commands for the administrator to issue rather than automating it; forcing the admin to manually install the software. It gets to that same old problem: ease of use or security... an increase in one almost always causes a decrease in the other.
I'm now 405... it seems like only yesterday I was a newbie... Man, the posts just seem to fly by now...
Last edited by Dark_Helmet; 07-20-2003 at 02:00 AM.
Admins would quickly make a makemake program that runs the commands that make creates--I know I would. It's laziness pure and simple, but I'd rather check md5 sums than do make's job.
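Actually, make already has most of this built in: the -n (dry-run) flag prints the commands a target would run without executing any of them, which is essentially the "list of commands for the administrator" idea. A small self-contained example (the Makefile and program name are made up):

```shell
# A hypothetical one-line install recipe, written to a scratch Makefile.
printf 'install:\n\tcp demo-prog /usr/local/bin\n' > /tmp/dryrun.mk

# -n prints the recipe without running it: no copy happens, no root needed,
# and demo-prog doesn't even have to exist.
DRY_RUN=$(make -n -f /tmp/dryrun.mk install)
echo "$DRY_RUN"

rm /tmp/dryrun.mk
```

And the lazy "makemake" wrapper would be little more than piping that output back into a shell, which shows how thin the security gain really is.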
I think it would be relatively easy, in design terms, not necessarily in programming terms, to make make more security conscious, but you'd also want to make gcc more security conscious, since make just sets up the compiler commands rather than compiles the code.
I think making a compiler more security aware would basically kill it, since everything you would check for can also be legit.
Basically, with open source software, you have the opportunity (some would say responsibility) to check the software for security issues before you compile it. If you don't check it, then you have already invested your trust that the code is good, and checking for bad output from make isn't going to do you much good (some good, but not much).
My comments weren't meant to you alone, I was using the generic "you". Personally, I have a sandbox under my home directory in which I usually build code, then su - or sudo to make install.
I don't bother to check all the code to be sure it's secure, I trust that most OSS coders are good people. Since there ARE people who check the code and would catch the issues rather quickly, I feel I'm "safe enough" without having to audit every bit of code that I get.
Checking the output from make isn't going to really make me any safer, and the reason I don't log in as root is simply because I HAVE totally hosed systems because I forgot where/who I was.
Well, I'm not so sure about that (requiring gcc to be more security conscious). I'm not sure it's even possible. There's no way for you to tell the compiler "that code is bad" or "that code is good." Maybe some constraints could be placed on the options passed, but then that responsibility would fall to make. So make could have a configuration file specifying what commands it is allowed to use when compiling or installing, and a list of directories it is allowed to write to. It could also be written to allow operations only on files within the current source tree, and once those files leave the source tree, they're considered untouchable. I think that would go a great deal further to making it a trusted application.
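make has no such configuration file today, but one widely used convention gets part of the way there: many Makefiles honor DESTDIR, so you can stage the install into a scratch tree as a normal user and inspect exactly which files the real install would write before doing it as root. A self-contained sketch with a toy Makefile (a real package's DESTDIR-aware install target works the same way, assuming the package supports the convention):

```shell
# Toy install target that honors DESTDIR ($(DESTDIR) is expanded by make).
printf 'install:\n\tmkdir -p $(DESTDIR)/usr/local/bin\n\ttouch $(DESTDIR)/usr/local/bin/demo-prog\n' > /tmp/destdir.mk

# Stage the install under /tmp/stage -- no root required.
make -f /tmp/destdir.mk DESTDIR=/tmp/stage install

# Every file the real install would create, laid out under the staging root.
STAGED_FILES=$(find /tmp/stage -type f)
echo "$STAGED_FILES"

rm -rf /tmp/stage /tmp/destdir.mk
```

It doesn't constrain what the recipe *can* do, but it does let you audit the file list before anything privileged happens.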
Maybe if I get really bored one day, I'll consider looking at make's source to see if those changes would be feasible. If not, then maybe I'll at least rewrite it so whitespace is ignored in makefiles... I absolutely loathe that tabs are required... One battle at a time I guess...
My point was that to really be secure requires human intelligence and diligence. Making make a more trusted application still isn't going to get people to go through the code they download and make sure there isn't a trojan horse hiding in their kernel.
How many people actually check the md5 sum on their kernel when they download a new version? How many RPMs come with checksums? How many people use them? Now, the checksum example of human laziness is a bad one, because you can easily rewrite make/rpm/autoconf/pkgtool/whatever to check the md5sum before doing anything else, and refuse to go any further if that fails.
My point still stands: making tools more "secure" isn't going to make systems more secure when the admin of that system doesn't know or care about how to make their system secure.
Another issue I worry about with making make more trusted is loss of functionality. As soon as you restrict it to operations on a certain source tree, do you also limit its ability to link against an existing library elsewhere on the filesystem? If you don't do that, then what would stop me from writing code that "requires" a library that I also wrote and contains the malware?
Yeah, at some point human thought and decision has to determine whether something is trusted or not. So yes, when you download some code, even if it comes with an md5sum, you're saying "I trust this code and this developer." The md5sum is there only as a confirmation that what you received did indeed come from the developer. So, the library you mentioned was already trusted because it was installed... it just so happens the trust was misplaced in that case.
Security is one of those messy topics. Some believe that by forcing security on people (à la forced md5sum checks), you increase the aggregate security of the computing community overall. However, that flies in the face of the do-it-yourself-the-way-you-want philosophy of the Linux/open source community. It's born of the same rationale that legislators use when passing "it's good for you" laws (think "sin tax" or prohibition). It's not my intent to start a discussion on those topics, but simply to point out that those mindsets are out there, and are already in motion regarding computer security at all levels.
For instance, the beginning of this thread was about the RCS installation quitting because I was trying to run make as root. Someone else's idea of security was being forced on me. So maybe next week someone will release a more secure make in an attempt to force people to be more secure.
Reading over my own post, I don't think I made the connection as clear as I would have liked. If what you say about laziness is correct (and I would tend to agree), you have two choices: accept it and get on with life, or look for ways to minimize its effect. I'm just trying to say that the computing world may get nudged toward greater security whether it wants it or not, because some developers have chosen to try to minimize the lazy-effect.
Last edited by Dark_Helmet; 07-20-2003 at 03:11 AM.