What does it mean to install from source and not binary?
What I understand is that binaries are what most distributions, including Ubuntu, use and install through their package managers.
Source is just for developers, people who want to roll their own.
Is all this correct?
That is what I understand too.
From source, you download a file such as a .tar archive, uncompress it, and have your compiler do the work.
Binary is when you have package manager software like Synaptic or YaST (or many others, depending on your distro or OS; BSD uses something like pkg_add in the terminal) install a precompiled program.
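The source route described above, sketched for a hypothetical package foo-1.0 (the classic autotools three-step; actual steps vary per project, so always check the README or INSTALL file first):

```shell
# Unpack the source archive (foo-1.0 is a made-up example package)
tar xzf foo-1.0.tar.gz        # for a .tar.bz2, use: tar xjf foo-1.0.tar.bz2
cd foo-1.0

# The traditional three-step: configure, compile, install
./configure                   # inspect the system, generate Makefiles
make                          # compile the sources into binaries
sudo make install             # copy the results into place (usually /usr/local)
```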
Good luck to you!
If you program and have the source, you can make code or default-configuration changes to tune software packages to your system's use, rather than depending upon the defaults from the maintainer.
Some people compile programs simply to get the version and configuration they need to support operations when the maintainer does not provide them. No actual programming or development is involved, but they do compile the packages for their systems.
Your package manager can generally download the source (rather than, or in addition to, the binary package). In this case you get the same version as if you had installed the binary, but ready for you to configure and compile yourself. If that is what you need, a separately downloaded tar.gz or tar.bz2 file is not required.
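On Debian-based systems, for example (assuming deb-src lines are enabled in your sources.list), fetching the repo's own source looks roughly like this, with "hello" standing in for any package name:

```shell
apt-get source hello          # fetch and unpack the repo's exact source version
sudo apt-get build-dep hello  # install everything needed to compile it
cd hello-*/                   # then configure and build as you see fit
```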
Other than such exceptions, your description is quite good.
I would not say that installing from source (sometimes referred to as compiling) is just for developers, not by any means. People who run Gentoo install everything by compiling it.
Sometimes installing from source is the best option. Advanced users may need to pass special arguments to the build, or binaries (or packages in your distro's format) may not be available.
When I started with Slackware (now almost seven years ago--wow!), I installed a lot of stuff by compiling it; it's really not a complicated process. Now I mostly use SlackBuilds.org for Slackware.
Is it really 'installing from source' or is it 'building from source'?
When you install a binary package, you are really just installing a 'pre-built' package containing the binary files, fetched from your distro's repos using that distro's package manager. As far as I know, all package managers also allow you to download the source files, i.e. the source code, from their respective repos. These source files can then be configured to suit your needs, and the binary files are then built and installed, usually with very simple commands.

Note, however, that some of these configuration files require intimate knowledge of the development tools to generate, and are usually prepared by the developers for you. They do, however, allow you to change some very basic build parameters, e.g. you may specify a different install path from the default, or enable/disable certain options when building the binary files, such as support for a particular device.
The downside to this is that it makes uninstalls dreadfully difficult, because the package manager does not keep track of these manually installed files. Some distros do allow you to combine the build with the creation of a package, which is then installed by the package manager. This lets the package manager keep track of these custom-built packages and allows for a clean uninstall/upgrade.
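On Debian/Ubuntu, one tool for exactly this is checkinstall, which wraps the install step in a generated .deb so the package manager can track (and cleanly remove) the files. A rough sketch, with foo as a placeholder package name:

```shell
./configure && make               # build as usual
sudo checkinstall make install    # creates and installs a .deb instead of loose files
sudo dpkg -r foo                  # ...so removal later is a single command
```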
Note, however, that simply downloading the source files may not be enough to do a build, since many of these source packages have several dependency requirements. All of these dependencies must be satisfied for a successful build. You also need to install all the development libraries/apps before building. As far as I know, all package managers except Slackware's will check for and satisfy these dependencies.
If you want to build a version higher than the one that your repo provides, then it's up to you to ensure that these dependencies are satisfied prior to the build.
As an example: when I switched from Ubuntu to Debian Squeeze a few months ago, I wanted to have the latest version of Banshee. But the Debian Squeeze repos did not have the source files for the latest release. This meant I had to download the source files from the Banshee site. Before building Banshee, I needed to make sure I had all of the development tools and dependencies installed first. From the requirements, these were:
But once all of the dependencies were in place, the build was successful.
Sometimes, building from source is quite simple, sometimes it can take some doing.
To add my voice to the chorus, but from a slightly different perspective...
(I am going to get there, just give me a little slack to start.)
I'm sure everyone knows, but as a starting point, all software starts as source code. Second, virtually all software must be compiled (i.e. translated from human readable source code to machine language) in order for a computer to run the program. A "binary" is a short-hand way of referring to a program that has been compiled.
So, when a user installs software from a binary package, the user is installing a pre-compiled program for their specific processor (e.g. Intel 32/64-bit, AMD 32/64-bit, ARM, etc.). The point is, someone, somewhere, at some time compiled the software for your specific platform. The take-away is that that other person made the decisions for you about what should and should not be a part of the software.
Compiling and installing from source is for those people that want/need other features or do not want some features that are included in the pre-compiled binary package. Even if the one-size-fits-all binary packages satisfy 99% of the users, the law of averages says that at some point, every user will fall into that 1%.
Compiling from source is not just for developers--it's for the fine-tuned control over what the software on your machine provides for you.
Package maintainers make the same decisions. Ubuntu will install packages in certain locations to fit their idea of where those programs should be (to fit apparmor schemes, SELinux schemes, or whatever). The Slackware maintainers place them in completely different locations for their own reasons. When a user installs a binary package, they must accept the package maintainer's placement--or risk a LOT of complications by trying to move things around after the install.
A typical example: you want to "test drive" a new software version before completely upgrading. Installing from source would allow you to have both versions side-by-side in /opt. You cannot do the same thing using binary packages unless the package maintainer has published a package specifically for that purpose.
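With autotools-style sources, that side-by-side layout is just a matter of the --prefix flag (foo and the version numbers below are placeholders):

```shell
# Old version, in its own tree
tar xzf foo-1.0.tar.gz && cd foo-1.0
./configure --prefix=/opt/foo-1.0 && make && sudo make install
cd ..

# New version, in a separate tree -- nothing overwrites the old install
tar xzf foo-2.0.tar.gz && cd foo-2.0
./configure --prefix=/opt/foo-2.0 && make && sudo make install

# Test-drive either one by full path:
#   /opt/foo-1.0/bin/foo   vs   /opt/foo-2.0/bin/foo
```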
may the source be with you
Everything DH just said so well!
Every source package in the repository (for the distributions I use; there may be exceptions) will contain exactly the build configuration and version (and install options, and often the script) used for the installation of the binary package. While building from source involves a build or compile step, if you select the same build configuration as the maintainer then you have done the same build and install, and the binary installation will be EXACTLY the same (other than the date stamps on the files). I would, therefore, say that "install from source" is an accurate terminology.
Unless you are running a source based distribution, I see no reason for this kind of source package install. For Debian or Fedora/RH based systems (the great majority: including the entire Centos, scientific, and *untu families) it is unjustified. The ONLY reason for downloading and installing from source FROM the repository is if you are going to change the build configuration from the default, apply a patch that is NOT in the repos to change the behavior, or do some custom coding.
I see EXCELLENT reason for installing from source if any of those exceptions apply, or if you need a version that is NOT in your repos and a build and install from original source is your only path. In that case you are going to versions and configurations that your package manager cannot properly manage and maintain as you need them, and you are going to have to take over that management manually anyway. This is something a programmer, sysadmin, or serious power user should have as an option: not something every user should consider.
Advantage: you can custom fit the software for your hardware and usage. (Speed and size optimization)
Disadvantage: it adds to your workload. FOREVER.
A side-by-side source-install example, for clarity?
This thread, and especially Dark_Helmet's explanation, was very informative.
Now I'd really love to have an example of how to maintain a package installed from source.
I think it would help a lot of people if there were a decent hand-holding guide for "maintaining packages installed from source".
However, I can't seem to coax Google into finding such a guide for the two distros I use: CentOS (server) and Ubuntu (desktop).
As an example, there is a fine editor/dev environment that is somehow not popular among Linux users for whatever reason, called SETEdit ( http://setedit.sourceforge.net/ ), which depends on TVision ( http://tvision.sourceforge.net/ ). It is an open-source port of the likeable old Turbo Pascal IDE from Borland, on which a lot of devs learnt programming in university.
Now both need to be compiled from source, as the latest versions of CentOS and Ubuntu don't have them (and it doesn't always work to change a distro for an editor/dev environment).
So if I want to compile tv and setedit from source (setedit obviously needs tv installed to display the widgets that make up setedit), and I also want to future-proof my installs by allowing for multiple versions of both, then is this the way to do it, roughly...
Then it would install set0.5.4 in /opt/set0.5.4
I think I might have missed a step: telling the set0.5.4 installation that it should pick up tv2.0.3 from /opt/tv2.0.3.
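Assuming both projects use an autotools-style configure script (the exact flag names can differ per package, so check ./configure --help), the missing step is usually pointing the compiler and linker at the /opt tree, something like:

```shell
cd setedit-0.5.4
CPPFLAGS="-I/opt/tv2.0.3/include" \
LDFLAGS="-L/opt/tv2.0.3/lib -Wl,-rpath,/opt/tv2.0.3/lib" \
./configure --prefix=/opt/set0.5.4
make && sudo make install
# The -rpath bakes /opt/tv2.0.3/lib into the binary, so it finds
# that exact tv at run time without touching /etc/ld.so.conf
```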
Now imagine that, after six months, the tv project releases an updated version, tv2.0.4, and it actually breaks a nice feature, SetX, in set0.5.4.
I want tv2.0.4 for some crucial updates, but I also want my set0.5.4 working fine as it has been.
So this time, I repeat the above steps, but with new locations.
I make-make-install tv2.0.4 to /opt/tv2.0.4, and I compile set0.5.4 (the same old version) against tv2.0.4 (which removes feature SetX but adds some nice bling to the tv widgets themselves), telling it to pick up tv from /opt/tv2.0.4.
This new set version I install into /opt/set0.5.4-1
So now I have:
- /opt/tv2.0.3 and /opt/tv2.0.4
- /opt/set0.5.4 (built against tv2.0.3) and /opt/set0.5.4-1 (built against tv2.0.4)
This would be really really neat.
I looked it up and found that there is a distro that does this: GoboLinux.
But alas! I have to use Centos on server, and Ubuntu on desktop, for several reasons.
So switching to GoboLinux is out of the question.
The other option I have is using portable Linux apps, but that has a long learning curve and is a whole project in itself, not a day's work.
And "FatELF", which promised a lot, is now closed down.
So, can I pull off the above?
I have faced this scenario a couple of times for packages far more important than setedit: where a single piece of software absolutely essential for the server is not in the official repos, or installing other repos isn't allowed.
If this multiple-version side-by-side /opt solution works, that would be the best thing to happen in Linux in some time, and solve a lot of maintenance issues.
Note that I also don't have to bother uninstalling an obsolete version from /opt.
I just install a new version and simply don't use the old one!
Apologies for the long post, but this couldn't possibly have been shortened (without making it less Google-searchable).
Is it better to use /usr/local for this purpose instead of /opt? I personally think not, because /opt seems cleaner and further away from the distro-maintained core system than /usr/local. The further away any custom things are, the better, IMO.
Another possibility that struck me on some thought: why not simply create a /home/user/bin and install both tv and set into it, like so:
And then, adding /home/user/bin to the system path...
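Assuming the two packages were configured with --prefix=$HOME/bin/tv2.0.3 and --prefix=$HOME/bin/set0.5.4, the PATH addition (e.g. at the end of ~/.bashrc) would look like:

```shell
# Prepend the per-user install trees to PATH
export PATH="$HOME/bin/set0.5.4/bin:$HOME/bin/tv2.0.3/bin:$PATH"
```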
That way absolutely nothing is touched in the core system.
Would this approach (/home/user/bin) work with a server package, like say a web proxy server: nginx/squid/lighttpd/etc.?
Or do these simply have to be installed into /opt, /usr/local at best, and from the native package manager at worst?
If I could locally install and compile from source into /home/user/bin/nginx-x.y.A, /home/user/bin/nginx-x.y.B, etc. I could easily test multiple versions and configurations while being fully assured that I broke nothing in the server!
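For nginx specifically this should work, since its configure script takes a --prefix and an unprivileged user can listen on ports above 1024. A sketch, keeping the x.y.A version placeholder from above:

```shell
tar xzf nginx-x.y.A.tar.gz && cd nginx-x.y.A
./configure --prefix="$HOME/bin/nginx-x.y.A"
make && make install        # no sudo needed: everything stays in $HOME
# Edit conf/nginx.conf to listen on an unprivileged port (e.g. 8080), then:
"$HOME/bin/nginx-x.y.A/sbin/nginx"
```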
That would be a great thing.
Many thanks for going through this long post, and more so if you can help with a reply.