Binary compatibility across kernels, Linux distributions, libraries
I am using a central development host to build binary executables, which are then deployed into production roles on other Linux hosts. Until recently, the development host & production hosts were from the same Linux distribution (Scientific Linux 4.x). With the release of a newer version of Scientific Linux (5.x), it has become desirable to deploy production hosts on the newer version. This raises the question of whether it is acceptable to simply copy the binary executables to the production hosts, given the different kernels (all 2.6.xxx versions), system library versions, and possibly other unknown differences.
What are the rules for predicting binary compatibility in this scenario? The executables are all built from C/C++ sources using gcc/g++, and some are statically linked, others use shared object libraries. Where the binaries are linked against system shared object libraries, the problem appears to be solved by creating symbolic links with the required names/versions to the newer libraries, but I want to be sure of whether this is a proper solution. Scientific Linux is a re-build clone of RHEL.
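One way to sanity-check whether a renamed symlink can actually satisfy a binary is to inspect the binary itself. This sketch uses /bin/ls as a stand-in for one of the deployed executables; substitute your own binary's path:

```shell
# Using /bin/ls as a stand-in for one of the deployed executables:
# list the shared libraries the binary expects to find at run time.
ldd /bin/ls

# A symlink satisfies the loader's name lookup, but the library behind it
# must still export the versioned symbols the binary was linked against.
# List the glibc symbol versions this binary requires:
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u -V
```

If the library a symlink points at provides every symbol version in that list, the loader will be satisfied; if not, the program fails at load time regardless of what the file is named.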
It is near impossible to create duplicate development systems for all versions of Linux that will be used in production, so that is the very least preferred solution, if one is required. It may be possible to modify the build system (which is already very complex) to cross-compile/build for alternate system configurations, but this, too, would be an onerous task.
Does static linking provide any immunity to binary incompatibility, at all?
The scenario in question is for products used in-house only, so there is a fair degree of control of the number of variables. The products in question are used in a large machine control system, so the consequence of software failure can be significant. I'm hoping someone can point me at a definitive reference document that can provide answers to these questions. Input from people with firsthand knowledge in these matters would be greatly appreciated.
--- rod.
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
Rep:
There is, by design, no guarantee of compatibility between GNU/Linux kernels, distributions and/or releases. Of course, that doesn't mean it won't work; it usually does, but you are on your own.
Static linking isn't recommended. Your application may break if the statically linked objects aren't compatible with the running kernel, and you end up with larger binaries that bundle libraries whose vulnerabilities can't be patched independently of your application.
The only reliable way is to build a package for each supported distribution release.
Jillagre, thank you for your input.
Upon further reflection, I note that many of us routinely upgrade kernels, often through automated processes such as nightly yum updates (this is not enabled on the systems I described above). Given what you say, it would seem that doing so would put any and all installed binaries in peril of breaking. Are you saying that this is a behavior we should expect as a possible result of kernel upgrades? I think most people would be surprised at that; I know it isn't something I would expect, except for new bugs that get introduced with new code. Without using 'crash & burn' testing, there must be some way of predicting the likelihood of a kernel or system library upgrade being compatible with installed object code.
With respect to static vs. dynamic linking, it seems to me that one case creates a possible conflict between the kernel and the static binary image, whereas the other case creates a potential clash between the image binary and the installed (or absent) libraries. Is this your belief, and if so, should a person prefer one possibility of failure over another?
In this instance, correcting faulty code is always done by a complete rebuild from source, and patching on the target production host is never done. The larger size of statically linked binaries is seen as inconsequential in the environment to which I am referring.
You should have very few problems with C code, but C++ carries the additional dependency on the gcc/libstdc++ runtime libraries, which may cause problems when the target distro uses a different version of glibc/gcc. Kernel version differences will only rarely cause problems.
Quote:
Originally Posted by theNbomr
Upon further reflection, I note that many of us routinely upgrade kernels, often through automated processes such as nightly yum updates (this is not enabled on the systems I described above). Given what you say, it would seem that doing so would put any and all installed binaries in peril of breaking. Are you saying that this is a behavior we should expect as a possible result of kernel upgrades? I think most people would be surprised at that; I know it isn't something I would expect, except for new bugs that get introduced with new code.
That wasn't what I meant. Compatibility in that case is usually there, and breaking it is generally considered a bug when the upgrade concerns a minor version. A major version upgrade is more likely to introduce incompatible changes. However, applications often rely on uncommitted/non-standard interfaces, and this fact only shows up when the interface disappears or changes after an upgrade.
For example, two years ago Ubuntu switched its default shell (/bin/sh) from bash to dash. That was a smart move, because dash is faster than bash and complies with the Linux Standard Base (LSB) requirement that /bin/sh be POSIX compliant. However, it broke a large number of applications that wrongly assumed /bin/sh was /bin/bash and relied on bash "proprietary" features, a.k.a. bashisms.
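As an illustration of that failure mode (demo.sh is a made-up name), here is a script that uses a bash-only construct under #!/bin/sh; it works wherever sh is bash and breaks where sh is dash:

```shell
cat > demo.sh <<'EOF'
#!/bin/sh
# ${1^^} (uppercasing expansion) is a bash extension; POSIX sh does not define it.
echo "arg: ${1^^}"
EOF

bash ./demo.sh hello   # bash accepts the bashism: prints "arg: HELLO"
# On distros where /bin/sh is dash, the same line is a fatal "Bad substitution":
sh ./demo.sh hello || echo "sh rejected the bashism"
```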
Also, you do not tell what kind of software you develop and what are the APIs it relies on. If you are writing a device driver (i.e. a kernel module), then the risk it breaks after a kernel upgrade is real.
Quote:
Without using 'crash & burn' testing, there must be some way of predicting the likelihood of a kernel or system library upgrade being compatible with installed object code.
I don't see any 100% reliable alternative.
Quote:
With respect to static vs. dynamic linking, it seems to me that one case creates a possible conflict between the kernel and the static binary image, whereas the other case creates a potential clash between the image binary and the installed (or absent) libraries.
It is not just the kernel, but any dependency the library may have on some file, feature, behavior, or service that might have evolved or be implemented differently between distributions.
Quote:
That wasn't what I meant. Compatibility in that case is usually there, and breaking it is generally considered a bug when the upgrade concerns a minor version. A major version upgrade is more likely to introduce incompatible changes. However, applications often rely on uncommitted/non-standard interfaces, and this fact only shows up when the interface disappears or changes after an upgrade.
Can you explain your use of the terms major & minor versions? Are you referring to kernel versions, distro name versions, library versioning, other?
I seem to recall reading about the meaning of the various numerical elements of the kernel version numbers, as it relates to upward compatibility. Do you know this convention or where I might find it documented? Can one apply similar logic to distro & library version naming as a predictor of compatibility?
Quote:
Also, you do not tell what kind of software you develop and what are the APIs it relies on. If you are writing a device driver (i.e. a kernel module), then the risk it breaks after a kernel upgrade is real.
I don't see any 100% reliable alternative.
It is not just the kernel, but any dependency the library may have on some file, feature, behavior, or service that might have evolved or be implemented differently between distributions.
The code in question uses mostly standard C library API, with a fair degree of Berkeley Sockets code, POSIX threads, POSIX IPC, and a small number of simpler common APIs such as readline, getopt, curses and the like. To my mind, these APIs seem likely to be quite stable by this point in their history.
The code comprises a distributed control system used in experimental physics.
--- rod.
Quote:
Originally Posted by theNbomr
Can you explain your use of the terms major & minor versions? Are you referring to kernel versions, distro name versions, library versioning, other?
All of the above. It is a widespread convention in the software engineering world.
Quote:
I seem to recall reading about the meaning of the various numerical elements of the kernel version numbers, as it relates to upward compatibility. Do you know this convention or where I might find it documented? Can one apply similar logic to distro & library version naming as a predictor of compatibility?
The code in question uses mostly standard C library API, with a fair degree of Berkeley Sockets code, POSIX threads, POSIX IPC, and a small number of simpler common APIs such as readline, getopt, curses and the like. To my mind, these APIs seem likely to be quite stable by this point in their history.
Indeed. Sticking to POSIX APIs is obviously a good practice when portability/compatibility is a requirement.
Quote:
Originally Posted by Jillagre
All of the above. It is a widespread convention in the software engineering world.
Absolutely. http://en.wikipedia.org/wiki/Software_versioning
Indeed. Sticking to POSIX APIs is obviously a good practice when portability/compatibility is a requirement.
Yes, I understand the concept as used in general terms, but I was looking for specific rules about what degree of compatibility to expect across revisions and revision types, specifically as the convention applies to the kernel and GNU-style libraries.
I would never consider putting code into production across even a minor version change (as described in the Wikipedia article), let alone a major one. If the 'build' number only changes for bug fixes, then one should not expect compatibility issues across 'builds', except in the hopefully rare cases where bug fixes break dependent applications.
Thanks for your input.
--- rod.