Old 05-21-2008, 10:30 AM   #1
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Binary compatibility across kernels, Linux distributions, libraries


I am using a central development host to build binary executables, which are then deployed into production roles on other Linux hosts. Until recently, the development host & production hosts were from the same Linux distribution (Scientific Linux 4.x). With the release of a newer version of Scientific Linux (5.x), it has become desirable to deploy production hosts on the newer version. This raises the question of whether it is acceptable to simply copy the binary executables to the production hosts, given the different kernels (all 2.6.xxx versions), system library versions, and possibly other unknown differences.
What are the rules for predicting binary compatibility in this scenario? The executables are all built from C/C++ sources using gcc/g++; some are statically linked, others use shared object libraries. Where the binaries are linked against system shared object libraries, the problem appears to be solved by creating symbolic links with the required names/versions to the newer libraries, but I want to be sure whether this is a proper solution. Scientific Linux is a re-build clone of RHEL.
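For what it's worth, I can at least verify on a given target host that the required shared objects resolve at run time, with a small probe along these lines (the library names in libs[] are placeholders for illustration; the real list would come from running ldd against the deployed binary):

Code:
/* probe-libs.c: check at run time that the shared objects a binary
 * needs can be resolved on this host.  Build: gcc probe-libs.c -ldl
 */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Placeholder names; take the real list from ldd output. */
    static const char *libs[] = { "libm.so.6", "libncurses.so.5" };
    int i, ok = 1;

    for (i = 0; i < (int)(sizeof libs / sizeof libs[0]); i++) {
        void *h = dlopen(libs[i], RTLD_NOW);   /* resolve all symbols now */
        if (h == NULL) {
            fprintf(stderr, "unresolved: %s: %s\n", libs[i], dlerror());
            ok = 0;
        } else {
            dlclose(h);
        }
    }
    return ok ? 0 : 1;
}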
It is nearly impossible to create duplicate development systems for all versions of Linux that will be used in production, so that is the least preferred solution, if one is required. It may be possible to modify the build system (which is already very complex) to cross-compile/build for alternate system configurations, but this, too, would be an onerous task.
Does static linking provide any immunity to binary incompatibility, at all?
The scenario in question is for products used in-house only, so there is a fair degree of control of the number of variables. The products in question are used in a large machine control system, so the consequence of software failure can be significant. I'm hoping someone can point me at a definitive reference document that can provide answers to these questions. Input from people with firsthand knowledge in these matters would be greatly appreciated.
--- rod.

Last edited by theNbomr; 05-21-2008 at 10:33 AM.
 
Old 05-21-2008, 12:30 PM   #2
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

There is, by design, no guarantee of compatibility between GNU/Linux kernels, distributions, and releases. Of course, that doesn't mean it won't work; it usually does, but you are on your own.

Static linking isn't recommended. Your application may break if the statically linked objects aren't compatible with the running kernel, and you end up with larger binaries that bundle libraries with potential vulnerabilities that can't be patched.

The only reliable way is to build a package for each supported distribution release.
 
Old 05-21-2008, 01:27 PM   #3
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399

Original Poster
Blog Entries: 2

jlliagre, thank you for your input.
Upon further reflection, I note that many of us routinely upgrade kernels, often through automated processes such as nightly yum updates (this is not enabled on the systems I described above). Given what you say, it would seem that doing so would put any and all installed binaries in peril of breaking. Are you saying that this is a behavior we should expect as a possible result of kernel upgrades? I think most people would be surprised at that; I know it isn't something I would expect, except for new bugs that get introduced with new code. Without using 'crash & burn' testing, there must be some way of predicting the likelihood of a kernel or system library upgrade being compatible with installed object code.
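In the meantime, the least I can do is make the environment visible: have each binary report the kernel it finds itself running on, say at daemon start-up, so a kernel upgrade on a production host leaves a trace in the logs next to any subsequent failure. A rough sketch using the standard uname(2) interface:

Code:
/* env-report.c: print the kernel version and architecture found at
 * start-up, so environment drift on production hosts is visible.
 */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }
    printf("%s %s %s (%s)\n", u.sysname, u.release, u.version, u.machine);
    return 0;
}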
With respect to static vs. dynamic linking, it seems to me that one case creates a possible conflict between the kernel and the static binary image, whereas the other case creates a potential clash between the image binary and the installed (or absent) libraries. Is this your belief, and if so, should a person prefer one possibility of failure over another?
In this instance, correcting faulty code is always done by a complete rebuild from source; patching on the target production host is never done. The larger size of statically linked binaries is seen as inconsequential in the environment to which I am referring.

--- rod.

Last edited by theNbomr; 05-21-2008 at 01:28 PM.
 
Old 05-21-2008, 01:43 PM   #4
gnashley
Amigo developer
 
Registered: Dec 2003
Location: Germany
Distribution: Slackware
Posts: 4,928

You should have very few problems with C code, but C++ carries the additional dependencies of the gcc/libstdc++ runtime libraries, which may cause problems when the target distro uses a different version of glibc/gcc. Kernel version differences will only rarely cause problems.
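A cheap sanity check along those lines is to compare the glibc version a binary was compiled against with the one it finds at run time; a mismatch doesn't prove breakage, but it flags hosts worth testing first. A minimal sketch; note that gnu_get_libc_version() and the version macros are glibc-specific:

Code:
/* glibc-check.c: print the glibc version the binary was built
 * against (from the headers) and the one actually loaded at run
 * time.  Both interfaces exist only on glibc systems.
 */
#include <stdio.h>
#include <gnu/libc-version.h>

int main(void)
{
    printf("built against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    printf("running with glibc %s\n", gnu_get_libc_version());
    return 0;
}

For C++, comparing the libstdc++ soname that ldd reports on the build host with the one on the target host gives a similarly quick check.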
 
Old 05-22-2008, 11:01 AM   #5
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Quote:
Originally Posted by theNbomr View Post
Upon further reflection, I note that many of us routinely upgrade kernels, often through automated processes such as nightly yum updates (this is not enabled on the systems I described above). Given what you say, it would seem that doing so would put any and all installed binaries in peril of breaking. Are you saying that this is a behavior we should expect as a possible result of kernel upgrades? I think most people would be surprised at that; I know it isn't something I would expect, except for new bugs that get introduced with new code.
That wasn't what I meant. Compatibility in that case is usually maintained, and breaking it is generally considered a bug when the upgrade concerns a minor version. A major version upgrade is more likely to introduce incompatible changes. However, applications often rely on uncommitted/non-standard interfaces, and that fact only shows up when the interface disappears or changes after an upgrade.
For example, two years ago Ubuntu switched from bash to dash as its default shell (/bin/sh). That was a smart move, because dash is faster than bash and complies with the Linux Standard Base (LSB) requirement that /bin/sh be POSIX compliant. However, it broke a large number of applications that wrongly assumed /bin/sh was /bin/bash and relied on bash-specific features, a.k.a. bashisms.

Also, you don't say what kind of software you develop or which APIs it relies on. If you are writing a device driver (i.e. a kernel module), the risk that it breaks after a kernel upgrade is real.
Quote:
Without using 'crash & burn' testing, there must be some way of predicting the likelihood of a kernel or system library upgrade being compatible with installed object code.
I don't see any 100% reliable alternative.
Quote:
With respect to static vs. dynamic linking, it seems to me that one case creates a possible conflict between the kernel and the static binary image, whereas the other case creates a potential clash between the image binary and the installed (or absent) libraries.
It is not just the kernel: any dependency the library has on some file, feature, behavior, or service might have evolved or be implemented differently between distributions.

Last edited by jlliagre; 05-22-2008 at 11:02 AM.
 
Old 05-22-2008, 12:40 PM   #6
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399

Original Poster
Blog Entries: 2

Quote:
Originally Posted by jlliagre View Post
That wasn't what I meant. Compatibility in that case is usually maintained, and breaking it is generally considered a bug when the upgrade concerns a minor version. A major version upgrade is more likely to introduce incompatible changes. However, applications often rely on uncommitted/non-standard interfaces, and that fact only shows up when the interface disappears or changes after an upgrade.
Can you explain your use of the terms major & minor versions? Are you referring to kernel versions, distro name versions, library versioning, other?
I seem to recall reading about the meaning of the various numerical elements of the kernel version numbers, as it relates to upward compatibility. Do you know this convention or where I might find it documented? Can one apply similar logic to distro & library version naming as a predictor of compatibility?
Quote:
Also, you don't say what kind of software you develop or which APIs it relies on. If you are writing a device driver (i.e. a kernel module), the risk that it breaks after a kernel upgrade is real.
I don't see any 100% reliable alternative.
It is not just the kernel: any dependency the library has on some file, feature, behavior, or service might have evolved or be implemented differently between distributions.
The code in question uses mostly standard C library API, with a fair degree of Berkeley Sockets code, POSIX threads, POSIX IPC, and a small number of simpler common APIs such as readline, getopt, curses and the like. To my mind, these APIs seem likely to be quite stable by this point in their history.
The code comprises a distributed control system used in experimental physics.
--- rod.

Last edited by theNbomr; 05-22-2008 at 05:00 PM.
 
Old 05-22-2008, 03:48 PM   #7
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Quote:
Originally Posted by theNbomr View Post
Can you explain your use of the terms major & minor versions? Are you referring to kernel versions, distro name versions, library versioning, other?
All of the above. It is a widespread convention in the software engineering world.
Quote:
I seem to recall reading about the meaning of the various numerical elements of the kernel version numbers, as it relates to upward compatibility. Do you know this convention or where I might find it documented? Can one apply similar logic to distro & library version naming as a predictor of compatibility?
Absolutely.
http://en.wikipedia.org/wiki/Software_versioning
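As an aside, the kernel encodes that major.minor.patch convention in its headers, so a build can at least assert a minimum headers version at compile time. A small sketch; the 2.6.9 floor is only an illustrative value, and this reflects the headers used at build time, not the kernel the binary eventually runs on:

Code:
/* kver.c: <linux/version.h> packs the headers' major.minor.patch
 * into LINUX_VERSION_CODE; KERNEL_VERSION() builds a comparable code.
 */
#include <stdio.h>
#include <linux/version.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 9)
#error "these sources assume at least 2.6.9 kernel headers"
#endif

int main(void)
{
    printf("built against kernel headers %d.%d.%d\n",
           LINUX_VERSION_CODE >> 16,
           (LINUX_VERSION_CODE >> 8) & 0xff,
           LINUX_VERSION_CODE & 0xff);
    return 0;
}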
Quote:
The code in question uses mostly standard C library API, with a fair degree of Berkeley Sockets code, POSIX threads, POSIX IPC, and a small number of simpler common APIs such as readline, getopt, curses and the like. To my mind, these APIs seem likely to be quite stable by this point in their history.
Indeed. Sticking to POSIX APIs is obviously a good practice when portability/compatibility is a requirement.
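One way to enforce that discipline mechanically is a POSIX feature-test macro, so accidental reliance on GNU-specific extensions fails on the build host rather than on a production one. A minimal sketch using POSIX threads; build with gcc -std=c99 posix-only.c -lpthread:

Code:
/* posix-only.c: restrict the compilation environment to POSIX
 * interfaces; code that strays into GNU extensions then fails to
 * compile here instead of misbehaving on another host.
 */
#define _POSIX_C_SOURCE 200112L

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    puts("worker running");
    return NULL;
}

int main(void)
{
    pthread_t t;

    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(t, NULL);
    return 0;
}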
 
Old 05-22-2008, 04:59 PM   #8
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399

Original Poster
Blog Entries: 2

Quote:
Originally Posted by jlliagre View Post
All of the above. It is a widespread convention in the software engineering world.
Absolutely.
http://en.wikipedia.org/wiki/Software_versioning
Indeed. Sticking to POSIX APIs is obviously a good practice when portability/compatibility is a requirement.
Yes, I understand the concept as used in general terms, but I was looking for specific rules about what degree of compatibility to expect across revisions and revision types, specifically as applied to the kernel and GNU-style libraries.
I would never consider putting code into production across even a minor version change (as described in the Wikipedia article), let alone a major one. If the 'build' number only changes for bug fixes, then one should not expect compatibility issues across 'builds', except in the hopefully rare cases where bug fixes break dependent applications.
Thanks for your input.
--- rod.
 
  

