LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - General
Old 05-24-2012, 08:55 PM   #1
kodekata
LQ Newbie
 
Registered: Mar 2011
Posts: 6

Rep: Reputation: 0
Trying to understand cross compiling for ARM on x86, and why some distributions don't


Hello all,

My post is pretty long and I have quite a few questions. If you feel the need to tell me to RTFM, please _do_, but link to the manuals I need to read!

Recently I have been noticing interest in ARM, due to the likes of the Raspberry Pi, BeagleBone, and friends. In particular, the RPi seems to be lacking a distribution which packages binaries compiled with armhf "hard float" support for ARMv6. (I believe Debian and Ubuntu are aiming to ship such software, but only for ARMv7 chips.)

I am pretty new to compilers, toolchains and cross-compiling. However, I have used SDKs from Palm and Apple which use cross compilers (GCC) to target ARM for C/C++. I know there are "gotchas" to look out for, such as word size, instruction set, endianness, etc. Also, much *nix software ships with a "configure" script which generates makefiles and other scripts and assets to match the host environment, on the assumption that the host and target machines are one and the same. So obviously the naive "configure && make && make install" is out of the question.

My real question is: why do some distributions, such as Debian and Fedora, have a policy of building packages only with a native toolchain? (I.e., the distribution's packages must be built on the architecture they are deployed to.)

Is there no effort to sanitize the build environment such that it is as much as possible not dependent on irrelevant system state? Don't pbuilder and the like accomplish this?

And I suppose, what I would really like to know is, how close must the environments be? If I build a GNU toolchain on a POSIX system (like Darwin) on e.g. an ARM Cortex-A8 (ARMv7) architecture, could it be safely used to build Linux binaries for the same hardware?

Also, if I want to do "native" compilation with the same software environment (e.g. build Debian packages on a Debian system), is it sufficient (and necessary?) for the host arch's instruction set to be a superset of the target arch's instruction set, given that they have the same endianness and word size?

If you can provide any answers, whether technological or more political/traditional/historical, I am eager to hear them. I know I am asking a lot, but I am reading a lot too, and I don't know how many of these are issues with distributions, the GNU toolchain, the hardware, or whatever. Please help me look in the right direction!

Kind Regards,
Tim
 
Old 05-25-2012, 08:12 PM   #2
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Rep: Reputation: 908
Cross development isn't too difficult once you have a good, working cross toolchain. Creating one (or more than one) is the hard part. It's hard for a few reasons:
  • There are a lot of components that all have to work together, and every one of them has a bunch of different versions.
  • There are a lot of different target architectures, each with its own particular differences. Problem squared for the ARM architecture.
  • You have to start from tools that normally expect to build native code. Some of the things you are building want to be installed over top of the things you're using to build them, and getting that wrong can have very bad effects.
  • It's a big thing to build, and it takes a lot of time, effort, disk space and CPU cycles just to complete a fully working toolchain build.

Quote:
My real question is why do some distributions, such as Debian and Fedora, have a policy of building packages only with a native toolchain? (Ie., the distribution's packages must be built on the architecture for deployment.)
There are just way too many possible cross toolchain configurations for anyone to be able to provide a universal selection. So there are tools like Crosstool-NG that do a good job of taking the pain out of the process. Even using such a tool, you will need to have some patience, persistence, and probably some help.
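(For anyone curious, the Crosstool-NG workflow mentioned above looks roughly like this. A sketch only: `arm-unknown-linux-gnueabi` is just one of the ARM samples that ships with the tool, the interactive and long-running steps are commented out, and the guard lets the sketch degrade gracefully where `ct-ng` isn't installed.)

```shell
if command -v ct-ng >/dev/null 2>&1; then
    ct-ng list-samples                  # show the preconfigured targets shipped with ct-ng
    ct-ng arm-unknown-linux-gnueabi     # start from a known-good ARM sample
    # ct-ng menuconfig                  # (interactive) tweak component versions, float ABI, etc.
    # ct-ng build                       # fetch, patch and build the whole chain -- slow!
    status="ct-ng found"
else
    status="ct-ng not installed"
fi
echo "$status"
```

The build step is where the patience comes in: it downloads and compiles binutils, GCC (twice), a C library, and friends, which can take an hour or more.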

A correctly built cross toolchain will be absolutely uninfluenced by its host architecture. It doesn't matter whether the build host is Linux-x86-64 or Solaris-SPARC or.... The target binaries produced should be identical when using a cross toolchain with identical components, even running on different build-host architectures. Having said this, it is probably difficult to get a set of identical toolchain components to build on different build hosts.

As I started out by saying, once you've got a working toolchain, cross-compiling isn't too much different from doing native builds. One of the big headaches is not the lack of cross toolchains in Linux distros, but the expectation by many FOSS developers that their code will always be built using native build tools. The 'configure' step is usually littered with build-host dependencies. This can add a lot of grief for developers. Also, dependency hell becomes a lot more annoying.

Still think you want to do cross development?
--- rod.
 
Old 05-26-2012, 09:44 PM   #3
kodekata
LQ Newbie
 
Registered: Mar 2011
Posts: 6

Original Poster
Rep: Reputation: 0
OK, so if I understand you correctly:

Setting up the cross toolchain is difficult, but doable. Apple, Palm, and others have been able to leverage cross development because the SDKs and the apps written for them are designed for it; i.e., they don't poke around the host environment to figure out how something should be built.

If done correctly, the binaries produced by a cross toolchain should be indistinguishable from natively-built binaries.

It's harder in the case of FOSS, because of the hard-coded host-environment dependencies in configure scripts, makefiles, and the like.

Distributions may introduce more host dependencies, or at least not mitigate the ones from upstream.

So, in summary: building binaries with a cross toolchain is hard, but packaging FOSS software (e.g. making .debs) on one arch for another is harder.
 
Old 05-26-2012, 10:17 PM   #4
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Rep: Reputation: 908
I think that's about right. Not sure what you mean by
Quote:
Distributions may introduce more host dependencies, or at least not mitigate the ones from upstream.
I think most cross development gets done in the embedded systems world. There, I don't think there is much emphasis on, or motivation for, creating packages like .debs and .rpms. Most people who do cross development have custom, proprietary, or at least small target applications. Also, most people doing cross development are pretty comfortable building applications from sources, so if there is some kind of distribution channel, it gets done that way.

I rarely think about any comparison to natively built applications for most targets. The reason I want to do cross development is usually because the target platform is unsuitable for doing development work. Otherwise, what you say is generally true.

--- rod.
 
Old 05-31-2012, 01:44 PM   #5
kodekata
LQ Newbie
 
Registered: Mar 2011
Posts: 6

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by theNbomr View Post
I think that's about right. Not sure what you mean by
Well, Ubuntu and Debian have ported their distributions to ARMv7, but apparently require building of binaries (and packaging?) to be done natively. I was just trying to play devil's advocate and ask why this is so.
 
Old 05-31-2012, 03:30 PM   #6
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Rep: Reputation: 908
I think that is simply the standard format for Linux distributions. To put it another way, if someone produced a distribution for any architecture, and you wanted the development system to be non-native, then the universe of possible development/build architectures is very large, and I don't know how you would choose which cross development platform to support. Multiply that by all of the supported architectures, and the matrix of runtime & development architectures becomes unmanageable.

--- rod.
 
  

