How are distributions rebuilt for different CPU architectures?
Hey there, Nathan here.
I was looking at how developers go about rebuilding whole distributions to run on different architectures.
Take Debian, for example: the Debian community supports PowerPC, MIPS, etc.
Is every package recompiled for each CPU platform individually, or do they have a massive script they edit and execute to build the different OSes?
I'm asking because it seems a little unintuitive to me either way.
Thanks for your reply, but I don't see it helping me in any particular way.
I understand the basics of cross compiling. I was asking how to build 300+ packages for a different architecture.
I wouldn't imagine you would have to do it manually, but I could be wrong...
The 'Introduction to cross compiling' is, however, a good read.
Aside from the complications associated with cross compiling, it isn't so much different from a native build. Of course, sometimes those complications can be significant. I think a lot of the original development gets done by hardware vendors who want Linux to run on their products; especially any custom hardware drivers, bootloaders, and application-specific code. Cross building kernels is relatively painless.
--- rod.
Well, actually, the first time you will compile every single package for a new arch.
Maybe later, once the base has been set up, you can use build scripts like SlackBuilds in Slackware to get some degree of automation.
But I don't think there is much more one can do than look at each package, sort out its dependencies, sort out the build order for your toolchain, and build one by one.
A good example may be Linux From Scratch, which describes the process and even has some automation sub-projects, but they all need to be adjusted for every major change in the sources or in important packages (like Xorg or GCC).
The link to http://www.armedslack.org/ I put in because there you can follow the process where someone has ported, or is porting, a distro to a new architecture.
The autobuilder network is an example of automation set up for a distro, but it also needs manual work if a build fails.
Cross-compiling is both commonplace and painless. The output of any compiler is "object code." It doesn't have to be "object code for the architecture that is running the compiler at the time."
For instance, a portable device might not be a very good platform for running the compiler that generates object code for that device. But it does not need to be.
Perhaps more commonly: your sexy multi-core latest-greatest Intel chip might be tasked with generating object files for a "vanilla 386." (Very rapidly...)
You have to do a lot more in most cases than simply compile from sources. Different arches may have any number of fewer or more components, and even something as simple as how the machine reads a number is important. Having the source code is a good start for a very skilled developer, but they have to have very detailed knowledge of the hardware to create an OS for that device.
Thanks for your replies.
I think I understand a lot better now.
So rebuilding a distro isn't something to be undertaken by a lone wolf like myself?
It sounds like it takes a lot of effort to rebuild something by the likes of Fedora or Ubuntu for different architectures.
I recently followed the LFS book, but after spending 6 weeks on it and restarting the project 8-9 times, I eventually grew tired of it.
I'm actually trying, as my first C++ project, to build my own distro from Tinycore and build my own window manager.
Wish me luck! Hopefully I will get something at the end of it, rather than a mangled ball of C++ code.
Quote:
Originally Posted by jefro
You have to do a lot more in most cases than simply compile from sources. Different arches may have any number of fewer or more components, and even something as simple as how the machine reads a number is important. Having the source code is a good start for a very skilled developer, but they have to have very detailed knowledge of the hardware to create an OS for that device.
I thought whoever creates the compiler is the one dealing with the endianness and the instruction set? Surely source code is source code once the compiler is ported (non-trivial, of course)?
Quote:
So rebuilding a distro isn't something to be undertaken by a lone wolf like myself?
It sounds like it takes a lot of effort to rebuild something by the likes of Fedora or Ubuntu for different architectures.
Indeed, that is what the distro managers do for all of us, and it is in fact an expert undertaking.
Quote:
I recently followed the LFS book, but after spending 6 weeks on it and restarting the project 8-9 times, I eventually grew tired of it.
As did I. Then, I discovered Gentoo, which is also a source-code based distro, albeit of a very different form and purpose. LFS (= Linux From Scratch) is in my opinion primarily an educational exercise, and as such I consider it to be a priceless exercise that every Linux geek should go through more than once.
Quote:
I'm actually trying, as my first C++ project, to build my own distro from Tinycore and build my own window manager.
Wish me luck! Hopefully I will get something at the end of it, rather than a mangled ball of C++ code.
A noble undertaking, and please do advise us of your progress. "Good luck!" It's perfectly reasonable to regard whatever you come up with as "a mangled ball," partly because every programmer looks upon his work more-or-less that way. Still, it will be well worth doing and we do look forward to it. As should you.
Quote:
I thought whoever creates the compiler is the one dealing with the endianness and the instruction set? Surely source code is source code once the compiler is ported (non-trivial, of course)?
Source code isn't always portable. There are many, many ways to build hardware dependencies into code, and many different ways of running a compiler. In ARM CPUs alone there is a plethora of combinations of features: endianness, ABIs, floating-point support, memory-management support, ARM versions, etc.
Throw in dependencies on standard C libraries, kernel versions, compiler versions (newer generally means stricter; what compiled on an older compiler often won't on a newer one), and the sheer volume of code, and you're bound to have problems. Just building a cross toolchain is an exercise not for the faint of heart.
Nope, it isn't easy.
Quote:
Originally Posted by naf546
I'm actually trying as my first C++ project to build my own distro from Tinycore
I did that about a year ago. I was amazed how much work it was, even though someone else had done all the heavy lifting. I had previously looked long and hard at many other distros, as well as at the possibility of rolling my own from scratch. I have no doubt whatsoever that TinyCore (actually the now defunct & GUI-less Microcore) was the best choice.
--- rod.