How are distributions rebuilt for different CPU architectures?
Hey there, Nathan here.
I was looking at how developers go about rebuilding whole distributions to run on different architectures. Take Debian, for example: the Debian community supports PowerPC, MIPS, etc. Is every package recompiled for each CPU platform individually, or do they have a massive script they edit and execute to build the different OSes? I'm asking because it seems a little unintuitive either way. Thanks for your time. Nathan. |
Have a look at this for the basics:
http://landley.net/writing/docs/cross-compiling.html and these for live coverage :) : http://www.armedslack.org/ https://twitter.com/drmozes |
Thanks for your reply, but I don't see how it helps me in any particular way.
I understand the basics of cross compiling. I was looking at how to build 300+ packages for a different architecture. I wouldn't imagine you would have to do it all manually, though I could be wrong... The 'Introduction to cross compiling' is a good read, however. |
Every package is recompiled for each CPU platform.
How else could it be done? |
Aside from the complications associated with cross compiling, it isn't so much different from a native build. Of course, sometimes those complications can be significant. I think a lot of the original development gets done by hardware vendors who want Linux to run on their products; especially any custom hardware drivers, bootloaders, and application-specific code. Cross building kernels is relatively painless.
--- rod. |
See Autobuilder network.
|
Well, actually, the first time you will compile every single package for the new arch.
Later, once the base has been set up, you can use build scripts like SlackBuilds in Slackware to get some degree of automation. But I don't think there is much more one can do than look at each package, sort out its dependencies, sort out the build order for your toolchain, and build one by one. A good example may be Linux From Scratch, which describes the process and even has some automation sub-projects, but they all need to be adjusted for every major change in the sources or in important packages (like xorg or gcc). I put in the link to http://www.armedslack.org/ because there you can follow the process as someone ports a distro to a new architecture. The autobuilder network is an example of automation set up for a distro, but it also needs manual work if a build fails. Another example of this is the openSUSE OBS: http://www.open-build-service.org/ |
Cross-compiling is both commonplace and painless. The output of any compiler is "object code." It doesn't have to be "object code for the architecture that is running the compiler at the time."
For instance, a portable device might not be a very good platform for running the compiler that generates object code for that device. But it does not need to be. Perhaps more commonly: your sexy multi-core latest-greatest Intel chip might be tasked with generating object files for a "vanilla 386." (Very rapidly...) |
In most cases you have to do a lot more than simply compile from source. Different arches may have any number of differing components, and even something as simple as how the CPU reads a number (its byte order) matters. Having the source code is a good start for a very skilled developer, but they also need very detailed knowledge of the hardware to bring an OS up on that device.
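The "how it reads a number" point is byte order (endianness). A quick Python illustration of how the same 32-bit integer is laid out differently in memory on little- versus big-endian targets:

```python
import struct

# The same 32-bit integer, packed for two different byte orders:
value = 0x01020304
little = struct.pack("<I", value)  # little-endian (e.g. x86)
big = struct.pack(">I", value)     # big-endian (e.g. classic PowerPC, MIPS)

print(little.hex())  # 04030201 -- least significant byte first
print(big.hex())     # 01020304 -- most significant byte first
```

Code that writes raw integers to disk or over the network on one arch and reads them back on another hits exactly this mismatch unless it converts explicitly.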
|
Thanks for your replies.
I think I understand a lot better now. So rebuilding a distro isn't something to be undertaken by a lone wolf like myself? It sounds like it takes a lot of effort to rebuild something like Fedora or Ubuntu for different architectures. I followed the LFS book, but after spending 6 weeks on it and restarting the project 8-9 times, I eventually grew tired of it. As my first C++ project I'm actually trying to build my own distro from Tinycore and write my own window manager. Wish me luck! Hopefully I will get something out of it at the end, rather than a mangled ball of C++ code :D |
Throw in dependencies on standard C libraries, kernel versions, compiler versions (newer generally means stricter; what compiled on older compilers often won't on newer ones), and the sheer volume of code, and you're bound to have problems. Just building a cross toolchain is an exercise not for the faint of heart. Nope, it isn't easy.
--- rod. |