LinuxQuestions.org
LinuxAnswers - the LQ Linux tutorial section.
Old 09-03-2012, 03:00 AM   #1
ReaperX7
Senior Member
 
Registered: Jul 2011
Distribution: LFS-SVN, FreeBSD 10.0
Posts: 3,218
Blog Entries: 15

Rep: Reputation: 832
Slackware from Scratch(?)


And now from out of left field...

This question is aimed more or less at Patrick and the more knowledgeable Slackware personnel: is there a way to build Slackware completely from the ground up, straight from its own source tree? And if so, what is the recommended procedure (if there is one) for getting everything compiled and installed from scratch, start to finish, possibly using the Slackware install disk (if it can be done that way)?
 
Old 09-03-2012, 04:55 AM   #2
NonNonBa
Member
 
Registered: Aug 2010
Distribution: Slackware
Posts: 61

Rep: Reputation: 21
IANPV*, but what is sure is that you can't use the install disk to compile anything: there's no compiler, no headers of any kind, and glibc is stripped down to the shared library.

The best way to begin what you request is, IMHO, to install a minimal Slackware reduced to the toolchain, then to rebuild each component using the official SlackBuilds (follow the Linux From Scratch book to know what you need to build, in which order, at each step).



* I Am (of course) Not Patrick Volkerding.
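The procedure suggested above could be sketched roughly as follows. This is a hypothetical dry run, not real Slackware tooling: the source-tree path and the package list are illustrative assumptions, and nothing is actually built; the commands are only printed. Only `upgradepkg --install-new` is the stock pkgtools invocation.

```shell
#!/bin/sh
# Dry-run sketch: rebuild the core toolchain with the official SlackBuilds,
# in the pass order the LFS book uses (binutils and gcc get a bootstrap
# pass, then a final pass against the new glibc).
# SRCTREE and the package list are assumptions for illustration.

SRCTREE=${SRCTREE:-/tmp/slackware-source}

dry_run() {
    for pkg in binutils gcc glibc binutils gcc; do
        # Real SlackBuilds live under per-series dirs (a/, d/, l/, ...),
        # hence the wildcard instead of a hard-coded series letter.
        echo "( cd $SRCTREE/*/$pkg && sh $pkg.SlackBuild )"
        echo "upgradepkg --install-new /tmp/$pkg-*.t?z"
    done
}

dry_run
```

The real build order (and the full set of circular dependencies) is exactly the part that is not written down anywhere and has to be discovered by trial and error.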
 
Old 09-03-2012, 05:27 AM   #3
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 5,259

Rep: Reputation: Disabled
I am doing exactly that as an exercise. I am creating a new ARM port which re-uses nothing at all from the ARMedslack distribution. There is no "build recipe" for Slackware, so I had to find out (with ample suggestions from Patrick) what I needed to take care of. Building a minimal root filesystem which contains all the needed build tools is something I have automated through scripting: I can cross-compile an ARM rootfs in one command, on my x86_64 server. It is not that difficult to change that into creating a minimal rootfs for another architecture, including x86 or x86_64.

So, building the toolchain from scratch is pretty doable. Building Slackware from scratch is a lot harder because of all the dependencies you have to pick up along the way. Slackware pkgtools do not handle dependency checking, which means that no dependency information at all is recorded in the packages or even build scripts. It is something you have to find out by trial and error and a lot of documenting.

I am still not finished, since I stopped having time for this around April 2012... but I intend to pick up the port again after Slackware 14 has been released.

The problem with a "Slackware from scratch" is that several of the packages in a Slackware tree will not compile if you would try them now. This has no effect on the binaries in Slackware - the current packages work fine, even if recompiling them is sometimes impossible without applying patches for newer compilers and C libraries etc...

The Slackware binary distribution is an evolutionary hand-crafted piece of art. Slackware is never recompiled in full for a new release.

If you want to be absolutely certain that every SlackBuild in the source tree can compile a package, you would have to deploy a continuous build server like Hudson, Jenkins or Cruise Control. But that is just not the way Slackware evolves.

Eric
 
4 members found this post helpful.
Old 09-03-2012, 10:32 PM   #4
Cultist
Member
 
Registered: Feb 2010
Location: Chicago, IL
Distribution: Slackware64 14.1
Posts: 777

Rep: Reputation: 102
Quote:
Originally Posted by Alien Bob View Post
I am doing exactly that as an exercise. I am creating a new ARM port which re-uses nothing at all from the ARMedslack distribution. There is no "build recipe" for Slackware, so I had to find out (with ample suggestions from Patrick) what I needed to take care of. Building a minimal root filesystem which contains all the needed build tools is something I have automated through scripting: I can cross-compile an ARM rootfs in one command, on my x86_64 server. It is not that difficult to change that into creating a minimal rootfs for another architecture, including x86 or x86_64.

So, building the toolchain from scratch is pretty doable. Building Slackware from scratch is a lot harder because of all the dependencies you have to pick up along the way. Slackware pkgtools do not handle dependency checking, which means that no dependency information at all is recorded in the packages or even build scripts. It is something you have to find out by trial and error and a lot of documenting.

I am still not finished, since I stopped having time for this around April 2012... but I intend to pick up the port again after Slackware 14 has been released.

The problem with a "Slackware from scratch" is that several of the packages in a Slackware tree will not compile if you would try them now. This has no effect on the binaries in Slackware - the current packages work fine, even if recompiling them is sometimes impossible without applying patches for newer compilers and C libraries etc...

The Slackware binary distribution is an evolutionary hand-crafted piece of art. Slackware is never recompiled in full for a new release.

If you want to be absolutely certain that every SlackBuild in the source tree can compile a package, you would have to deploy a continuous build server like Hudson, Jenkins or Cruise Control. But that is just not the way Slackware evolves.

Eric
Not to change the topic, but I was curious if you intend to make a Pi image for your new ARM build when it's ready? From what I understand ARMedslack isn't easy to put on the Pi (mine won't arrive for another 2-3 weeks so I don't know this from personal experience).
 
Old 09-03-2012, 10:42 PM   #5
NeoMetal
Member
 
Registered: Aug 2004
Location: MD
Distribution: Slackware
Posts: 106

Rep: Reputation: 19
I got armedslack running on the pi very quickly using images someone put up here: http://www.raspberrypi.org/phpBB3/vi...c316a67dd6025a


There is also an installer linked later in the thread that might achieve more optimal results, but it will take a while.

That being said, it doesn't take advantage of the hardware floating point on the Pi, among other things.
 
Old 09-03-2012, 10:56 PM   #6
ReaperX7
Senior Member
 
Registered: Jul 2011
Distribution: LFS-SVN, FreeBSD 10.0
Posts: 3,218
Blog Entries: 15

Original Poster
Rep: Reputation: 832
Eric, you bring up an interesting topic regarding how packages are built, but not always against the same toolchains.

I had figured from usage that this was only done in rare instances, and hardly ever in full, since, as you said, patches would need to be made and applied. I had noticed even in the LFS documentation that they have several upstream patches, mostly for the compilers, that need to be applied before compile time; some of them are not even used in Slackware, and some are.

However, this does raise a concern: maybe there should be, at least on an annual or biannual basis, a full rebuild from source of, at minimum, the base packages, just to test and make sure everything could be rebuilt if that were ever deemed necessary, plausible, or remotely viable for the core system.

Basically, do something like FreeBSD does: a code freeze and a full audit of the base system sources prior to releases.

Actually I'm kinda shocked Linux distributions aren't ever really audited this way. Hmm...

Last edited by ReaperX7; 09-03-2012 at 11:28 PM.
 
1 members found this post helpful.
Old 09-04-2012, 02:00 AM   #7
damgar
Senior Member
 
Registered: Sep 2009
Location: dallas, tx
Distribution: Slackware - current multilib/gsb Arch
Posts: 1,949
Blog Entries: 8

Rep: Reputation: 201
There have been several attempts at this, documented on LQ over the last few years. If it's something you really want to do, you might do a quick search, although what Eric just said always comes up. Sometimes the threads get quite ugly, if I remember right.
 
Old 09-04-2012, 03:02 AM   #8
ReaperX7
Senior Member
 
Registered: Jul 2011
Distribution: LFS-SVN, FreeBSD 10.0
Posts: 3,218
Blog Entries: 15

Original Poster
Rep: Reputation: 832
It's all based on the willingness to do the deed, really. Whatever any user says, we as users have to realize that whatever Patrick does is done. Although many people would say code freezes are bad, they need to realize that quality comes before quantity and without quality you have crap... period.

In fact, LFS has long code freezes for the packages they suggest for their stable build system and book. You should see their patch list as well.

Taking an audit, and freezing the code and packages of at least the base system (and maybe the base development system) to ensure things are completely up to date and fully ready for everyone from the simple to the advanced, isn't hurting anything. In fact, it can solve many problems: bugs, broken packages, compiler issues, etc. How else does FreeBSD remain a solid operating system? Ports, as they have openly admitted, can be and often do get broken, requiring many upstream patches and fixes. But through the code freeze and audit of the base and base-dev packages, they ensure that whatever patches are needed can be gathered, packages can be readily patched, and a quality system can be ensured for the end users.

This is where the Slackware tree could use a fork out of -current at the end of the beta cycle. When the time for a new release draws near, migrate everything in Slackware-current over to a Slackware-stable branch for the base and development packages, and freeze updates to those packages. Then release SlackBuilds and sources only in the -current branch, and full packages in the -stable branch, compiled against the frozen base and base-dev system. During the RC phase, patch the packages from upstream only as needed for compiler issues (if they arise), security updates, and stability improvements. Once the system is declared stable, migrate to a Slackware-release branch and assign a version number to the completed frozen code, migrate the patched and upgraded packages from Slackware-stable back to Slackware-current, remove the SlackBuilds, remove Slackware-stable, and resume active package updates to Slackware-current.

This is not a proposal for doing anything by the way, just a discussion on what could be done to make things better in some ways we might not have thought about.

It's like an insurance policy; realistically, we may never have considered that there may come a day when we are required to rebuild Slackware from the ground up. We hope that day doesn't come, but as anyone worth their salt will tell you: hope for the best, and prepare for the worst.

I would hope this, or something like this, has been considered for a future project for Slackware, but never acted upon, yet.

As much time as I've spent around the land of UNIX, I have seen things most people (especially one guy whose name I don't need to bring up again, but whom we all know and can't stand) don't see: each UNIX and UNIX-like system can contribute to the others, even with something as small and simple as ideas and methods, not just software.

I, myself, am no coder, nor a person who could fashion a distribution, but even I'll admit: if UNIX and UNIX-like systems of all types could work in cooperation and collaboration, and not squabble over licenses and the petty personal issues developers toss into the mix, UNIX and any UNIX-like system out there could be a great operating system, environment, and experience for users and admins alike.

Me, I'm seeing an idea done by BSD and thinking about how it could apply to a Linux distribution. That's innovation, even if it doesn't directly affect the kernel and operating system. Call me a heretic for thinking that, but who cares? It's part of the UNIX philosophy: do things simple and do things right.

Last edited by ReaperX7; 09-04-2012 at 03:19 AM.
 
1 members found this post helpful.
Old 09-04-2012, 04:18 AM   #9
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 5,259

Rep: Reputation: Disabled
Quote:
Originally Posted by ReaperX7 View Post
However, this does raise a concern that maybe there should be, at least maybe on an annual or bi-annual basis, a full rebuild from source for, at minimum, the base packages just to test and make sure everything could be rebuilt if ever deemed necessary, plausible, or remotely viable for the core system.
I do not see the relevance for Slackware?

Slackware is not a "build it from source" distro. Slackware is not even deterministically built from source; Slackware follows a natural evolutionary track, if you want.
The focus is on binary compatibility between the various components which make up the distro. You do not have to rebuild everything if you change (parts of) the tool chain. Only those components that need recompiling (to ensure binary compatibility) will be rebuilt. That is why you often see packages like the kernel, gcc, glibc, binutils and the like in the same ChangeLog update. We do not rebuild packages just to see if they can still be built. I mentioned Jenkins, Hudson and the like as tools to facilitate a possible process in distro maintenance, but that did not imply that Slackware should or even would follow.

Quote:
The basic Slackware maintenance philosophy is: if it ain't broke, don't fix it.
Eric
 
Old 09-04-2012, 04:23 AM   #10
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 5,259

Rep: Reputation: Disabled
Quote:
Originally Posted by ReaperX7 View Post
This is where the Slackware tree could use a fork out of -current at the end of the beta cycle. When the time for a new release draws near, migrate everything in Slackware-current over to a Slackware-stable branch for the base and development packages, and freeze updates to those packages. Then release SlackBuilds and sources only in the -current branch, and full packages in the -stable branch, compiled against the frozen base and base-dev system. During the RC phase, patch the packages from upstream only as needed for compiler issues (if they arise), security updates, and stability improvements. Once the system is declared stable, migrate to a Slackware-release branch and assign a version number to the completed frozen code, migrate the patched and upgraded packages from Slackware-stable back to Slackware-current, remove the SlackBuilds, remove Slackware-stable, and resume active package updates to Slackware-current.
The size of the Slackware team will prevent this kind of additional load from being accepted.

Quote:
It's like an insurance policy, and realistically, we may have never considered that there may come a day when we may be required to rebuild Slackware from the ground up. We hope that day doesn't come, but as any one worth their salt will tell you, hope for the best, and prepare for the worst.
Can you give any real-life scenario where this would become a relevant question?

Eric
 
1 members found this post helpful.
Old 09-04-2012, 05:37 AM   #11
ReaperX7
Senior Member
 
Registered: Jul 2011
Distribution: LFS-SVN, FreeBSD 10.0
Posts: 3,218
Blog Entries: 15

Original Poster
Rep: Reputation: 832
I understand, Eric, and yes, it would be a massive undertaking. But beyond the evolution of existing packages and basic binary compatibility, the question is: could the core of Slackware hypothetically ever be frozen in code and completely rebuilt as an audit of the existing code?
 
Old 09-04-2012, 06:08 AM   #12
BlackRider
Member
 
Registered: Aug 2011
Distribution: Slackware
Posts: 261

Rep: Reputation: 82
Quote:
Although many people would say code freezes are bad, they need to realize that quality comes before quantity and without quality you have crap... period.

I have found crappy results from some distributions which enforce freezing policies. What makes quality is not the method itself, but the quality standard the developers have set: if the policy freezes the system for six months but only fixes for "hardware-killer" bugs are accepted, your end release will still contain tons of critical software bugs.*

Slackware is successful here because they keep a golden quality standard: changes are made carefully, the cleanness of the system is looked after, and -current is actually tested for the whole life of the release.

On the other hand, I really like the idea of a side team of volunteers making SFS (Slackware From Scratch). This means more eyes looking at the innards of the distribution and making useful suggestions, or otherwise collaborating with Slackware Inc., without placing more workload on the main developers.

The problem... you need the volunteers to do so :-)

*Based on a true story.

Last edited by BlackRider; 09-04-2012 at 06:19 AM.
 
Old 09-04-2012, 06:42 AM   #13
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 5,259

Rep: Reputation: Disabled
Quote:
Originally Posted by BlackRider View Post
On the other hand, I really like the idea of a side team of volunteers making SFS (slackware from scratch). This means more eyes looking at the innards of the distribution and making useful suggestions or otherwise collaborating with Slackware Inc. without placing more workload on the main developers.

The problem... you need the volunteers to do so :-)
Yes, getting volunteers is the problem. With projects like this, with no immediate gain, the issue of volunteers leaving the team will soon become your primary concern. Lots of projects that started off with a nice set of goals move off-track for lack of perseverance. There are so many other nice things to spend your time on!

Having said that, you could take one of two directions. Either try the "Slackware from scratch" approach, where you develop a master build script and make sure, at the end of a Slackware development cycle, that all packages still compile when you start from scratch.
Or take a more granular approach: set up a Jenkins server (or any other type of Continuous Integration server), build packages around the clock on top of the full distro, check the logs for build failures, and act on those reports.

The second approach may be more sustainable in the long term.

Eric
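A minimal stand-in for that second approach needs nothing more than a loop that flags any build log missing a success marker. This is a sketch under stated assumptions: it presumes a hypothetical wrapper script appends "BUILD OK" to each log on success; the log directory, marker, and demo logs are all illustrative, not real Slackware or Jenkins tooling.

```shell
#!/bin/sh
# Sketch: scan per-package build logs and report the ones that did not
# finish with the success marker a (hypothetical) build wrapper appends.

LOGDIR=${LOGDIR:-$(mktemp -d)}

scan_logs() {
    for log in "$LOGDIR"/*.log; do
        [ -e "$log" ] || continue
        # A log counts as successful only if its last line is the marker.
        tail -n 1 "$log" | grep -q '^BUILD OK$' || echo "FAILED: $log"
    done
}

# Demo with two synthetic logs: only bar.log should be flagged.
printf 'compiling...\nBUILD OK\n' > "$LOGDIR/foo.log"
printf 'compiling...\ngcc: error\n' > "$LOGDIR/bar.log"
scan_logs
```

Run from cron (or a CI job) after a nightly rebuild pass, this gives the same "which SlackBuilds broke" signal a full Jenkins setup would, at a fraction of the setup cost.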
 
Old 09-04-2012, 06:43 AM   #14
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 5,259

Rep: Reputation: Disabled
Quote:
Originally Posted by ReaperX7 View Post
could Slackware at the core of the system hypothetically ever be frozen in code and completely rebuilt as an audit of the existing code?
No, Slackware's development simply does not follow this approach. There is no added value.

Eric
 
Old 09-04-2012, 07:55 AM   #15
chemfire
Member
 
Registered: Sep 2012
Posts: 70

Rep: Reputation: Disabled
Alien Bob,

I am not writing this to suggest that getting Slackware or Slackware64 to a state where a full source build can be done is a good use of Patrick's, the team's, or even the occasional contributors' limited resources at this time.

I just wanted to respond to:
Quote:
Can you give any real-life scenario where this would become a relevant question?
The arguments I can see for why it might one day be necessary are (the second admittedly hypothetical):
1. It would make porting to other platforms simpler. At the very least it would document the order everything needs to be built in, any circular dependencies that exist, and how to resolve them. Just that information would be a huge leg up for anyone doing a port.

2. In these days of hardware requiring signed boot loaders and the like, it's not hard to imagine the TPM of the future requiring any memory region not marked NX to be signed, or similar nonsense. If that is done at the hardware level, it might become a requirement to produce new binaries at some point.
 
  



Tags
init, slackware from scratch


