Old 06-28-2015, 11:18 AM   #31
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106

Quote:
Originally Posted by nobodino View Post
I didn't think trying to build Slackware from sources would create so much animosity among members.
My goal was just a personal challenge and, if possible, to share my experience:
- when you're well into your 50s, are you still smart enough to do anything valuable?
- after 20 years as a Slackware user, do I understand it sufficiently?
- what are the fundamental programs that make Slackware able to boot (the reason I trimmed it down to 25 packages)?
By doing it, I just wanted to help.
When I found packages that didn't build anymore, I only suggested a solution where possible; I didn't demand anything.
The purpose of this thread was not to talk about politics, religion, philosophy, or the way Slackware should be done.
Just consider it as an educational exercise.
Just consider it as a toolbox.
No need to look for a hidden message in it; LFS was just a tool, nothing else.
It doesn't aim to prove anything, except that it was feasible.
Those able to decide are smart enough to consider it valuable or not.
The others can use it or not.
Finally I don't care.
I was not dismissing your work. In fact, this is what I wrote in my earlier post:
Quote:
I do not dismiss, underestimate or underappreciate the work done by people who build Slackware from scratch; it is a fun experiment, and you will learn a lot about Slackware internals, compilers and source code patching
 
1 member found this post helpful.
Old 06-29-2015, 12:26 AM   #32
Didier Spaier
LQ Addict
 
Registered: Nov 2008
Location: Paris, France
Distribution: Slint64-15.0
Posts: 11,063

Rep: Reputation: Disabled
Disclaimer: please don't read anything politically motivated into what follows, let alone an intent to trigger a flame war (we've had enough of those recently, over futile motives).

I just came across "Reproducible Builds get funded by the Core Infrastructure Initiative" this morning, which in turn led me to ReproducibleBuilds.
 
1 member found this post helpful.
Old 06-29-2015, 03:05 AM   #33
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
To be honest, I loved the work done. It's educational not only in showing how packages fit together, but also in teaching someone how to rebuild the packages they need, and whether a package is reproducible with or without patches, or, as I found out with hal, whether extra work needs to be done to rebuild things, maybe even work that's unnecessary or detrimental to other system packages. Keeping packages reproducible can extend their life, or show that some packages are not truly reproducible and need to be retired.

Case in point: my hal work required a massive after-install edit of glib to get hal reproducible, and in the end it proved hal was truly done. Neither hal nor any other package should require editing another package's headers to compile. It could be used as a binary package, but for how long? Eventually it was going to hit a wall, but when is anyone's guess.

I don't get why there is such strong animosity towards having a system that can effectively clone itself without a bootstrap session. A self-reproducing system proves the stability between packages, fluid interoperability between packages and dependencies, and allows for higher quality control in packages as a distribution progresses. Slackware 14.1 should be able to take the entire source library of everything used in 14.1 and rebuild it all without a hiccup.

The GCC package, if built for it, does exactly this: it bootstraps, then rebuilds itself three times, checking for any inconsistencies. Slackware doesn't require a bootstrap, nor should it, but it should be able to correctly rebuild itself without a flaw, to check for inconsistent packages.
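For reference, the consistency check at the end of GCC's bootstrap amounts to a byte-for-byte comparison of the stage 2 and stage 3 object files. A minimal sketch of that idea, with hypothetical directory names rather than GCC's real build layout:

Code:
# Sketch of a bootstrap consistency check: after stage 2 and stage 3
# are built, every object file must match byte for byte, or the
# compiler miscompiled itself. Directory names are illustrative.
import filecmp
from pathlib import Path

def compare_stages(stage2, stage3):
    """Return relative paths of object files that differ between stages."""
    mismatches = []
    for obj2 in Path(stage2).rglob("*.o"):
        obj3 = Path(stage3) / obj2.relative_to(stage2)
        # shallow=False forces a byte-for-byte comparison
        if not obj3.exists() or not filecmp.cmp(obj2, obj3, shallow=False):
            mismatches.append(str(obj2.relative_to(stage2)))
    return mismatches

if __name__ == "__main__":
    bad = compare_stages("build/stage2", "build/stage3")
    print("comparison FAILED: %s" % bad if bad else "comparison passed")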

Besides, we have had a few packages in Slackware reported with the wrong shared libraries linked in, libraries that are dependencies of other packages, and while this isn't anyone's fault, it is a chink in the armor of the system. It doesn't happen often, and it usually gets fixed quickly, or the person reporting it simply rebuilds and moves on, but then again, on some level it shouldn't be happening at all.
 
Old 06-29-2015, 03:30 AM   #34
Didier Spaier
LQ Addict
 
Registered: Nov 2008
Location: Paris, France
Distribution: Slint64-15.0
Posts: 11,063

Rep: Reputation: Disabled
@ReaperX7: Maybe you could read this again:
Quote:
Originally Posted by nobodino View Post
The purpose of this thread was not to talk about politics, religion, philosophy, or the way Slackware should be done.
 
Old 06-29-2015, 04:48 AM   #35
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106
Quote:
Originally Posted by ReaperX7 View Post
Slackware doesn't require a bootstrap, nor should it, but it should be able to correctly rebuild itself without a flaw, to check for inconsistent packages.
I think you should stop, here and now, opining about what Slackware should or should not do. It is not your call.
The design philosophy behind this OS does not fit your beliefs; this much has been established. Let's just leave it at that.

Any other distro building philosophy should not have your persistent references to Slackware attached to it.
You are free to clone Slackware's work and run off with it.
 
4 members found this post helpful.
Old 06-29-2015, 05:09 AM   #36
cynwulf
Senior Member
 
Registered: Apr 2005
Posts: 2,727

Rep: Reputation: 2367
Quote:
Originally Posted by ReaperX7 View Post
I don't get why there is such strong animosity towards having a system that can effectively clone itself without a bootstrap session. A self-reproducing system proves the stability between packages, fluid interoperability between packages and dependencies, and allows for higher quality control in packages as a distribution progresses.
It might be an idea to install and use such an operating system, rather than suggesting that Slackware become one just because in your opinion it's the correct way to do things. And by the way, no 'animosity' here: having read a few of your posts in various threads, I just genuinely think you'd be better served by a different OS.

Quote:
Originally Posted by ReaperX7 View Post
Slackware 14.1 should be able to take the entire source library of everything used in 14.1 and rebuild it all without a hiccup.
Gentoo should not rebuild everything from source.
Debian should stop doing dependency resolution.
Arch should put out a stable release.

etc, etc, etc...

Seriously, just find the OS which matches your individual requirements/expectations/standards.
 
11 members found this post helpful.
Old 06-29-2015, 06:22 AM   #37
kikinovak
MLED Founder
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: CentOS, OpenSUSE
Posts: 3,453

Rep: Reputation: 2154
Quote:
Originally Posted by cynwulf View Post
Arch should put out a stable release.
<off_topic>That would be nice actually.</off_topic>
 
3 members found this post helpful.
Old 06-29-2015, 11:37 AM   #38
55020
Senior Member
 
Registered: Sep 2009
Location: Yorks. W.R. 167397
Distribution: Slackware
Posts: 1,307
Blog Entries: 4

Rep: Reputation: Disabled
Quote:
Originally Posted by ReaperX7 View Post
To be honest, I loved the work done. It's educational not only in showing how packages fit together, but also in teaching someone how to rebuild the packages they need, and whether a package is reproducible with or without patches, or, as I found out with hal, whether extra work needs to be done to rebuild things, maybe even work that's unnecessary or detrimental to other system packages. Keeping packages reproducible can extend their life, or show that some packages are not truly reproducible and need to be retired.

Case in point: my hal work required a massive after-install edit of glib to get hal reproducible, and in the end it proved hal was truly done. Neither hal nor any other package should require editing another package's headers to compile. It could be used as a binary package, but for how long? Eventually it was going to hit a wall, but when is anyone's guess.
Sorry, but I'm not sure that you understand the meaning of reproducible. It doesn't mean "can you build last year's source with this year's toolchain" or "is it self-hosting". It means that if you build the same package twice with the same toolchain, the two packages are bit-identical. For example, every Slackware package is a tar archive, and every tar archive contains time-stamps, and therefore two packages will always be different unless the time-stamps are fraudulent. Time-stamps are only one example; there are lots of other sources of nondeterminism, as documented by the Debian reproducibility project. If you managed a close approximation to this with hal, you were lucky, because most packages need an experimental toolchain and/or upstream patches.
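To make the time-stamp point concrete, here is a minimal sketch using only Python's standard library (the file name and contents are invented for the demo): two archives of identical content hash differently until the mtime is pinned to a fixed value.

Code:
# Two tar archives of identical content differ because of the mtime
# stored in each member's header; pinning the mtime makes them
# bit-identical.
import hashlib
import io
import tarfile
import time

def make_tar(mtime):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="hello.txt")
        data = b"identical content\n"
        info.size = len(data)
        info.mtime = mtime  # the nondeterministic field
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def sha(blob):
    return hashlib.sha256(blob).hexdigest()

print(sha(make_tar(time.time())) == sha(make_tar(time.time() + 60)))  # False
print(sha(make_tar(0)) == sha(make_tar(0)))                           # True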

The reason why other distributions are seeking reproducibility is trust. In a post-Snowden world, we want to know that Red Hat's binaries or SUSE's build farm have not been subverted. One way to do that is for end users to be able to verify it, if they want to, by building from source and getting a bit-identical result.

But with Slackware, we have a much simpler trust model. Its packages are built and signed by one man in one place, and from observation Mr Volkerding would seem to be one of the least likely people on the planet to sell out or be compromised. And the older the packages are, the more trustworthy they are. How could the NSA go back in time to pwn elvis-2.2_0-i486-2.txz, which was built and signed on 22-Feb-2004? So a full rebuild for every release is NOT your friend.

Quote:
Originally Posted by ReaperX7 View Post
Slackware 14.1 should be able to take the entire source library of everything used in 14.1 and rebuild it all without a hiccup.
Why? That process would verify the *SlackBuilds*, not the *Packages*. The SlackBuilds are important for transparency and many other purposes, but it's the Packages that are Slackware's product. Fighting bit-rot in a SlackBuild doesn't produce a benefit in the corresponding package, until you need a new package.

Slackware Philosophy Clause 1337 reads: Never do something when doing nothing is technically superior. [citation needed]

You need to understand that a rebuilt package is not better than an old package. The old package has had tens of thousands of users installing and running it for years, so it's less likely (for example) that a bit of ram got flipped by a cosmic ray during the build. For the rebuilt package, we don't know yet. The rebuilt package is on probation.

This is a second reason why Patrick's old elvis package from 2004 is better than an elvis package knocked up with gcc-4.9 on my laptop today.

Quote:
Originally Posted by ReaperX7 View Post
The GCC package, if built for it, does exactly this: it bootstraps, then rebuilds itself three times, checking for any inconsistencies. Slackware doesn't require a bootstrap, nor should it, but it should be able to correctly rebuild itself without a flaw, to check for inconsistent packages.
Why? Because self-hosting is cool? Yes, it is very cool, I did it once myself, but how would it make the packages technically better?

Currently there are NO (zero) distributions that are 100% reproducible. Uh, why would the Linux Foundation need to make a $200,000 grant to work on reproducibility, if reproducibility were a solved problem?

Quote:
Originally Posted by ReaperX7 View Post
Besides, we have had a few packages in Slackware reported with the wrong shared libraries linked in, libraries that are dependencies of other packages, and while this isn't anyone's fault, it is a chink in the armor of the system. It doesn't happen often, and it usually gets fixed quickly, or the person reporting it simply rebuilds and moves on, but then again, on some level it shouldn't be happening at all.
It only happens on -current. That is the methodology of -current: it is a succession of forward steps that are not necessarily complete or self-consistent. When people say -current is not a rolling release, this is exactly what they mean.
 
4 members found this post helpful.
Old 06-29-2015, 12:33 PM   #39
a4z
Senior Member
 
Registered: Feb 2009
Posts: 1,727

Rep: Reputation: 742
Quote:
Originally Posted by 55020 View Post

You need to understand that a rebuilt package is not better than an old package. The old package has had tens of thousands of users installing and running it for years, so it's less likely (for example) that a bit of ram got flipped by a cosmic ray during the build. For the rebuilt package, we don't know yet. The rebuilt package is on probation.
and of course you have proof for this opinion?

Otherwise I'd say the combination of compiled packages in Slackware is so unique that it does not have more installations and running time than other combinations, which of course I cannot prove, but it sounds logical to me.
 
Old 06-29-2015, 01:18 PM   #40
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484
Quote:
Originally Posted by a4z View Post
and of course you have proof for this opinion?
Well, it only has centuries of engineering best practices behind it.

When something doesn't change, it retains the properties it had, including those properties which have been tested and verified.

When something changes, you can no longer take anything you used to know about it for granted. It is a big question mark. Sometimes the new device is broken, sometimes it is not, but you will not know one way or the other until it has been exhaustively tested. This is as true of bridges as it is of software.

If Slackware seems unusual among modern distributions for adhering to traditional engineering best practices, that is a testament to the instability and unsoundness of other distributions. Just because bad engineering has become popular doesn't mean we should join the fad.
 
9 members found this post helpful.
Old 06-29-2015, 04:35 PM   #41
NoStressHQ
Member
 
Registered: Apr 2010
Location: Geneva - Switzerland ( Bordeaux - France / Montreal - QC - Canada)
Distribution: Slackware 14.2 - 32/64bit
Posts: 609

Rep: Reputation: 221
Quote:
Originally Posted by ttk View Post
When something doesn't change, it retains the properties it had, including those properties which have been tested and verified.
Sorry, it's not about Slackware, politics, religion or anything; it's just about one point I can't understand, so could you please explain it?

The point is, I've been a developer since ~1990, and among the things I've developed are games or "demos" which, for simplicity of use, must not require ANY third-party dependency; understand that we preferred static linking for a long time.

When I started to use Linux, I was confronted with one problem (the one professional game devs have): the closed-source paradigm. This can be debated, but simply put, a game can be worth millions of dollars/euros in development, meaning several months or years of work for quite a large team which works more than the salary covers, often 50-90 hours/week; yes, we are passionate and/or "overused"/abused by the industry. So we can imagine that the investment in one game can legitimize closed source for IP and security (for online games). Anyway, even if that could be discussed, it's not an individual choice, it's the whole industry's "trend", so you adapt to it, or you choose another line of work.

Anyway, wanting to ease game development on Linux, I started to work on a framework and was very quickly confronted with the general use of shared libraries in the Linux realm, with different distros using different versions. As a software provider this quickly becomes a huge pain, even a physical impossibility. So the easy solution in that case is static linking, which leads to huge binaries, but that's what almost all commercial/closed-source software developers must do in order to provide "one package" that works on almost "any distro". Of course that relies on the OS ABI (the kernel's) staying consistent, which is a fairly safe assumption: there are few ABI changes, the existing ABI stays consistent, and new system calls are just "appended" to the current ABI, which in most cases shouldn't break anything (technically, it comes down to numerical parameters passed to a system call interrupt vector, so we can say it's "quite safe").

That being said, as everything now uses shared libraries, a lot of shared libraries DO NOT provide a constant/fixed ABI (it depends a lot on the version of GCC/linker you use, and C++ function signatures can change from one version to another; in fact compatibility relies on keeping/using the same compiler).

So the point is: if a software package (B) depends on another package (A) which provides its shared-library dependencies, and package A is rebuilt with a new version of the compiler/linker that breaks the library ABI (even if the library version stays the same, and all the more if it changes), how do you know that package B still works if you don't exhaustively test everything?

What I don't understand is how you could avoid an exhaustive test anyway. It seems to me that you can't. Maybe I don't understand your explanation and was misled somewhere. For now, I can't agree with your assumptions, and I'm speaking from a system developer's position, not a "hobbyist package maintainer's".

Again, it's not about flaming or wanting to change Slackware; it's about a point that I truly don't understand but that you seem to take for granted. So I hope you can explain where I'm missing something.

Cheers,

Garry.

PS/ Sorry for my lousy English, I haven't practiced it often for quite some time.

PPS/ TL;DR: I agree with your assumption if packages are statically linked, but I can't agree for shared-library linking...

Last edited by NoStressHQ; 06-29-2015 at 05:22 PM. Reason: Some fixes.
 
Old 06-29-2015, 06:03 PM   #42
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484
Quote:
Originally Posted by NoStressHQ View Post
Sorry, it's not about Slackware, politics, religion or anything; it's just about one point I can't understand, so could you please explain it?

<..useful background snipped..>

That being said, as everything now uses shared libraries, a lot of shared libraries DO NOT provide a constant/fixed ABI (it depends a lot on the version of GCC/linker you use, and C++ function signatures can change from one version to another; in fact compatibility relies on keeping/using the same compiler).

So the point is: if a software package (B) depends on another package (A) which provides its shared-library dependencies, and package A is rebuilt with a new version of the compiler/linker that breaks the library ABI (even if the library version stays the same, and all the more if it changes), how do you know that package B still works if you don't exhaustively test everything?
You appear to have an excellent grasp of the problem. The solution (which is unfortunately not applicable to proprietary third-party multi-platform software) is to continue providing the known-good dependencies as long and as much as possible, to minimize the degree of change, and not unnecessarily recompile packages between releases.

For instance, on this Slackware 14.1 machine, there are many shared libraries dating back to 2008 -- /lib64/libsysfs.so.2.0.1, /usr/lib64/libgthread-1.2.so.0.0.10, etc. They represent seven solid years of both testing and real-world use, which means that if they have problems, we at least know about most of them (and known problems are easier to plan for than unknown problems).

When a package needs a more up-to-date version of the same library which is already a dependency of a legacy package, multiple versions of the same shared library can be provided. For instance, /lib64/libgpm.so.1.19.0 and /lib64/libgpm.so.2.1.0 are both installed on this machine.

This is a much easier and safer approach than trying to find a version of libgpm which is all of: (1) bug-free/correct, and (2) works correctly with the legacy package, and (3) works correctly with the new package.
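To see that mechanism at work, here is a small sketch reusing the libgpm sonames from the example above (whether both versions are installed on a given machine will of course vary): each consumer resolves exactly the soname it was linked against, so old and new users of the library never collide.

Code:
# Each binary binds to a specific soname (libgpm.so.1 vs libgpm.so.2),
# so two major versions can live side by side. These sonames may or
# may not be present on your system.
import ctypes

for soname in ("libgpm.so.1", "libgpm.so.2"):
    try:
        ctypes.CDLL(soname)  # the same lookup the dynamic linker performs
        print("loaded", soname)
    except OSError as err:
        print(soname, "not available here:", err)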

Similarly, with executables, Slackware ships old, known-good binaries. The /usr/bin/rsh on this machine is from 2008, and was compiled with gcc 4.2.4, while the newest packages were compiled with gcc 4.8.2. Not recompiling executables means they will not be broken by changes in the compiler.
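For the curious, compilers record their version in an ELF binary's .comment section, so checking what built a given binary looks something like the sketch below (readelf ships with binutils; /usr/bin/rsh is the example from above and may not exist on your machine):

Code:
# Report the compiler versions recorded in a binary's .comment
# section (e.g. "GCC: (GNU) 4.2.4"). Requires binutils' readelf.
import subprocess

def compiler_tags(path):
    out = subprocess.run(["readelf", "-p", ".comment", path],
                         capture_output=True, text=True).stdout
    return sorted({line.split("]", 1)[1].strip()
                   for line in out.splitlines() if "]" in line})

print(compiler_tags("/usr/bin/rsh"))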

In the case of proprietary third-party software, such as commercial games, the solution is to either do as you have already said (link statically and ship fat binaries), or ship it with shared libraries which your application looks for before looking at the system libraries.

Quote:
What I don't understand is how you could avoid an exhaustive test anyway. It seems to me that you can't. Maybe I don't understand your explanation and was misled somewhere.
No, it seems you understand very well.

On one hand, because some dependencies are changing (such as the kernel), it is important to test a new distribution release as thoroughly as possible.

This is one of the reasons Slackware has "release candidates" prior to a production release, so that people can install the candidate and hammer on it as much and as hard as they can, so they can find bugs and report them back to the Slackware team. The problems are fixed, and another release candidate is released, and it all repeats itself.

Even after such testing, people continue to find problems in even the most robust releases. This is why I stress-test new Slackware releases for months before putting them into production use. I will also often skip releases altogether when an older release continues to serve its needed purpose, because (1) it helps further reduce risk, by introducing change less frequently, and (2) stress-testing represents a considerable investment of time and energy, and it doesn't make sense to expend those resources on every release. My servers used Slackware 12.2, and then 13.1, and now 14.1.

On the other hand, this isn't an entirely black-and-white matter. The risk of introducing unknown bugs increases exponentially with the degree of change: if the probability of a change not breaking a service is P, then the probability of three changes not breaking a service is P**3, which is to say the probability of the first change not breaking it, and of the second change not breaking it, and of the third change not breaking it. For N changes, the probability is P**N.
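A quick worked example of that arithmetic, with purely illustrative numbers:

Code:
# Survival probability P**N for N independent changes, each of which
# leaves the service unbroken with probability P. Numbers are
# illustrative only.
for p in (0.999, 0.99, 0.95):
    for n in (1, 10, 100):
        print("P=%.3f  N=%3d  ->  P**N=%.3f" % (p, n, p ** n))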

What this means in real-world terms is that small changes introduce very small risk of breakage, and larger changes introduce dramatically larger risk of breakage. While testing everything is a good idea, in practice small changes are usually safe. Thus, when testing a new release, it is much less important to test the packages to which little change was introduced.

Quote:
For now, I can't agree with your assumptions, and I'm speaking from a system developer's position, not a "hobbyist package maintainer's".
I completely understand, as a fellow professional systems software developer, and hope this explanation helps a little.

Quote:
PS/ Sorry for my lousy English, I haven't practiced it often for quite some time.
Your English is quite good :-)

Quote:
PPS/ TL;DR: I agree with your assumption if packages are statically linked, but I can't agree for shared-library linking...
We seem to agree more than disagree, I think.

Last edited by ttk; 06-29-2015 at 06:17 PM. Reason: corrected error .. started the "On the other hand" using the inverse meaning of P, then didn't change it everywhere
 
Old 06-29-2015, 06:24 PM   #43
NoStressHQ
Member
 
Registered: Apr 2010
Location: Geneva - Switzerland ( Bordeaux - France / Montreal - QC - Canada)
Distribution: Slackware 14.2 - 32/64bit
Posts: 609

Rep: Reputation: 221
Quote:
Originally Posted by ttk View Post
[...] We seem to agree more than disagree, I think.
Alright, fair explanation. It's true that I missed the multiple library versions provided in the same "release" (so old libraries are still valid). But as you said, it also means that statistically you can't ensure 100% that everything is solid: even with the extensive "pseudo-random" testing that large-scale user testing provides, some entropy leaves some parts untested, because it's not truly random but human-based (or human-biased).

Well, I also understand very well that systematically testing everything is close to impossible; although we can automate great parts of testing, some tests would require human interaction or validation (e.g. if something is "graphical"). So I don't blame Slackware for not doing it, as I find that believing everything can be automated is quite a utopia.

But maybe a third-party team could do some of that test automation, to help find the potential breakage introduced by even a minor change in some core library or toolchain. I know it's not easy; I myself have plenty of ideas for several interesting projects on top of Slackware, and even started to build my own, but "real life" shows it's a hard job, and scheduling work on it is difficult for a lot of people. But if some people are interested in those kinds of projects (or admin-related projects), I'd be glad to participate, give hints, or provide a lot of support; frankly though, I can't be the "front end" of such projects, partly because I'm not a "communication" guy, and a lot of time must be spent "promoting" and "explaining" things, which I can't do on my side.

Anyway, thanks to the OP for the scouting work in this domain, thank you for taking the time to answer, and obviously thanks to Pat and the team for the huge work involved in building Slackware.

Cheers.

Garry.
 
Old 06-29-2015, 06:36 PM   #44
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484
Quote:
Originally Posted by NoStressHQ View Post
Alright, fair explanation. It's true that I missed the multiple library versions provided in the same "release" (so old libraries are still valid). But as you said, it also means that statistically you can't ensure 100% that everything is solid: even with the extensive "pseudo-random" testing that large-scale user testing provides, some entropy leaves some parts untested, because it's not truly random but human-based (or human-biased).
It sounds like you've got it. It is impossible to ensure with 100% absolute certainty that everything will always work perfectly. This is an ideal that can only be approached asymptotically.

The good news is that for most purposes, achievable certainty is "good enough" (even when using less robust technologies than Slackware). It is when shortcomings in system robustness are compounded by multiple instances, as in very large clusters with tens of thousands of interdependent servers, that the most robust technologies available must be treated as undependable.

Quote:
Well, I also understand very well that systematically testing everything is close to impossible; although we can automate great parts of testing, some tests would require human interaction or validation (e.g. if something is "graphical"). So I don't blame Slackware for not doing it, as I find that believing everything can be automated is quite a utopia.

But maybe a third-party team could do some of that test automation, to help find the potential breakage introduced by even a minor change in some core library or toolchain. I know it's not easy; I myself have plenty of ideas for several interesting projects on top of Slackware, and even started to build my own, but "real life" shows it's a hard job, and scheduling work on it is difficult for a lot of people. But if some people are interested in those kinds of projects (or admin-related projects), I'd be glad to participate, give hints, or provide a lot of support; frankly though, I can't be the "front end" of such projects, partly because I'm not a "communication" guy, and a lot of time must be spent "promoting" and "explaining" things, which I can't do on my side.
I completely agree with everything you say here, and you put your finger on a project I would love to tackle someday: writing a unit test framework for Slackware as a whole. There is already a project called the Linux Test Project, which is a good start, but it is far from complete. Also, it doesn't work under Slackware, and making it work requires more attention than I'm able to give it.
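As a hypothetical first brick for such a framework (not part of the Linux Test Project; the path and sample size are invented), here is a sketch that walks a directory of binaries and flags unresolved shared-library dependencies using the "not found" lines in ldd's output:

Code:
# Smoke test: report binaries whose shared-library dependencies do
# not resolve, by scanning ldd output for "not found".
import subprocess
from pathlib import Path

def unresolved_deps(binary):
    proc = subprocess.run(["ldd", str(binary)],
                          capture_output=True, text=True)
    return [line.strip() for line in proc.stdout.splitlines()
            if "not found" in line]

for exe in sorted(Path("/usr/bin").iterdir())[:50]:  # small sample
    if exe.is_file():
        missing = unresolved_deps(exe)
        if missing:
            print(exe, missing)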
 
Old 06-30-2015, 12:02 AM   #45
Arcosanti
Member
 
Registered: Apr 2004
Location: Mesa, AZ USA
Distribution: Slackware 14.1 kernel 4.1.13 gcc 4.8.2
Posts: 246

Rep: Reputation: 22
My biggest beef is with GCC itself. It seems like every time a new version of the compiler comes out, the language syntax changes slightly, which then forces everyone to spend time changing their software projects to conform to the new compiler instead of focusing on improving their software. Whatever happened to following language standards and changing them only once every ten years?
 