LinuxQuestions.org
Slackware This Forum is for the discussion of Slackware Linux.

Old 04-21-2013, 12:39 PM   #76
ruario
Senior Member
 
Registered: Jan 2011
Location: Oslo, Norway
Distribution: Slackware
Posts: 1,806

Rep: Reputation: 810

Quote:
Originally Posted by konsolebox View Post
I think the concept of automatic conversion of the scripts is proven already.
Umm, this is not the tricky bit. Wake me up when you successfully build them all.
 
4 members found this post helpful.
Old 04-21-2013, 12:57 PM   #77
ponce
Senior Member
 
Registered: Aug 2004
Location: Pisa, Italy
Distribution: Slackware
Posts: 2,410

Rep: Reputation: 853
...and tested them all, for multiple archs, like he asked for.
I'm not even thinking about maintainability/upgrading (for now).

Zzzzz...

Last edited by ponce; 04-21-2013 at 01:07 PM.
 
4 members found this post helpful.
Old 04-21-2013, 02:46 PM   #78
T3slider
Senior Member
 
Registered: Jul 2007
Distribution: Slackware64-14.0
Posts: 2,244

Rep: Reputation: 622
Honestly, I don't understand why you are avoiding Alien Bob's work, which does have a chance to make it into Slackware one day. He has this file (/etc/slackbuild/machine.conf):
Code:
# TrimSlice with Tegra2
export ARCH="armv7hl"
export SLKCFLAGS="-O2 -march=armv7-a -mfpu=vfpv3-d16"
export LIBDIRSUFFIX=""
This file is centrally located and can be changed -- this way you would be able to add your own SLKCFLAGS in one location, which is (or will be, in the future) obeyed by most of the SlackBuilds. The individual SlackBuilds then contain (this is not widespread in his local scripts yet, but I have to assume it will be in the future):
Code:
if [ -e $CWD/machine.conf ]; then
  . $CWD/machine.conf
elif [ -e /etc/slackbuild/machine.conf ]; then
  . /etc/slackbuild/machine.conf
else
  # Automatically determine the architecture we're building on:
  if [ -z "$ARCH" ]; then
    case "$( uname -m )" in
      i?86) export ARCH=i486 ;;
      arm*) export ARCH=arm ;;
      # Unless $ARCH is already set, use uname -m for all other archs:
         *) export ARCH=$( uname -m ) ;;
    esac
  fi
  # Set CFLAGS/CXXFLAGS and LIBDIRSUFFIX:
  if [ "$ARCH" = "i486" ]; then
    SLKCFLAGS="-O2 -march=i486 -mtune=i686"
    LIBDIRSUFFIX=""
  elif [ "$ARCH" = "s390" ]; then
    SLKCFLAGS="-O2"
    LIBDIRSUFFIX=""
  elif [ "$ARCH" = "x86_64" ]; then
    SLKCFLAGS="-O2 -fPIC"
    LIBDIRSUFFIX="64"
  else
    SLKCFLAGS="-O2"
    LIBDIRSUFFIX=""
  fi
fi
If this ever gets adopted by Slackware to more easily handle multiple ports (and of course this isn't guaranteed, but at least Pat has seen it, according to this post), then you can just adjust /etc/slackbuild/machine.conf (or use a local machine.conf) and you don't have to do this potentially error-inducing automatic script editing. Of course it means you would have to wait until Alien Bob has modified more SlackBuilds to support this new format before starting your project, but at least it adds an element of future-proofing to your effort. It still wouldn't guarantee that all packages would actually build at any given time, but it would mean that you could focus on that instead of having to worry about both modifying SlackBuilds repeatedly for each architecture and making sure everything actually builds.
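To illustrate the idea, a local machine.conf for an ordinary x86_64 box might look like the sketch below. The variable names follow the snippet above, but these particular values are made up for illustration and are not taken from Alien Bob's actual files.

```shell
# Hypothetical local machine.conf for an x86_64 desktop.
# Values are illustrative only.
export ARCH="x86_64"
export SLKCFLAGS="-O2 -fPIC -march=native"
export LIBDIRSUFFIX="64"
```

Dropping a file like this next to a SlackBuild (or into /etc/slackbuild/) would then override the generic defaults without editing any script.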

As a side note, to see if there are any SlackBuilds that need manual attention after the above modifications are more widespread, your script would be reduced to:
Code:
$ find . -type f -name "*.SlackBuild" -exec grep -L "machine\.conf" {} \;
This is more maintainable, more future-proof, and less error-prone...
 
1 member found this post helpful.
Old 04-21-2013, 03:10 PM   #79
joncr
Member
 
Registered: Jun 2012
Posts: 73

Rep: Reputation: Disabled
Seems to me it is much more useful to spend time getting a Slackware release that can run on multiple chip generations than to grind out a suitcase full of so-called optimized packages.

Here's the thing about optimizations: For almost all of us, our systems spend almost all of their time sitting there not doing much of anything. I don't want to make the effort to optimize that. There'd be no payoff. If I started spending a lot of my time in some seriously processor-intensive task, and if I found, subjectively, that it was a wee bit slow, I might decide to rebuild that specific package. But, it would take a lot to get me to do that. I certainly wouldn't do it just to produce better benchmark numbers. If I can't notice the result of an optimization apart from a benchmark test, it's pointless.

Last edited by joncr; 04-21-2013 at 03:11 PM.
 
3 members found this post helpful.
Old 04-21-2013, 05:50 PM   #80
dugan
Senior Member
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 4,571

Rep: Reputation: 1394
Quote:
Originally Posted by joncr View Post
If I can't notice the result of an optimization apart from a benchmark test, it's pointless.
And if you do "notice" the result but never do a benchmark test, you're fooling yourself into seeing what you want to see.
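For what it's worth, a benchmark doesn't have to be elaborate. A sketch along these lines (gzip of a fixed file is only a stand-in workload -- substitute whatever you actually care about) already turns "it feels faster" into numbers:

```shell
# Run the same fixed workload a few times and print wall-clock times,
# so "faster" is a measurement rather than an impression.
dd if=/dev/zero of=/tmp/bench.dat bs=1M count=16 2>/dev/null
for i in 1 2 3; do
  start=$(date +%s%N)                     # GNU date: nanoseconds since epoch
  gzip -9 -c /tmp/bench.dat > /dev/null   # the workload under test
  end=$(date +%s%N)
  echo "run $i: $(( (end - start) / 1000000 )) ms"
done
rm -f /tmp/bench.dat
```

Run it once with the stock package and once with the rebuilt one; if the runs don't separate clearly, the optimization made no practical difference.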

Last edited by dugan; 04-21-2013 at 05:53 PM.
 
1 member found this post helpful.
Old 04-21-2013, 06:03 PM   #81
joncr
Member
 
Registered: Jun 2012
Posts: 73

Rep: Reputation: Disabled
Quote:
Originally Posted by dugan View Post
And if you do "notice" the result but never do a benchmark test, you're fooling yourself into seeing what you want to see.
That's why I never bother with tweaks to do things like speed up boot time. Once I get things set up, I don't need to reboot, so I don't. The fastest reboot is the one that doesn't happen.

Every so often, I work my way through a large batch of photo files. Processing an image file can be pretty processor intensive. If I could reduce the time it takes to deal with, say, 300 files, by 50 percent, that might be worth rebuilding something. Even there, though, most of the time involved is down to me staring at the screen trying to decide what I like and don't like. In the end, I'd just use "optimized" software to give me more time to play with the images.
 
Old 04-21-2013, 06:51 PM   #82
jtsn
Member
 
Registered: Sep 2011
Location: Europe
Distribution: Slackware
Posts: 803

Rep: Reputation: 354
Quote:
Originally Posted by joncr View Post
Here's the thing about optimizations: For almost all of us, our systems spend almost all of their time sitting there not doing much of anything. I don't want to make the effort to optimize that. There'd be no payoff.
It's up to upstream developers to optimize performance-sensitive code paths - because they know them. Using the shotgun approach of throwing CFLAGS at everything only makes some things faster and some things slower (-funroll-loops) - it doesn't make sense.

If your setup is too slow for your use case, then optimizing isn't likely to change that. If your hardware is too slow to play 1080p video, then GCC's -funroll-loops isn't gonna help you. But if you replace the Atom CPU with a Core i3, then Slackware's stock MPlayer or Alien's VLC packages are just fine. That's the truth behind this whole cargo cult.
 
Old 04-21-2013, 08:33 PM   #83
konsolebox
Senior Member
 
Registered: Oct 2005
Distribution: Gentoo, Slackware, LFS
Posts: 2,245
Blog Entries: 15

Original Poster
Rep: Reputation: 233
Quote:
Originally Posted by dugan View Post
Looking forward to your report on which packages won't build.
I hope you'll be patient, as that will take time. It's not something we have to rush. Also, I already mentioned earlier that if a package won't compile with the newly added -march parameter, then that package can just remain unoptimized.

Now about your previous post.

Quote:
Originally Posted by dugan View Post
Most of the SlackBuilds in /source have not been
confirmed to work in the current Slackware release. This is exactly where I
expect you to stumble.
Then that would just mean that the binary and the source are inconsistent as a pair, and one of them does not belong to that release. It could also mean that the binary was built before an upgrade of one of its dependencies, and risks having broken links to the libraries. What I mean is that if the source does not build with the current packages but built successfully with previous ones, then the binary it produced may no longer be compatible with those packages. So this would be a general problem of the release itself.

Of course there would be exceptions where a package still works even after its dependencies were upgraded or downgraded, but it would still be better to verify that, and one way to do so is by building the source against what is currently there. We could check the consistency of dynamic linking with libraries as well, of course, but sometimes it's about other dependencies, not just libraries.

Quote:
Originally Posted by T3slider View Post
Honestly, I don't understand why you are avoiding Alien Bob's work, which does have a chance to make it into Slackware one day.
If it does make it into the next release, then it could be adopted. What I'm really after is proof that an extension of optimized packages could be made useful. For now I'm just looking at the stable snapshot of 14.0. I also have my own ideas about how to do it, so I decided not to look at any other script or solution.

Last edited by konsolebox; 04-21-2013 at 08:38 PM.
 
Old 04-21-2013, 08:44 PM   #84
konsolebox
Senior Member
 
Registered: Oct 2005
Distribution: Gentoo, Slackware, LFS
Posts: 2,245
Blog Entries: 15

Original Poster
Rep: Reputation: 233
Quote:
Originally Posted by jtsn View Post
It's up to upstream developers to optimize performance-sensitive code paths - because they know them. Using the shotgun approach of throwing CFLAGS at everything only makes some things faster and some things slower (-funroll-loops) - it doesn't make sense.
For generic optimizations, probably, but you can't do that for hardware-specific implementations unless you write assembly. And even then, you usually can't do it without sacrificing readability.
Quote:
Originally Posted by ponce View Post
...and tested them all, for multiple archs, like he asked for.
I don't see why it's necessary to test on all archs to prove my point. Probably only those who haven't had much experience compiling packages for different target architectures would say that.
Quote:
I'm not even thinking about maintenability/upgrading (for now).
I wonder if you realized that this is only about the release, not really about package upgrades? Then again, that could change if the Slackware team decides to ship optimized packages for the upgrades as well.

Also, as I said, if this is adopted, the scripts need to be converted only once and can be used as a general form afterwards.

Last edited by konsolebox; 04-21-2013 at 08:51 PM.
 
Old 04-21-2013, 08:51 PM   #85
chemfire
Member
 
Registered: Sep 2012
Posts: 69

Rep: Reputation: Disabled
Quote:
Originally Posted by joncr View Post
Every so often, I work my way through a large batch of photo files. Processing an image file can be pretty processor intensive. If I could reduce the time it takes to deal with, say, 300 files, by 50 percent, that might be worth rebuilding something. Even there, though, most of the time involved is down to me staring at the screen trying to decide what I like and don't like. In the end, I'd just use "optimized" software to give me more time to play with the images.
The thing is, you really are not likely to see all that much in the way of gains there. I really recommend people find some smallish C code they understand, call GCC with -S and different mixes of optimization flags, and compare the outputs.

It's not easy to count how many cycles something will take on modern Intel hardware; mostly you can't, due to pipeline stalls, branch prediction, and out-of-order execution, but you can still get some idea of what is better or worse based on the instruction latencies. Spend an afternoon on this exercise once and you will find that, for the most part, you're going to get trades that amount to six of one for half a dozen of the other. Then you can understand the silliness here: the payoff just does not come anywhere near the cost in terms of all the additional testing that would be necessary to support every micro-optimization on every CPU stepping out there.

Look, if you are running a photo blog and have a box dedicated to doing nothing but processing uploaded files with ImageMagick 24x7, then yes, fine, monkey with flags on that; but for general packages that are not on hot code paths it's not worth it.

Really, you'd probably get the most bang for your buck doing a CPU-specific build of the kernel and glibc and stopping there -- and even that isn't really worth doing.
 
1 member found this post helpful.
Old 04-21-2013, 08:53 PM   #86
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,443

Rep: Reputation: 409
Quote:
Originally Posted by konsolebox View Post
Then that would just mean that the binary and the source are inconsistent as a pair, and one of them does not belong to that release. It could also mean that the binary was built before an upgrade of one of its dependencies, and risks having broken links to the libraries. What I mean is that if the source does not build with the current packages but built successfully with previous ones, then the binary it produced may no longer be compatible with those packages. So this would be a general problem of the release itself.
Or it could be something that certainly appears to be beyond your ability to understand. You really do need to read this thread since it covers this very issue.
 
1 member found this post helpful.
Old 04-21-2013, 09:20 PM   #87
konsolebox
Senior Member
 
Registered: Oct 2005
Distribution: Gentoo, Slackware, LFS
Posts: 2,245
Blog Entries: 15

Original Poster
Rep: Reputation: 233
Quote:
Originally Posted by Richard Cranium View Post
Or it could be something that certainly appears to be beyond your ability to understand. You really do need to read this thread since it covers this very issue.
I went to the thread but can't see how its topic is really related, since it's more about compiling everything from scratch, and I didn't see anything there that could disprove my point. I only gave it a quick read, but can you point me to an idea in it that actually disproves my point?

Just so you know, I've tried LFS and have known the concepts of toolchains, system build stages, and binary conflicts since way back around the 3.6-4.0 GCC upgrade.
 
Old 04-21-2013, 09:30 PM   #88
konsolebox
Senior Member
 
Registered: Oct 2005
Distribution: Gentoo, Slackware, LFS
Posts: 2,245
Blog Entries: 15

Original Poster
Rep: Reputation: 233
Quote:
Originally Posted by chemfire View Post
It's not easy to count how many cycles something will take on modern Intel hardware;
I have doubts about that. Then again, you can feel the actual output, like how I can see the number of FPS my game gets, which is especially crucial with emulation software. Note that this is not only about the software itself but about the libraries it depends upon as well, especially graphics libraries.

And can't you measure the average time it takes to process a batch of files, or if not that, the time needed to transcode audio or video, or even workloads that depend on input -- especially HD, which needs immediate processing?

Quote:
Really, you'd probably get the most bang for your buck doing a CPU-specific build of the kernel and glibc and stopping there -- and even that isn't really worth doing.
Why would someone need to build critical and heavy libraries like glibc on their own when they could be distributed as a package? And things involving media files are not really about glibc anyway.
 
Old 04-21-2013, 11:16 PM   #89
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 755

Rep: Reputation: 226
Quote:
Originally Posted by konsolebox View Post
What I mean is that if the source does not build with the current packages but built successfully with previous ones, then the binary it produced may no longer be compatible with those packages.
No.

Time and time again we have seen GCC becoming more compliant with the C/C++ standards, and as it does, old code stops compiling.

This does not mean the binaries are no longer compatible.
 
Old 04-21-2013, 11:24 PM   #90
T3slider
Senior Member
 
Registered: Jul 2007
Distribution: Slackware64-14.0
Posts: 2,244

Rep: Reputation: 622
Quote:
Originally Posted by konsolebox View Post
Then that would just mean that the binary and the source are inconsistent as a pair, and one of them does not belong to that release. It could also mean that the binary was built before an upgrade of one of its dependencies, and risks having broken links to the libraries. What I mean is that if the source does not build with the current packages but built successfully with previous ones, then the binary it produced may no longer be compatible with those packages. So this would be a general problem of the release itself.
These are *ROUGH* numbers since I am just looking at /var/log/packages/ on my installed system instead of verifying everything on a mirror. There are 92 packages that haven't been recompiled since at least 2009. Another 54 haven't been recompiled since at least 2010. Another 179 haven't been recompiled since 2011 (those 179 packages were compiled during 13.37's development cycle but were not recompiled during 14.0's). Newer versions of glibc intentionally maintain binary compatibility with binaries built using older versions of glibc, and minor library (dependency) version bumps are supposed to maintain ABI compatibility, so unless there is a major version bump to a dependency it shouldn't need to be recompiled (and sometimes even with a major bump it may still work).
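A rough way to gather numbers like these yourself (this is my own sketch, not necessarily how T3slider counted; /var/log/packages is assumed, and package-file mtimes only approximate build dates) would be:

```shell
# Count package database entries whose files haven't been touched since a
# cutoff date. PKGDIR defaults to Slackware's package database location.
PKGDIR=${PKGDIR:-/var/log/packages}
find "$PKGDIR" -maxdepth 1 -type f ! -newermt "2011-01-01" 2>/dev/null | wc -l
```

Varying the cutoff date then gives the per-era counts quoted above, up to the caveats about mtimes.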

Again -- as I stated before -- packages are only recompiled for one of three reasons:
1) A security vulnerability has been identified, and the package is patched
2) The package is upgraded to a newer version
3) The package is broken because of an update to another package

Unless it is already known that a certain package will break (if you upgrade certain components of KDE, then other packages will need to be recompiled, for example -- and obviously Pat has a lot of experience guessing what packages will need to be recompiled when major components are upgraded), then the package will probably not be recompiled until someone notices it is broken during -current's development cycle (whether this is a member of the Slackware team or a -current user -- search the forums and you will see plenty of -current users reporting broken packages after other upgrades). Although packages do often break because of other package upgrades, in *MANY* cases they will work just fine. And by the time the package does break -- and this could be years after the package was originally built -- the sources used originally may not compile because of other system upgrades. Again, because Slackware is never recompiled for each release, it isn't the most suitable distro for this project. But, if you really want to spend your time futzing around, go ahead...but it really does seem that you're not getting what people are saying.
 
  

