Slackware: This Forum is for the discussion of Slackware Linux.
I have some scripts which call upgradepkg, log anything on stderr to a log file, and send anything on stdout to /dev/null. The recent upgrade to grep-2.26 in slackware-current and slackware64-current breaks these scripts.
In particular, if a package test-pkg-0.0.1-noarch-1 is upgraded with the following:
the new package will be installed but the old one will not be removed. /var/log/packages will show two packages installed, namely (taking the date of an actual test of mine):
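For what it's worth, the breakage can be reproduced without pkgtools at all. Here's a minimal sketch of the mechanism, using head -n 1 as a stand-in for grep-2.26's new behaviour of exiting as soon as it sees its output is /dev/null: the writer is still producing output when the reader goes away, takes a SIGPIPE, and dies mid-run, just as upgradepkg's later steps (like removing the old package) get cut short.

```shell
#!/bin/sh
# head -n 1 stands in for grep-2.26 exiting early. seq (the writer) keeps
# writing after the reader is gone, receives SIGPIPE, and dies with
# status 141 (128 + signal 13). We report seq's exit status on stderr.
( seq 1 1000000; echo "seq exit status: $?" >&2 ) | head -n 1 >/dev/null
# prints on stderr: seq exit status: 141
```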
Oh FFS! I wonder how many scripts they're gonna break because of premature SIGPIPEs with this "improvement". pkgtools aren't the only place I've seen grep used in this manner.
BTW, from a quick test on the command line, command | grep -v wibble | cat seems to be a workaround (as long as you don't care about capturing the exit code from the pipe!), since grep's output then goes to the next stage of the pipe rather than to /dev/null.
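To spell out that workaround: with the trailing | cat, grep's stdout is a pipe rather than /dev/null, so grep-2.26 reads all its input and the writer isn't killed early. The pipeline's exit status becomes cat's, but in bash you can still recover grep's own status from PIPESTATUS if you need it:

```shell
#!/bin/bash
# The workaround: grep writes into cat's pipe, not /dev/null, so it
# doesn't take the early-exit path. The overall exit status is cat's;
# PIPESTATUS[1] recovers grep's status (must be read immediately).
seq 1 5 | grep -v wibble | cat >/dev/null
echo "grep status: ${PIPESTATUS[1]}"
# prints: grep status: 0
```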
This is an idiotic change that the GNU grep devs clearly haven't thought through and needs to be reverted upstream.
Um, wait. Isn't the dev screwing around with a pipe thinking it should be doing a "grep -m 1"?
Quote:
Originally Posted by grep committer
This sped up 'seq 10000000000 | grep . >/dev/null' by a factor of 380,000 on my platform (Fedora 23, x86-64, AMD Phenom II X4 910e, en_US.UTF-8 locale).
That is what he justifies this with?
There is already a switch for it. It is better to do the job as originally intended and let the script writer optimize it if it makes sense rather than screwing around with how the pipe is interpreted.
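Exactly: the switches for the committer's own use case already exist. grep -q exits at the first match, and grep -m 1 stops after one matching line, so the speedup from that seq benchmark is available explicitly, where the script author actually asked for it:

```shell
#!/bin/sh
# -q: quit at the first match (exit status only, no output).
# -m 1: stop reading after the first matching line.
# Either stops grep early by the script author's choice, without
# changing how every other pipeline behaves.
seq 1 1000000 | grep -q 7 && echo "found (via -q)"
seq 1 1000000 | grep -m 1 7
# prints: found (via -q)
# prints: 7
```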
In brief, with grep-2.26, building vim leaves a number of gcc's intermediate files in /tmp. With grep-2.25, or a patched grep-2.26, the garbage files are no longer left in /tmp.
Well, looks like they've done some work on this and closed the bug report.
Quote:
grep by default now reads all of standard input if it is a pipe, even if this cannot affect grep's output or exit status. This works better with nonportable scripts that run "PROGRAM | grep PATTERN+ >/dev/null" where PROGRAM dies when writing into a broken pipe
[bug introduced in grep-2.26]
... though I'm not sure what is "nonportable" about using program | grep 'something' in a script. Anyway, that aside, they deserve thanks for fixing it.
This kind of comment worries me more than a little.
It seems that there are people in the 'upstream world' making changes to the way UNIX and Linux tools have worked for decades and when they break something, it's because we've been doing it wrong all this time.
Sat Nov 19 22:45:38 UTC 2016
a/grep-2.26-x86_64-2.txz: Rebuilt.
Reverted a speedup patch that is causing regressions when output is directed
to /dev/null. Thanks to SeB.