Programming
This forum is for all programming questions. The question does not have to be directly related to Linux and any language is fair game.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
Neither of the following commands is working, and I don't know why. Any ideas, anybody?
Code:
stuart@stuart:~/$cat showthread.php.html | grep -o "http://www.megaupload.com/[^\"< ]*" | uniq | head -n3 | plowdown -
cannot stat '-': No such file or directory
The second command looks wrong in a couple of ways. Post some sample lines from Musiclist.txt and explain what you want the command to do, and we can help you.
For the first one: I don't know what "plowdown" is, but I suspect it doesn't understand "-" to mean standard input. Try redirecting the output from the rest of the pipeline into a file, then running plowdown on that file, i.e.:
Code:
long | list | of | stuff | head -n 3 > /tmp/somefile.txt
plowdown /tmp/somefile.txt
For your second command (though I'm not exactly sure what you were doing with the awk command you had), try this:
Code:
awk '{gsub(".*/|\\.mp3$","");print}' file
it will read a file full of lines like:
Code:
/mnt/sda1/Music/Belly/06 - The Bees.mp3
and strip off the leading directory path and the trailing ".mp3", leaving just "06 - The Bees".
Also note that I'm not sure how, in your first post, the two commands you're running are supposed to be tied together. I'm thinking you want this awk command (in a pipeline, without the filename) to come after the `head -n3` but before the `plowdown`; my apologies if this isn't right, and if not, please explain some more.
The $1 isn't doing what you want because it needs to run from inside a script. Normally you'd use it when you're putting your bash into a separate file with a "#!/bin/bash" line at the top, but if you really want to use it on the command line you can do things like this:
Code:
$ bash -c 'echo $2' unused a b c
b
GrapefruiTgirl's awk script is a better way to do it, rather than rely on having exactly 6 slashes. However, you could make the original work like this:
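Since the original field-based command isn't quoted in this thread, the following is just a guess at its shape; the point is how the last-field version sidesteps the "exactly 6 slashes" assumption:

```shell
path='/mnt/sda1/Music/Belly/06 - The Bees.mp3'

# Fixed-field version: only works when the path has exactly this depth
printf '%s\n' "$path" | awk -F/ '{print $6}'

# Last-field version: $NF is the last /-separated field,
# so it works at any directory depth
printf '%s\n' "$path" | awk -F/ '{print $NF}'
# both print: 06 - The Bees.mp3
```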
The two commands are linked in my mind, not because I wish to use them in conjunction with one another, but because in both cases I want to use the piped output in the last command of the chain.
The $1 isn't doing what you want because it needs to run from inside a script. Normally you'd use it when you're putting your bash into a separate file with a "#!/bin/bash" line at the top, but if you really want to use it on the command line you can do things like this:
Code:
$ bash -c 'echo $2' unused a b c
b
Ah! This explains things somewhat. This is why positional parameters have worked for me only within functions and xargs: this is the answer to my question. So I could rework the commands with a 'bash -c' prefix, or there is some better way...
Using the -0 option to xargs can solve some (or all; it works for me) of those punctuation problems, though maybe not extreme cases unless the input is "cleaned", and you don't want to do that because then the filenames may be wrong. Here's an example of "still works for me":
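For instance (the directory and the space-laden filename below are invented, just to show the mechanics):

```shell
# Throwaway directory with an awkward, space-laden filename
dir=$(mktemp -d)
touch "$dir/06 - The Bees.mp3"

# find emits NUL-separated paths; xargs -0 splits only on NUL,
# so the spaces survive and basename sees one argument per file
find "$dir" -name '*.mp3' -print0 | xargs -0 -n1 basename
# prints: 06 - The Bees.mp3

rm -rf "$dir"
```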
Well, without xargs, just pipe the stuff into a while loop instead: process each line in the loop, and run plowdown on each item.
It seems you have all the necessary pieces here to do what you need, it's a matter of putting them together the right way.
At this point, I don't know precisely what command(s) you are running or what exact problem(s) remain; i.e., have you come up with a new command based on the suggestions in here, or are you still wondering what to do next? Show us the current command you are using, and explain what problem(s) are still occurring. Basically, bring us up to speed on the current state of the problem: Have you modified your commands any, or are we still at post #1 and looking for a solution?
Last edited by GrapefruiTgirl; 11-06-2010 at 03:07 PM.
Have you modified your commands any, or are we still at post #1 and looking for a solution?
Ha! I've yet to think through a solution, but aluser has provided me with the theoretical answer. I'll set my mind to it...
Using xargs as you do may be worth exploring as a way around aluser's solution, which calls for further inventiveness. I find that the -0 option fails in other instances, however; see:
I don't have a solution for either question, per se. The reason I'm posting though is to let you know that the plowdown program is a straight Perl script.
I followed the link provided, downloaded the tarball and examined the source code. I wanted to verify whether plowdown could read download URLs directly from stdin. I'm not a Perl coder (unless I have to be), but it does not appear to support that.
Though, I'm sure someone with Perl experience (or someone who wants to learn) could add that ability. The plowdown command is actually just the "download.sh" file located in the src directory of the tarball.
So, in short, the "advanced" command given as an example on the linked page above is "too advanced" for plowdown--unless someone adds that capability.
Thanks, Dark_Helmet, for that information. If the program does not support input on stdin, OP needs either to make a loop and run plowdown on each file, OR to string together a list (a sequence) of files and give them all at once on the plowdown command line.
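The give-them-all-at-once route can be sketched with xargs, which turns the pipeline's lines into one argument list. The page content below is invented, and `echo` is prefixed so the sketch runs harmlessly anywhere; drop the `echo` to invoke the real plowdown:

```shell
# Invented sample page with two megaupload-style links
page=$(mktemp)
printf '%s\n' \
  '<a href="http://www.megaupload.com/?d=AAAA">one</a>' \
  '<a href="http://www.megaupload.com/?d=BBBB">two</a>' > "$page"

# xargs collects all the URLs and makes ONE plowdown invocation,
# one URL per argument (echo just shows the command line it would run)
grep -o "http://www.megaupload.com/[^\"< ]*" "$page" \
  | uniq | head -n3 | xargs echo plowdown
# prints: plowdown http://www.megaupload.com/?d=AAAA http://www.megaupload.com/?d=BBBB

rm -f "$page"
```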
I'd also like to suggest further:
Based on my understanding of the thread so far, this is not an issue of positional parameters, but more an issue of some data not coming out the end of a pipeline in the usable format we want. I'm still not 100% clear on how the two commands in the OP actually relate to each other (but no matter); maybe a thread-title edit (click Edit -> Advanced on post #1) would be appropriate to better describe the issues. Even though there seem to be two similar but distinct (and related) problems here, if I were to change the thread title I'd use maybe something like:
"problem piping grepped URLs into plowdown (a perl downloader)"
Plus, here's a take on this command:
Code:
curl http://some-website.com/page.html | \
grep -o "http://www.megaupload.com/[^\"< ]*" | uniq | head -n10 | while read -r line; do
    plowdown "$line"
done
Anyhow, looking forward to OP's next post with an update on things...
Last edited by GrapefruiTgirl; 11-06-2010 at 08:29 PM.
I realised that plowdown made use of wget, but not that it was a Perl script, so thank you, Dark_Helmet.
I have got both commands to work now, using various solutions posted here. I quite like aluser's suggestion of using the cat command ("`cat`"), as it deals with my ostensible issue of passing piped data to a command. And yes, GrapefruiTgirl, I can see that using two examples, one with positional parameters and the other without, confused this one problem.
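For anyone landing here later, a sketch of that backtick-cat trick: in the last stage of a pipeline, `$(cat)` (or backtick-cat) expands to everything arriving on stdin, so the piped lines become the final command's arguments. Here `echo` stands in for plowdown, and the input lines are made up:

```shell
# $(cat) runs in the last pipeline stage, whose stdin is the pipe,
# so it expands to both lines; echo then sees them as two arguments.
# In real use, replace echo with plowdown.
printf '%s\n' 'http://www.megaupload.com/?d=AAAA' \
              'http://www.megaupload.com/?d=BBBB' | echo $(cat)
# prints: http://www.megaupload.com/?d=AAAA http://www.megaupload.com/?d=BBBB
```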