Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
I have followed every piece of advice, consulted the man pages numerous times, and no syntax seems to work for saving downloaded pictures under the filenames I give them. They consistently arrive on my hard drive with the original filename (e.g. 2.jpg) instead of the more descriptive name I assign in the "curl [options] [URL]" string that downloads the file.
wget? Use wget? I would, except there are some sites where wget pulls down a "dummy" image instead of the one I see on the webpage. I suspect that might have something to do with assigning Konqueror as the user-agent in my wgetrc instead of a more widely-used browser like Firefox. (There again, I'm the maverick.)
Why use a CLI downloader at all? Until I can drive a browser from a bash script, I prefer the flexibility of curl, wget, Axel (which I love, except that it doesn't preserve server timestamps) and aria2.
Not too long ago, I had bash scripts written for curl. Of course, that was a few curl versions back; if these new behaviours came as part of the development arc, I'll write scripts that invoke wget instead.
You'll notice the URL is in front of the 'local' filename; however that particular file downloaded as 5.jpg.
As I tried to indicate in my OP, I've done it both ways (local name before and after the URL) with no difference in the result. Might I need to tweak the .curlrc file, if such a thing exists?
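For reference, here is a minimal sketch of both option placements. curl is documented to accept options before or after the URL, so both forms should save under the name given to -o. A local file:// URL is used so the check runs without network access; all paths here are illustrative, not from the thread.

```shell
# Create a stand-in for the remote image (illustrative path)
printf 'fake image data' > /tmp/src.jpg

# Option before the URL
curl -s -o /tmp/named1.jpg "file:///tmp/src.jpg"

# Option after the URL
curl -s "file:///tmp/src.jpg" -o /tmp/named2.jpg

# Both should exist under the names we asked for
ls -l /tmp/named1.jpg /tmp/named2.jpg
```

If either form produces a file with the remote name instead, something outside the command line (such as an rc file) is likely interfering.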
Carver
Last edited by L_Carver; 12-02-2016 at 04:05 AM.
Reason: A little more info
curl http://dt.iki.fi/stuff/powerline-shell.png -o "stuff.jpg"
ls
stuff.jpg
How much did you edit your example? Only the domain, or more?
Is it part of a script? Can you show us the script? Does it work outside the script?
Which curl version?
I did an strace on a file I downloaded today (one that, again, arrived with its remote name, not the one I tried to give it). Having no idea how to interpret the many lines of output strace generates, I can't tell which parts are good or bad.
A pointless side-track, IMO. I can certainly post the output from another strace if you can tell from it what might be going wrong here, but it would make the post I pasted it into especially long.
I read on one page that redirecting the downloaded file (">") to the name I want should work. I tried it and got an empty file with the name I gave it.
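A sketch of the redirect form, for what it's worth: redirecting stdout does name the file, but without -f a server error page silently becomes the file's contents, and without -L a redirect response can leave the file empty. The file:// URL and paths below are illustrative stand-ins so this runs without network access.

```shell
# Stand-in for the remote image (illustrative path)
printf 'fake image data' > /tmp/pic.src

# ">" names the file via the shell; -f makes curl exit non-zero
# on a server error instead of writing the error page to the
# file, and -L follows redirects, a common cause of empty output
curl -sfL "file:///tmp/pic.src" > /tmp/renamed.jpg
```

An empty result from a redirect like this usually points at the transfer failing, not at the naming.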
c0wb0y@laptop:~> strace -e open -o strace-curl.txt curl -o /tmp/test.png http://dt.iki.fi/stuff/powerline-shell.png
c0wb0y@laptop:~> grep 'test.png\|curlrc' strace-curl.txt
open("/home/c0wb0y/.curlrc", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/tmp/test.png", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 4
As you can see from the output, the png was downloaded and correctly named.
The output above can help you check whether you have a ~/.curlrc, or even a global rc file under /etc, being read.
I tried your test string, minus the redirect to /tmp.
Quote:
Originally Posted by c0wb0y
The output above can help you check whether you have a ~/.curlrc, or even a global rc file under /etc, being read.
I have a ~/.curlrc, unedited from a downloaded sample (I forget the site on which I found it). It looks like this:
Code:
#this is a sample .curlrc file
# store the trace in curl_trace.txt file. beware that multiple executions of the curl command will overwrite this file
--trace curl_trace.txt
# store the header info in curl_headers.txt file. beware that multiple executions of the curl command will overwrite this file
--dump-header curl_headers.txt
#change the below referrer URL or comment it out entirely
-e "https://www.google.com"
#change the below useragent string. get your/other UA strings from http://www.useragentstring.com/
-A "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13"
#some headers
-H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
-H "Upgrade-Insecure-Requests: 1"
-H "Accept-Encoding: gzip, deflate, sdch"
-H "Accept-Language: en-US,en;q=0.8"
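Worth noting: nothing in that sample sets -o or -O, so it shouldn't rename downloads by itself, but you can rule the rc file out entirely with -q, which curl only honors as the first argument. A small sketch, using an illustrative local file:// URL and paths:

```shell
# Stand-in source file (illustrative path)
printf 'sample' > /tmp/rcq.src

# -q as the FIRST argument tells curl to skip reading .curlrc,
# isolating whether the rc file affects output naming at all
curl -q -s -o /tmp/rcq.out "file:///tmp/rcq.src"
```

If behavior changes with -q, the rc file is implicated; if not, look elsewhere.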
Aside: I did edit my .wgetrc, putting the user-agent string for, I believe, Konqueror 4.1 in it.
It occurred to me just now that I've been enclosing my 'wanted' filenames in double quotes (an old habit from using Cygwin). Could that be why I'm not getting the filename I ask for?
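For what it's worth, in a POSIX shell single and double quotes behave identically for a plain filename; they only differ when the string contains things the shell would expand, such as $variables. A quick sketch (paths and the local URL are illustrative):

```shell
# Stand-in source file (illustrative path)
printf 'sample' > /tmp/qt.src

# Both quote styles should yield the requested filename
curl -s -o "/tmp/double.jpg" "file:///tmp/qt.src"
curl -s -o '/tmp/single.jpg' "file:///tmp/qt.src"
```

So the quoting habit, by itself, shouldn't change what -o produces.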
Did you try:
- renaming your .curlrc so it does not get used?
- running strace similar to what I did, so you can see how the filename is created?
(Double quotes should have no bearing on filename creation.)
Just about every download I do now arrives with the local name I give it; the rest keep their server-side names. As I see it that's not something a script can work around, but as with everything else I'm finding different (and "bad") in this Linux Mint install, I feel I'm edging closer to a solution. I'm starting to think some of the difficulty might be attributable to flaws, maybe even bugs, in my curl version that will hopefully be worked out in the next dot-upgrade or the one after. And when you find something buggy and want more than to hope for a resolution, the thing to do is communicate with the developers.
This is "Is it plugged in?" level, but make sure the URL is quoted.
I have curl 7.47.0 (x86_64-pc-linux-gnu) libcurl/7.47.0 and I don't see this behavior.
ls: cannot access 'imagegame.jpg': No such file or directory
The same thing both ways. Double-quoting is a must in my terminal applications; every time I use single quotes I get "cannot stat" errors. My terminal habits were ruined by Cygwin, and this isn't the first Linux install where it has happened.