curl Question
Hi. I purchased a book that comes with access to an online archive of images, and I want to download all of them. The website is only set up to download the images one at a time, though, so I want to use a program to automatically download them.
When I try the download URL with curl, instead of the file all I get back is this redirect page:
Code:
Found
The document has moved here.
Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5.10 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g mod_perl/2.0.3 Perl/v5.8.8 Server at www.taschen.com Port 80
This is the first time I've downloaded anything this way, so any help is greatly appreciated. I'm using curl, btw, because I'm on Mac OS, which does not come with wget.
That's the thing: the URL that I put in the curl command works fine when I type it into Firefox; the download starts like any other zip file, though if I haven't already logged in to the website it redirects me to a login page. So I imagine the issue is getting curl to look like an authenticated user.
Looks like you have to specify the -L option. From the curl manpage:
Code:
-L/--location
(HTTP/HTTPS) If the server reports that the requested page has
moved to a different location (indicated with a Location: header
and a 3XX response code), this option will make curl redo the
request on the new place.
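For example, something along these lines should make curl follow the redirect rather than just printing that "Found" page (the URL is only a placeholder; -O saves the file under its remote name):
Code:
curl -L -O http://www.example.com/images/file_name.zip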
Thanks for the help. I've figured out how to download a file now by using a Firefox add-on called "Live HTTP Headers" to look at the cookies used by Firefox, and then using a curl command along the lines of the one shown below.
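The command is something like this, with placeholder cookie names/values and URL standing in for the real ones from the login session:
Code:
curl -b "name1=value1; name2=value2" -O http://www.example.com/images/file_name.zip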
It turns out that curl cannot download recursively, which is a shame. It can download a range of sequentially numbered files, though. The files that I wanted to download were not named sequentially, but the HTML pages that link to them are, so I downloaded all of those with curl and put them into a single text file.
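The command was something along these lines (the page path and the [1-50] range are just placeholders; curl expands the bracket range itself, and the cookies are the same ones grabbed from Firefox):
Code:
curl -b "name1=value1; name2=value2" "http://www.example.com/gallery/page_[1-50].html" > pages.html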
Then I used grep and a text editor to make a file with just the filenames that I wanted ("file_name.zip", for example, with each name on a separate line), and used a bash script to download them with curl:
Code:
#!/bin/bash
# Fetch every file named in file_list.txt (one filename per line),
# sending the session cookies so the site sees us as logged in.
for name in $(cat file_list.txt)
do
    curl -b "name1=value1; name2=value2" "http://www.example.com/$name" -O
done
exit 0
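For the grep step mentioned above, something along these lines would pull the .zip names out of the downloaded pages (the naming pattern is only a guess at what the real filenames look like):
Code:
grep -o 'file_[A-Za-z0-9_-]*\.zip' pages.html | sort -u > file_list.txt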