I'm terrible at writing shell scripts. A little help fetching files?
I don't know Perl, but that's not a shell script: shell scripts are interpreted by shells (e.g. bash, ash, etc.).
[Edit]
Even though I don't write Perl, I noticed some things:
Your second while loop does not close its braces.
$x is not reset before running the second loop.
You need the full web path for wget, not just the file name.
What's with the // before the 'website'?
Rewritten, that should give you something like this (in what I'm guessing is Perl):
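(The original script isn't quoted in the thread, so this is only a sketch of those fixes, assuming it fetched numbered GIFs by shelling out to wget; the URL and file names are borrowed from the LWP::Simple reply below.)
Code:
#!/usr/bin/perl
use strict;
use warnings;

my $x = 0;                     # reset the counter before the loop
while ($x < 100) {
    $x++;
    my $file = sprintf("wc%02d.gif", $x);
    # wget needs the full URL, not just the file name
    system("wget", "http://website/images/$file") == 0
        or last;               # stop on the first failed fetch
}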
SYNOPSIS
curl [options] [URL...]
DESCRIPTION
curl is a client to get documents/files from or send documents to a server, using any of the supported protocols
(HTTP, HTTPS, FTP, GOPHER, DICT, TELNET, LDAP or FILE).
The command is designed to work without user interaction
or any kind of interactivity.
curl offers a busload of useful tricks like proxy support,
user authentication, ftp upload, HTTP post, SSL (https:)
connections, cookies, file transfer resume and more.
URL
The URL syntax is protocol dependent. You'll find a
detailed description in RFC 2396.
You can specify multiple URLs or parts of URLs by writing
part sets within braces as in:
http://site.{one,two,three}.com
or you can get sequences of alphanumeric series by using
[] as in:
ftp://ftp.numericals.com/file[1-100].txt
ftp://ftp.numericals.com/file[001-100].txt (with leading zeros)
ftp://ftp.letters.com/file[a-z].txt
It is possible to specify up to 9 sets or series for a
URL, but no nesting is supported at the moment:
http://www.any.org/archive[1996-1999]/volume[1-4]part{a,b,c,index}.html
You can specify any amount of URLs on the command line.
They will be fetched in a sequential manner in the specified order.
Curl will attempt to re-use connections for multiple file
transfers, so that getting many files from the same server
will not do multiple connects / handshakes. This improves
speed. Of course this is only done on files specified on a
single command line and cannot be used between separate
curl invokes.
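Applied to this thread, curl's [] globbing can fetch the whole numbered run in one command; the host, path, and range here are guesses taken from the Perl script below:
Code:
curl -o "wc#1.gif" "http://website/images/wc[01-20].gif"
The #1 in the output file name is replaced by whatever the [01-20] range expands to, so each image lands in its own file.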
Firstly, as someone else pointed out, this is not a shell script; it is a Perl script. Secondly, if you are going to use Perl, then use all the tools that Perl gives you: there is really no need to fork a system call when you can use LWP::Simple instead. Also, always use strict. Try something like this:
Code:
#!/usr/bin/perl
use LWP::Simple;
use warnings;
use strict;

my $x    = 0;
my $url  = "http://website/images/";
my $file = 'wc';

while (1) {
    $x++;
    my $file_name = $file . sprintf("%02d", $x) . '.gif';
    print "Getting $file_name....";
    # getstore() saves the URL to a local file and returns the HTTP status code
    my $return_value = getstore($url . $file_name, $file_name);
    if ($return_value == 200) {
        print "Got it!\n";
    } elsif ($return_value =~ /^4/) {
        # a 4xx status means the file isn't there, so stop looping
        print "Can't get it!\n";
        last;
    }
}
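Since getstore() returns the actual HTTP status code, the loop stops cleanly at the first 404 instead of running forever; if the server might answer with 5xx errors instead, you would want to bail out (or retry) on those as well.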