To my delight, I found a website containing loads of scans of old computer magazines. Fancying a wallow in some nostalgia, but without wanting to download each jpeg scan individually, I did a bit of Googling and found the Linux 'wget' command.
However, when I use it in this syntax:
wget ftp://ftp.worldofspectrum.org/pub/si...ue8112/Pages/*
...it seems to "log on and log off" for each file it downloads (so about once every couple of seconds!). My shell screen fills up with a message like the one below for every file:
--2009-05-25 09:46:24--  ftp://ftp.worldofspectrum.org/pub/si...r811200100.jpg
=> `YourComputer811200100.jpg'
==> CWD not required.
==> PASV ... done. ==> RETR YourComputer811200100.jpg ... done.
Length: 236257 (231K)
100%[======================================>] 236,257 375K/s in 0.6s
2009-05-25 09:46:25 (375 KB/s) - `YourComputer811200100.jpg' saved [236257]
I was just wondering, before I download any more with wget, whether there's an option to download the files with minimal feedback, or whether I'm in fact using wget correctly in this case.
I did have a scan through a wget manual (this does *not* appear to be a simple command!), and the nearest I found was a way to write output to a text file rather than the shell screen.
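In case it helps to show what I mean, this is roughly what I was planning to try next (going by my reading of the man page, so I may well have the options wrong): -nv to cut down the per-file chatter, and -a to append whatever is left to a log file rather than the screen:

wget -nv -a wget.log "ftp://ftp.worldofspectrum.org/pub/si...ue8112/Pages/*"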
Is this an optimal way to use wget for downloading lots of small files from a single FTP address?
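(Or, if the wildcard approach isn't the intended way, I also wondered about the recursive form instead; again, I'm only guessing at the flags from the manual: -r to recurse into the directory, -np to stay below Pages, -nd to avoid recreating the directory tree locally, and -A jpg to only fetch the scans.)

wget -r -np -nd -A jpg -nv -a wget.log "ftp://ftp.worldofspectrum.org/pub/si...ue8112/Pages/"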
Steve
P.S. Am quite impressed that Linux has a built-in command for doing this kind of thing. In the old days when I used Windows, I needed to install separate software to bulk-download multiple files like this. Go-Zilla, I think it was called. With Linux, it's just all there!