I suspect the issue is that, since you are downloading recursively, Wget will always download HTML pages first so it can scan them for further links, no matter what (the manual has a section mentioning this).
Quote:
Originally Posted by wget manual
Note that these two options do not affect the downloading of HTML files (as determined by a ‘.htm’ or ‘.html’ filename suffix). This behavior may not be desirable for all users, and may be changed for future versions of Wget.
Note, too, that query strings (strings at the end of a URL beginning with a question mark (‘?’)) are not included as part of the filename for accept/reject rules, even though these will actually contribute to the name chosen for the local file. It is expected that a future version of Wget will provide an option to allow matching against query strings.
Finally, it's worth noting that the accept/reject lists are matched twice against downloaded files: once against the URL's filename portion, to determine if the file should be downloaded in the first place; then, after it has been accepted and successfully downloaded, the local file's name is also checked against the accept/reject lists to see if it should be removed. The rationale was that, since ‘.htm’ and ‘.html’ files are always downloaded regardless of accept/reject rules, they should be removed after being downloaded and scanned for links, if they did match the accept/reject lists. However, this can lead to unexpected results, since the local filenames can differ from the original URL filenames in the following ways, all of which can change whether an accept/reject rule matches:
If the local file already exists and ‘--no-directories’ was specified, a numeric suffix will be appended to the original name.
If ‘--adjust-extension’ was specified, the local filename might have ‘.html’ appended to it.
If Wget is invoked with ‘-E -A.php’, a filename such as ‘index.php’ will be accepted, but upon download will be named ‘index.php.html’, which no longer matches, and so the file will be deleted.
Query strings do not contribute to URL matching, but are included in local filenames, and so do contribute to filename matching.
This behavior, too, is considered less-than-desirable, and may change in a future version of Wget.
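So with a command along these lines (example.com is just a placeholder), the HTML pages are still fetched and scanned for links, and are only deleted afterwards because they match the reject list:

Code:
# Recursive crawl that rejects HTML files. The pages are still
# downloaded so Wget can extract links from them, then removed;
# the verbose log shows lines like:
#   Removing example.com/index.html since it should be rejected.
wget -r -l 2 -R '*.html' http://example.com/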
Many thanks for confirming my suspicions with regard to --reject.
I have encountered another issue with --wait: to avoid redundantly downloading rejected pages, I generated a list of URLs and ran wget with --input-file and without recursion. I found that the --wait time was ignored.
Is the --wait parameter only used for recursive downloads?
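To be concrete, the batch invocation was roughly like this (urls.txt is the generated list; the two-second wait is just an example value):

Code:
# Non-recursive batch download from a prepared URL list,
# pausing between requests (or so I expected):
wget --wait=2 --convert-links --input-file=urls.txt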
I looked at httrack half a year ago. Maybe I need to revisit it.
You are right, Ruarí: I checked the log file and found that the --wait time was applied.
What had confused me was that all files had the same timestamp. --convert-links is processed after the download: with the recursive download this happened after every page was fetched, whereas with my non-recursive batch download all files are downloaded first and the conversions are applied afterwards.
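For anyone checking the same thing: the request timestamps in the log, not the file timestamps, are the reliable indicator. A minimal sketch (file names and the wait value are examples):

Code:
# Write the transcript to a log, then pull out the request
# timestamps ("--2024-05-01 10:00:02--  http://...") to see
# the pause between consecutive fetches:
wget --wait=2 --convert-links --input-file=urls.txt -o wget.log
grep '^--' wget.log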