I'm not a Linux user, but I find many Linux tools work just fine under DOS in Windows. GSAR is my current favourite.
I'm currently looking for a tool to remove the tedium of downloading files from some stock exchange sites (part of some research), and I've found that WGET does the downloading job.
WGET works just fine when I know the file name. For example, if I send the command:
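(the actual address isn't one I want to post, so this URL is made up for illustration)

    wget http://www.example-exchange.com/reports/annual_report.pdf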
I get the file I want. However, having to find the file name first defeats my goal: by the time I've located the name by navigating the site, I've already gone through the download process by hand.
What I would like to do is get all the PDF files from that directory overnight and browse them quickly locally each morning. However, when I put in a recursive version of the command, it doesn't work.
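For reference, the sort of recursive command I've been trying looks like this (again with a made-up URL; -r recurses, -l1 keeps it to one level deep, -np stops it climbing to the parent directory, and -A pdf restricts it to PDF files):

    wget -r -l1 -np -A pdf http://www.example-exchange.com/reports/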
I've had a play with some of the other WGET switches suggested on other threads (-U and the cookie options), but I'm not too sure I know what I'm doing with these.
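For what it's worth, this is roughly the form of my attempts with those switches (the user-agent string is just a placeholder, and cookies.txt stands in for a cookie file exported from the browser):

    wget -U "Mozilla/4.0 (compatible; MSIE 6.0)" --load-cookies cookies.txt -r -l1 -np -A pdf http://www.example-exchange.com/reports/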
My guess is that the site is constructed in a manner that prevents bulk downloads. I'm wondering if there is a Unix tool that will let me find out the names of the PDFs in the web directory, so I can then use a batch file with WGET to download the list.
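To make clear what I'm after, here's the sort of two-step batch I have in mind, assuming the directory serves an index page whose links are relative paths (made-up URL again; grep -o needs a GNU grep port):

    wget -O index.html http://www.example-exchange.com/reports/
    grep -o "[A-Za-z0-9_/.-]*\.pdf" index.html | sed "s|^|http://www.example-exchange.com/reports/|" > pdflist.txt
    wget -i pdflist.txt

The first line saves the index page, the second pulls out anything ending in .pdf and prepends the directory address, and the last line has WGET work through the resulting list.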
Any help much appreciated.