How to get all the PDF files of a website recursively?
Quote:
    Do you mean the HTML files? (e.g. my site has no PDF files)
    Have you looked at wget?
Yes, yes. And wget cannot get all the PDF files. I tried:

wget -r -l15 -A.pdf http://www.rasmusen.org/

and when I run:

wget -r -nd --convert-links -l15 -A.pdf http://www.rasmusen.org/

it does not convert a single link.
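Part of the problem may be that wget stays on the starting host by default and respects robots.txt, so PDFs linked from other hosts or from disallowed pages are never fetched. A rough sketch of an invocation that relaxes those limits (the depth, domain list, and robots setting here are assumptions you would adapt to the actual site):

wget -r -l 10 -nd -A pdf,PDF \
     -e robots=off \
     -H -D rasmusen.org \
     http://www.rasmusen.org/

As for --convert-links: when downloads are restricted with -A.pdf, wget still fetches the HTML pages to discover links but deletes them afterwards because they do not match the accept list, so there are no local HTML files left whose links could be converted.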
Have a look at httrack [1]. It is a website copier, and you can specify which files to include or exclude.

[1] http://www.httrack.com/
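In case it is useful, a sketch of an httrack invocation that mirrors the site and explicitly includes PDF files might look like the following (the output directory name and the filter pattern are only illustrative; check the httrack documentation for the exact +/- filter rules):

httrack "http://www.rasmusen.org/" -O ./rasmusen-mirror "+*.pdf" -v

The "+*.pdf" filter asks httrack to accept PDF files wherever they are linked, and -O sets the directory where the mirror is written.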