Download Dropbox Webpage
Hello, I have been trying with wget and lynx to view or download the entire webpage, and somehow it appears empty.
Is there any way to dump the webpage? Here is an example link with 3 wav files from Windows: https://www.dropbox.com/sh/amaeuqdkb...oZikkUpda?dl=0 This is not working: Code:
wget --no-check-certificate -q --referer=http://google.com --user-agent="Mozilla/61.0 Firefox/61.0.0" https://www.dropbox.com/sh/amaeuqdkb2sgddv/AACd9zfKF7PgGyPtoZikkUpda/ -O tmpfile Code:
lynx -dump -listonly https://www.dropbox.com/sh/amaeuqdkb2sgddv/AACd9zfKF7PgGyPtoZikkUpda/ Note: Don't even think of writing here "Why don't you install the dropbox tool from their website". Thank you |
Hi,
Running the following will download a zip archive containing the 3 wav files: Code:
curl https://www.dropbox.com/sh/amaeuqdkb2sgddv/AACd9zfKF7PgGyPtoZikkUpda -L -o file.zip |
The problem is that I want the webpage itself.
|
wget creates a file. If you inspect the head of the file you can see it's a zip file - rename it with a .zip extension and unzip will open it. You get the wav files that are in the Dropbox folder.
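The inspection step above can be sketched as follows. The filename downloaded_file is hypothetical, and the "PK" header is written locally just to illustrate the check without needing a download:

```shell
# A zip archive begins with the magic bytes "PK" (0x50 0x4B).
# "downloaded_file" is a stand-in for whatever wget saved; here we fake
# its first bytes locally so the check can be shown offline.
printf 'PK\003\004stub-contents' > downloaded_file
head -c 2 downloaded_file    # shows: PK
# Once identified, rename it so unzip accepts it by convention:
mv downloaded_file downloaded_file.zip
# unzip downloaded_file.zip  # would extract the wav files from a real archive
```

You can also just run `file downloaded_file`, which reports "Zip archive data" without looking at the bytes yourself.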
As for the webpage - you can use the developer tools for your browser and inspect it if you really want to unravel it. |
And when I speak about lynx and wget I am not speaking about a graphical user interface but the CLI or terminal, basically bash, and I posted in the programming thread, which means I am looking for a solution in the terminal. I am pretty aware that Firefox does that, but what I am writing must be executed in a shell, so Firefox is out of the question. |
Have you looked at curl?
|
I was able to get a non-empty HTML response with HTTPie.
My command was literally just Code:
http https://www.dropbox.com/sh/amaeuqdkb2sgddv/AACd9zfKF7PgGyPtoZikkUpda/ |
Thank you Dugan, that was exactly what I was looking for.
|
Forgot to mention that with dugan's solution, piping to a file produces the desired output:
Code:
http https://www.dropbox.com/sh/amaeuqdkb2sgddv/AACd9zfKF7PgGyPtoZikkUpda/ > tmp.file Code:
http https://www.dropbox.com/sh/amaeuqdkb2sgddv/AACd9zfKF7PgGyPtoZikkUpda/ -o tmp.file And to grab all the links, in case you need them, just use this line: Code:
grep -Eo 'https?://[^"]+\?dl=0' tmp.file | sort | uniq |
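As a self-contained illustration of the link extraction, the page snippet below is made up (the real tmp.file would come from the http command above); the pipeline pulls out every ?dl=0 link and drops duplicates:

```shell
# Hypothetical snippet standing in for the fetched Dropbox page,
# written locally so the grep pipeline can be demonstrated offline.
cat > tmp.file <<'EOF'
<a href="https://www.dropbox.com/sh/amaeuqdkb2sgddv/one.wav?dl=0">one</a>
<a href="https://www.dropbox.com/sh/amaeuqdkb2sgddv/one.wav?dl=0">dup</a>
<a href="https://www.dropbox.com/sh/amaeuqdkb2sgddv/two.wav?dl=0">two</a>
EOF
# -E: extended regex, -o: print only the matched URLs;
# [^"]+ keeps the match inside the href attribute's quotes.
grep -Eo 'https?://[^"]+\?dl=0' tmp.file | sort | uniq
```

This prints the two distinct links (the duplicate is collapsed by sort | uniq); `sort -u` is an equivalent shorthand.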