[SOLVED] Extracting links recursively from a URL and saving them in a text file?
OK, you can get rid of the directory structure with the --no-directories switch, as some members told you earlier.
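For instance, assuming a placeholder URL (the actual site from this thread isn't quoted here), a recursive fetch that flattens everything into the current directory could look like this:

Code:
# -r recurses through links; -nd (--no-directories) skips recreating the remote tree
wget --recursive --no-directories https://example.com/dir1/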
However, my command should still work (on my side it's OK, producing a links.txt file full of links). So please provide the following outputs:
NB: regarding the last two commands, you might want to anonymize the outputs and give us bogus URLs instead. Really, I'm only interested in the overall look of the output and in whether any URLs point to .jpg/.jpeg resources...
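For reference, one common pattern for collecting the discovered links into a text file, sketched here with a placeholder URL (not necessarily the exact command referred to above), is to run wget in spider mode and filter its log:

Code:
# --spider traverses links without saving files; wget logs to stderr, hence 2>&1.
# Verbose log lines start with "--<timestamp>--  <URL>", so the URL is field 3.
wget --spider --recursive --level=2 https://example.com/ 2>&1 \
  | grep '^--' | awk '{print $3}' | sort -u > links.txt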
The wget version is 1.18, but I am not sure how to post the output of the two wget commands. I tried exporting it to a text file, but it didn't work.
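One likely cause: wget writes its progress messages to stderr, so a plain > output.txt redirection captures nothing. Redirect stderr too, or use wget's own log option; for example, with a placeholder URL:

Code:
# capture everything wget prints (stderr included) while still seeing it on screen
wget --recursive https://example.com/dir1/ 2>&1 | tee output.txt
# or have wget write the log itself with -o/--output-file
wget --recursive --output-file=output.txt https://example.com/dir1/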
^ OK, the commands are processed fine. It's just hard for me to analyze their outputs, as you have anonymized them quite heavily!
Yes, it's normal that wget downloads resources from outside the dir1 folder: that's its recursive mode at work. You can add the --level=1 and/or --no-parent options if you want to prevent that behavior. Is it better now?
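As a sketch with a placeholder URL, combining both options keeps the crawl inside dir1 and only one hop deep:

Code:
# --no-parent never ascends above dir1; --level=1 stops recursion after one level
wget --recursive --level=1 --no-parent https://example.com/dir1/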