Don't forget that wget behaves just like a web browser (technically, exactly like one, except that it doesn't render anything; by default it saves the response to a file, or writes it to stdout with -O -). Whatever you could get by typing a URL into a GUI web browser, you can get with wget. As acid_kewpie mentions, wget can also work recursively, meaning it can find and follow the links on the page you initially load.
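A minimal sketch of both points, using a made-up example.com path (the -O and -r options are standard wget):

# fetch one page exactly as a browser would, saving it locally
wget http://example.com/a/b/c/index.html

# same request, but write the raw response to stdout instead of a file
wget -O - http://example.com/a/b/c/index.html

# also follow the links found on that page (recursive retrieval)
wget -r http://example.com/a/b/c/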
I tried this command, but it created a huge directory structure along with the files. I don't want all the directories copied, just the files from the very last directory.
Quote:
Originally Posted by acid_kewpie
You don't necessarily know what files are there to download; you can't literally just request a wildcard.
You can use the -r option to download recursively; try a full command like "wget -r -l1 --no-parent -A.csv domain.com/a/b/c".
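That also addresses the directory-structure complaint above: adding -nd (--no-directories) makes wget save everything into the current directory instead of recreating the remote tree. A sketch, reusing the same hypothetical domain.com path:

# -r: recurse, -l1: one level deep, -nd: don't recreate remote directories,
# --no-parent: never ascend above /a/b/c/, -A.csv: keep only .csv files
wget -r -l1 -nd --no-parent -A.csv http://domain.com/a/b/c/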
Alternatively, curl is useful for downloading patterns of files: a URL glob like [000-999].csv would cover those 1000 files explicitly.
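A sketch of curl's URL globbing, again with the hypothetical domain.com path; -O saves each expanded URL under its remote file name:

# expands to 000.csv, 001.csv, ..., 999.csv and saves each one locally
curl -O "http://domain.com/a/b/c/[000-999].csv"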
You must have either not transcribed the command correctly, or your wget does not support the --level (-l; lower-case 'ell') option, which would have limited the depth of recursion to one directory. Consult the man page for wget.
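The long form makes the intent clearer, and grepping the help output is a quick way to confirm your build supports it (same hypothetical URL as above):

# --level=1 is the long form of -l1
wget -r --level=1 --no-parent -A.csv http://domain.com/a/b/c/

# confirm the option exists in your wget build
wget --help | grep level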