I am, as the forum title suggests, new to Linux and to programming, and I'm having trouble figuring out how to do this.
I have a very large XML file with a lot of information in it. I'm trying to pull out every occurrence of a single tag; each of these tags contains one web link, and I want to download the file at every one of those links. I really don't know how to do this.
My thought, though it's probably not the most efficient or correct way, was to use Vim to search the document, somehow extract every instance of this one particular tag, and then use wget on the links. Really, I don't know how to do any of the above, or whether it's even a good approach.
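One possible sketch of that plan from the command line, without Vim: extract the tag contents with grep and sed, save the URLs to a file, and let wget download the whole list. This assumes the tag is called `<link>` (substitute the real tag name from your XML), and it will only work reliably if each tag and its URL sit on one line without attributes.

```shell
# Hypothetical example: a small sample file standing in for the real XML.
cat > sample.xml <<'EOF'
<feed>
  <item><link>http://example.com/a.pdf</link></item>
  <item><link>http://example.com/b.pdf</link></item>
</feed>
EOF

# Grab each <link>...</link> span (-o prints only the match),
# then strip the surrounding tags, leaving one URL per line.
grep -o '<link>[^<]*</link>' sample.xml \
  | sed -e 's/<link>//' -e 's/<\/link>//' > urls.txt

cat urls.txt

# Hand the list to wget, which downloads every URL in the file:
# wget -i urls.txt
```

If the file is well-formed XML, a proper XML tool is more robust than grep; for instance, `xmllint --xpath '//link/text()' sample.xml` (from libxml2) would pull the same values even when the formatting varies.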
Thanks in advance for any help.