I am new here. I want to ask how I can get the XML of any web page directly, by giving its URL as input.
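For just downloading whatever a URL returns, I know curl by itself can do it, for example:

    curl -s "http://example.com/some/feed" -o output.xml

My real problem is with the YouTube feed itself.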
I am working on extracting data from YouTube. I want to extract all the comments on a video, along with each comment's username and time, into a particular text file.
I have used the YouTube API for this, but I cannot figure out how to proceed.
So, to make it simpler, I want to get the XML of the comment feed and then extract the particular tags I need.
The XML file I am getting is not complete: it only shows up to 50 users.
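From what I have read, this looks like the paging limit of the old GData v2 comments feed: max-results is capped at 50, and start-index picks where the next page begins, so successive pages have to be requested one by one. The feed URL and parameters below are my guess from those v2 docs (VIDEO_ID is a placeholder), so they may need adjusting:

    curl -s "http://gdata.youtube.com/feeds/api/videos/VIDEO_ID/comments?v=2&max-results=50&start-index=1"  > page1.xml
    curl -s "http://gdata.youtube.com/feeds/api/videos/VIDEO_ID/comments?v=2&max-results=50&start-index=51" > page2.xml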
So I want to write a bash or Perl script for this: I give it a YouTube URL on the command line, and it saves the XML to a text file.
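This is the kind of script I have in mind, assuming the legacy GData v2 feed and a standard watch?v= URL; the feed address and the ID extraction are my own assumptions, not tested:

    #!/bin/bash
    # Sketch: download every page of a video's comment feed into one file.
    url="$1"                       # e.g. http://www.youtube.com/watch?v=VIDEO_ID
    id=$(echo "$url" | sed -n 's/.*[?&]v=\([^&]*\).*/\1/p')
    out="${id}_comments.xml"
    : > "$out"                     # start with an empty output file

    start=1
    while : ; do
        feed="http://gdata.youtube.com/feeds/api/videos/$id/comments?v=2&max-results=50&start-index=$start"
        page=$(curl -s "$feed")
        # Stop once a page comes back with no <entry> elements.
        echo "$page" | grep -q "<entry" || break
        echo "$page" >> "$out"
        start=$((start + 50))
    done

I would then run it as ./get_comments.sh 'http://www.youtube.com/watch?v=VIDEO_ID'. Is this roughly the right approach?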
Can anyone help me with this? It is quite urgent.