Originally Posted by Rachit7
What is the command for copying multiple lines from a particular webpage?
What step are you stuck on? To actually get the webpage you can use wget to download it to a local directory.
Then you can process the HTML with other command-line tools. If you don't want to deal with the HTML markup and only need the plain text, lynx is a good solution: it has a command-line option that downloads a webpage and converts it to plain text.
For example:
lynx -nonumbers -dump http://www.example.com/
The above command prints the content of the page to stdout; you can pipe it into another command or redirect it to a file on your disk.
If, for example, you only wanted the first three lines, you could do
lynx -nonumbers -dump http://www.example.com/ | head -n 3
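Since the original question was about copying multiple lines, it's worth adding that sed can grab an arbitrary line range, not just the first N lines. A small sketch (using a locally created stand-in file in place of the lynx dump, so it runs without a network connection; the filename page.txt is just an example):

```shell
# In practice you would first save the page text with something like:
#   lynx -nonumbers -dump http://www.example.com/ > page.txt
# Here we create a small stand-in file so the example is self-contained.
printf '%s\n' line1 line2 line3 line4 line5 line6 > page.txt

# Print only lines 2 through 4 (inclusive) of the dump:
sed -n '2,4p' page.txt
```

The -n flag suppresses sed's default printing, and '2,4p' explicitly prints the chosen range, so only those lines reach stdout.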