Retrieving the links returned by the search engine would probably be quite easy. Most search engines let you put the search terms right in the URL, like:
http://www.google.com/linux?q=word1+word2
All you have to do then is parse the output. A simple script with wget and egrep could probably do most of that.
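As a rough sketch of that parsing step, here's what the egrep part might look like. The sample HTML below stands in for whatever wget would actually fetch; real result pages will be messier, and the exact patterns would need adjusting per engine:

```shell
# Sample results page standing in for wget output, e.g.:
#   html=$(wget -qO- 'http://www.google.com/linux?q=word1+word2')
html='<a href="http://example.com/one">one</a> <a href="http://example.com/two">two</a>'

# Pull out every absolute link, then strip the href="..." wrapper.
echo "$html" \
  | grep -Eo 'href="http[^"]*"' \
  | sed 's/href="//; s/"$//'
```

This prints one URL per line, which is easy to feed into whatever does the next step.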
Your biggest problem would be coming up with some kind of rating system that determines which links are useful and which aren't. Just because a page contains the same word you were searching for doesn't mean the page is useful to you at all. A bot that returns 10,000 links from 10 search engines isn't going to reduce your work if you still have to go through each link yourself to see which ones happen to be useful.
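A very crude starting point for such a rating system would be scoring each fetched page by how often the search terms appear in it. This is only a sketch: the function name and the idea of counting term occurrences are my own illustration, not a real relevance metric, and a serious ranker would need much more than raw word counts:

```shell
# Hypothetical helper: score a saved page by total occurrences of the terms.
# Usage: score_page page.html word1 word2 ...
score_page() {
  page_file=$1; shift
  total=0
  for term in "$@"; do
    # Count case-insensitive occurrences of this term in the page.
    count=$(grep -oi "$term" "$page_file" | wc -l)
    total=$((total + count))
  done
  echo "$total"
}
```

You could then fetch each candidate link with wget, run score_page on it, and sort the links by score, so at least the most term-dense pages float to the top of the pile.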