by "run url" i'm going to assume you want to check it (and not go to some page and run some youtube video or something every ten minutes). the curl command is useful for this.
i would suggest dumping the output into a file and then grepping explicitly for something you know is constant (like the title). yahoo is a bad example, but it was just the example i used:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
there you have a line you know will always be present, so the page itself returned (whether the content is what you expected or not is a different question / issue). so in your case, where the page requested is not the top level, give curl the full path (you need the trailing /).
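a minimal sketch of that check — the url here is a placeholder, and grepping for the title tag is just one choice of "constant" string; substitute whatever you know your page always contains:

```shell
# hypothetical url -- substitute the page you actually want to check
# (note the trailing / when the requested page is not the top level)
url="http://www.example.com/somedir/"
dump=/tmp/pagecheck.$$

# -s: silent (no progress meter), -o: write the body to a file
curl -s -o "$dump" "$url"

# grep for something you know is constant in the page
if grep -q '<title>' "$dump"; then
    echo "page returned"
else
    echo "page missing or changed"
fi
rm -f "$dump"
```

drop something like that into cron every ten minutes and check the message (or mail yourself on the failure branch).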
with the wget command, you can download files from a location. so if you were trying to get a regular update of accessible logs or pix or something, you could use that.
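for example (hypothetical url and directory; -N is wget's timestamping option, which skips the download unless the remote file is newer — handy in a cron job):

```shell
# hypothetical location of the logs you want to keep a copy of
url="http://www.example.com/logs/access.log"

# -q: quiet, -N: only re-download if the remote file changed,
# -P: directory to save into; || true keeps a failed fetch from
# aborting the rest of a cron script
wget -q -N -P /tmp/logcopy "$url" || true
```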
if you want to get the status code (200, 302, 400, 500), well, i don't know... i have been trying to do this with telnet to get the status code info:
[glamiss@myserver ~]$ telnet 0 80
Connected to 0.
Escape character is '^]'.
HTTP/1.1 200 OK
but i don't know how to dump my telnet output
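for what it's worth, curl can report the status code itself via its -w write-out format, which avoids the telnet dance entirely; and if you do stick with telnet, piping the session through tee keeps a copy of the output. a sketch, again with a placeholder url:

```shell
# hypothetical url -- substitute your own
url="http://www.example.com/"

# -s: silent, -o /dev/null: throw away the body,
# -w '%{http_code}': print just the status code after the transfer
code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
echo "status: $code"

# telnet alternative: feed the request on stdin and tee the reply to a file
# printf 'HEAD / HTTP/1.0\r\n\r\n' | telnet www.example.com 80 | tee /tmp/http.log
```

note curl prints 000 when it can't reach the server at all, so you can treat anything other than 200 as a failure in your cron check.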