Blocking Web crawlers, bots, spiders, proxies, etc from private site areas
Hi
I have a web site that runs a number of different web applications: Joomla, Bugzilla, FireStats, Nagios, etc.
While I'm happy for my public site's content to be spidered and cached, I would prefer that certain apps such as Bugzilla not be; rather, I would like them to be accessible via the web for employees, customers, etc., but not publicly advertised or searchable.
Is this something I should be worrying about, and, if so, how do I reduce the ability of, say, spiders to crawl this content?
If you find that people shouldn't have unrestricted access to some information, for whatever reason, then the answer is "yes".
Quote:
Originally Posted by acid_kewpie
formally you would use the robots.txt file.
IMHO a robots.txt should fit into a set of measures like a DMZ, a firewall, per-application or webserver-configurable access restrictions and authentication, use of HTTPS, reverse proxies, tunneling and whatnot. The best way to select which of these to implement is to look at what your info is worth.
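For example, a minimal robots.txt in the document root might look like this (the Disallow paths are just guesses at where your apps live; adjust them to your layout):

Code:
User-agent: *
Disallow: /bugzilla/
Disallow: /nagios/
Disallow: /firestats/

Bear in mind that robots.txt is world-readable, so listing a path there also advertises it, and it only keeps out crawlers that choose to honour it. That's why it belongs alongside real access controls rather than in place of them.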
First of all I should say that password protection should stop robots from accessing valuable data, but I guess I want to be proactive and don't want web searches turning up links to my private app area.
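For the record, the password protection I have in mind is plain HTTP auth in Apache, something along these lines (the /var/www/bugzilla path and the htpasswd file location are just placeholders for my real layout):

Code:
# Require a login for the Bugzilla directory (paths are examples)
<Directory /var/www/bugzilla>
    AuthType Basic
    AuthName "Employees and customers only"
    AuthUserFile /etc/httpd/htpasswd-private
    Require valid-user
</Directory>

with the password file created via htpasswd -c /etc/httpd/htpasswd-private someuser.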
Next, I have configured a robots.txt and expect those robots that respect it not to index the parts of my site that are off limits. However, I'm taking the approach that not every bot is good (i.e. not all of them are well-behaved bots like Google, Yahoo, curl, etc.) and that there will be bots that either a) ignore my robots.txt or, worse, b) use robots.txt as a pointer to content worth attacking.
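One approach I'm considering for those badly behaved bots: leave the private paths out of robots.txt entirely and instead send an X-Robots-Tag response header, so compliant search engines that stumble across them still won't index them, while the password protection keeps everyone else out. Something like this with Apache's mod_headers (the path names are again just examples from my setup):

Code:
# Ask search engines not to index or follow anything under these areas,
# even if they find links to them elsewhere (needs mod_headers enabled).
<LocationMatch "^/(bugzilla|nagios|firestats)/">
    Header set X-Robots-Tag "noindex, nofollow"
</LocationMatch>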