Linux - Security: This forum is for all security-related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.
It successfully blocked the first address, but not the second.
I tried other variations, such as using the IP address instead and adding www. to the front, but it is not working. I tried many other addresses and had the same problem. How do I get this to work?
/etc/hosts is used to equate the NAME of a host with its IP address.
Adding these names to the 127.0.0.1 line would have some effect at preventing access, not because it blocks the hosts but because it makes it seem like the host has an IP it doesn't really have. Any attempt to go to the host (www.google.com for example) would actually go to localhost. If you had a web server running on the local host, it would open that local web page rather than the remote host's.
Also you can't have multiple lines for a given IP address. It will read the hosts file from top to bottom and use the first hit only.
To assign multiple names you'd have to do something like:
127.0.0.1 localhost slashdot.org youtube.com
As you might imagine, there would be a limit to how many host names you could put on one line like that. You really want the "localhost" entry there as well, since that is what the address really is.
However as mentioned above this is not the purpose of /etc/hosts. You should look for other ways to block hosts you don't want accessed. iptables comes to mind. Using iptables rather than blocking specific hosts you can just allow the hosts you do want which may be a smaller list.
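To illustrate the iptables approach suggested above, here is a rough sketch (the domain is just the example used later in this thread, and the whitelist host is hypothetical). Note that iptables resolves a hostname to its IP addresses only once, at rule-insertion time, so this will not track later DNS changes:

```
# Blacklist style: reject outbound web traffic to one host (run as root)
iptables -A OUTPUT -p tcp -d doubleclick.net --dport 80 -j REJECT

# Whitelist style, as suggested above: allow only the hosts you want,
# then reject all other outbound web traffic
iptables -A OUTPUT -p tcp -d www.example.com --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j REJECT
```

The whitelist form is usually the shorter list to maintain, which is the point made above.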
In addition to what Jlightner wrote: if whitelisting ain't gonna work, and it's just WWW traffic, you could use Privoxy or another proxy to block. Adding this to user.action:
Code:
{+block}
.doubleclick.net
would block all doubleclick.net HTTP hosts. You have to make sure all web traffic is routed through the proxy first and that no other exits exist, like tunnelling web traffic somewhere else.
If you still want to block at the DNS level, then in pdnsd (a caching nameserver) you can block complete domains like this:
Code:
neg {
name=doubleclick.net;
types=domain;
}
In this case you have to make sure people can't use *proxies* to route traffic through.
Quote:
Also you can't have multiple lines for a given IP address.
Apparently by setting RESOLV_MULTI or running "echo multi >> /etc/host.conf" you can.
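For anyone trying this, a sketch of what the resulting resolver configuration might look like, per host.conf(5); the "multi" directive is what makes the resolver return all matching /etc/hosts entries rather than stopping at the first:

```
# /etc/host.conf (glibc resolver configuration)
order hosts,bind   # consult /etc/hosts before DNS
multi on           # return ALL matching /etc/hosts entries, not just the first
```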
Quote:
Apparently by setting RESOLV_MULTI or running "echo multi >> /etc/host.conf" you can.
That's a new one on me. Thanks for the info. Of course I still wouldn't use /etc/hosts for blocking but nice to know you could have multiple lines like that. I wonder what kind of lag having dozens of lines for a single IP might cause.
Quote:
/etc/hosts is used to equate the NAME of a host with its IP address.
Adding these names to the 127.0.0.1 line would have some effect at preventing access, not because it blocks the hosts but because it makes it seem like the host has an IP it doesn't really have. Any attempt to go to the host (www.google.com for example) would actually go to localhost. If you had a web server running on the local host, it would open that local web page rather than the remote host's.
Using /etc/hosts this way will in effect prevent resolving the name. It is an easy, lightweight way to prevent access, and is commonly used. I don't think the original poster deserves a scolding on such a technicality.
Quote:
To assign multiple names you'd have to do something like:
127.0.0.1 localhost slashdot.org youtube.com
I tested it out using multiple lines and it worked.
hosts.deny is to prevent a different host from connecting to your computer. It won't prevent browsing to a site. If the OP wants a kid-proof solution, then only allowing access through a proxy would be effective enough; someone could otherwise edit /etc/hosts or boot up with a live distro to defeat restrictions on the same host. Then the only traffic possible is through the squid/DansGuardian proxy or maybe an appliance such as iBoss. ISPs often provide a filtering service for around $1/month. However, whether it is user-configurable or just blocks porn sites depends on the ISP.
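For the squid route mentioned above, a minimal sketch of what the access control might look like in squid.conf (the ACL names and the local network range are hypothetical; only doubleclick.net comes from this thread):

```
# squid.conf sketch: deny listed domains, allow the rest from the LAN
acl blocked_sites dstdomain .doubleclick.net
acl localnet src 192.168.1.0/24
http_access deny blocked_sites
http_access allow localnet
http_access deny all
```

As noted above, this only helps if clients are forced through the proxy, e.g. by firewalling direct outbound port 80.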
Quote:
Using /etc/hosts this way will in effect prevent resolving the name. It is an easy, lightweight way to prevent access, and is commonly used. I don't think the original poster deserves a scolding on such a technicality.
Quote:
I tested it out using multiple lines and it worked.
Try that, and you will find that you can enter "ping www.youtube.com" and ping yourself.
I indicated it could be done and even told him how to do it.
I also later indicated that I was unaware of the setting that allows for multiple lines. Your post therefore added nothing to that.
It's not the way I learned it in UNIX and I'm not sure it would work in any of the UNIX variants I use. My intent wasn't to "scold" but to highlight the fact that it doesn't really BLOCK anything - it redirects to localhost. That might have unusual effects if one had port 80 or 8080 in use for some web service on the local host.
As to its commonality for such blocking use - I've been doing UNIX/Linux full time as an admin since 1991 and this is the first time I've seen it. Perhaps it's a difference between the way professionals do it and the way home users do it.
Quote:
I wonder what kind of lag having dozens of lines for a single IP might cause.
You have the power of GNU/Linux, no need to wonder, just test it I'd say...
Anyway. Since we apparently landed in detail-country: one other thing I forgot to add is that when using /etc/hosts for resolving purposes this way, you may need to check the lookup order in /etc/host.conf and /etc/nsswitch.conf so that the files database is consulted before DNS querying. (Also see: "null routing".) Maybe the idea for (ab)using /etc/hosts stems from the other OS, where online docs suggest using it and anti-malware applications do use the equivalent, since the system itself does not provide any other easy and generic option.
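For reference, the relevant lookup-order line would look like this in nsswitch.conf(5); on most current distros "files" already precedes "dns", but it is worth verifying:

```
# /etc/nsswitch.conf -- consult /etc/hosts ("files") before DNS
hosts: files dns
```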
Another thing I just realized was missed in the above discussion. This would redirect things input in the browser by name but if the user knew the correct IP they could bypass the redirect by simply typing the IP in the browser.
Since nslookup/dig/host don't actually interrogate /etc/hosts in Linux the user could use any of those commands to determine the real IP.
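A quick way to see the difference described above: getent goes through the glibc NSS lookup chain (gethostbyname/getaddrinfo), so it does honour /etc/hosts, whereas dig queries the nameserver from /etc/resolv.conf directly. A sketch:

```shell
# getent uses the NSS chain, so /etc/hosts entries are visible to it
getent hosts localhost

# dig bypasses /etc/hosts entirely and asks the DNS server
# (requires network access, so shown here commented out):
# dig +short www.google.com
```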
This discussion pops up occasionally on various forums; here's my 5 cents:
Using /etc/hosts to block a couple hosts (in the manner described here) is probably fine for a personal desktop. This will quickly get unwieldy as the list of hosts grows, though.
As mentioned, iptables can be used to drop outbound connections to those hosts (with the same caveat as the above).
IMO if you're getting into the "regulating outbound http traffic business", you're going to need to start using squid.
Quote:
Since nslookup/dig/host don't actually interrogate /etc/hosts in Linux the user could use any of those commands to determine the real IP.
"strace -eopen" shows dig, host and nslookup do honour /etc/nsswitch.conf and /etc/resolv.conf. I already pointed to "hosts" lookup order change in nsswitch.conf.
Quote:
This would redirect things input in the browser by name but if the user knew the correct IP they could bypass the redirect by simply typing the IP in the browser.
You're right. It shows why any form of whitelisting is the "easiest way out". Not that it's any consolation but some hosts do not react well to HTTP access by IP where an FQDN is expected...
I thought, as you did, that these commands would look at /etc/hosts, mainly because nslookup does on HP-UX. However, I've proven to myself, after having others point it out to me, that it doesn't on Linux or Solaris or SCO. Posters have indicated ONLY HP-UX does this. You can do as I did and test doing nslookup and host for something you know is only in your /etc/hosts file and see what I mean.
It may be you're thinking of the underlying C routines gethostbyaddr, gethostbyname, etc., which DO interrogate /etc/hosts. To me this is a flaw (that is to say, IMO HP-UX does it right) but most of the posters I've traded comments with don't like the fact that HP-UX does it the way it does. They point out that "ns" in "nslookup" means name server rather than file. Of course I point out that nslookup is deprecated in favor of host, and host has no "ns" in its name.
OK, I did the test, and I see what you mean. Wget does a read on /etc/hosts, dig doesn't, but reads /etc/resolv.conf (and in my case the DNS reads /etc/hosts).