Linux - Security: This forum is for all security related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
01-21-2007, 04:52 PM | #1
Member | Registered: Aug 2005 | Distribution: Debian 7 | Posts: 526
/etc/hosts doesn't always block sites
I tested /etc/hosts by adding the lines:
127.0.0.1 slashdot.org
127.0.0.1 youtube.com
It successfully blocked the first address, but not the second.
I tried other variations, such as using the IP address instead and adding www. to the front, but none of them worked. I tried many other addresses and had the same problem. How do I get this to work?
01-21-2007, 05:14 PM | #2
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
/etc/hosts is NOT used to BLOCK hosts!
/etc/hosts is used to equate the NAME of a host with its IP address.
Adding these names to the 127.0.0.1 line would have some effect at preventing access, not because it blocks the hosts but because it makes it seem like the host has an IP it doesn't really have. Any attempt to go to the host (www.google.com, for example) would actually go to localhost. If you had a web server running on the local host, it would open that local web page rather than the remote host's.
Also, you can't have multiple lines for a given IP address: the hosts file is read from top to bottom and only the first hit is used.
To assign multiple names you'd have to do something like:
Code:
127.0.0.1 localhost slashdot.org youtube.com
As you might imagine, there would be a limit to how many host names you could put on one line like that. You really want the "localhost" entry there as well, since that is what the address really is.
However, as mentioned above, this is not the purpose of /etc/hosts. You should look for other ways to block hosts you don't want accessed; iptables comes to mind. With iptables, rather than blocking specific hosts, you can just allow the hosts you do want, which may be a smaller list.
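Such a whitelist might be sketched like this (an illustration only, run as root; the address and port choices are assumptions, not taken from the thread):

```shell
# Sketch of an outbound whitelist with iptables.
# 198.51.100.10 is a placeholder for an approved web server.
iptables -A OUTPUT -o lo -j ACCEPT                # keep loopback traffic working
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT    # still allow DNS lookups
iptables -A OUTPUT -p tcp -d 198.51.100.10 --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp -j REJECT               # refuse all other outbound TCP
```

Using REJECT rather than DROP for the final rule gives the client an immediate error instead of a long timeout.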
01-21-2007, 06:54 PM | #3
Senior Member | Registered: Aug 2003 | Location: Berkeley, CA | Distribution: Mac OS X Leopard 10.6.2, Windows 2003 Server/Vista/7/XP/2000/NT/98, Ubuntux64, CentOS4.8/5.4 | Posts: 2,986
Heh, I suppose you could use an iptables rule to block everything outbound and then specify the hosts you do want to let out:
Code:
iptables -A OUTPUT -p tcp -j DROP
Or just use a proxy server for the websites you want to block.
Last edited by Micro420; 01-21-2007 at 06:55 PM.
01-21-2007, 06:56 PM | #4
Moderator | Registered: May 2001 | Posts: 29,415
In addition to what Jlightner wrote: if whitelisting isn't going to work, and it's just WWW traffic, you could use Privoxy or another proxy to block it. Adding this to user.action:
Code:
{+block}
.doubleclick.net
would block all doubleclick.net HTTP hosts. You have to make sure all web traffic is routed through the proxy first and that no other exits exist, like tunnelling web traffic somewhere else.
If you still want to block at the DNS level, then in pdnsd (a caching nameserver) you can block complete domains like this:
Code:
neg {
    name=doubleclick.net;
    types=domain;
}
In this case you have to make sure people can't use *proxies* to route traffic through.
Quote:
Also you can't have multiple lines for a given IP address.
Apparently by setting RESOLV_MULTI, or with "echo multi >> /etc/host.conf", you can.
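To see the difference between the default first-hit lookup and the "multi" behaviour, here is a small simulation against a sample hosts file (awk stands in for the resolver here; real lookups go through the C library):

```shell
# A sample hosts file with the same name appearing on several lines.
hosts='127.0.0.1 localhost slashdot.org
127.0.0.2 youtube.com
127.0.0.3 youtube.com'

# Default behaviour: first match wins, so youtube.com maps to 127.0.0.2.
printf '%s\n' "$hosts" | awk -v h=youtube.com \
    '{ for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }'

# With "multi on" in /etc/host.conf the resolver returns every match.
printf '%s\n' "$hosts" | awk -v h=youtube.com \
    '{ for (i = 2; i <= NF; i++) if ($i == h) print $1 }'
```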
01-21-2007, 07:04 PM | #5
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
Quote:
Apparently by setting RESOLV_MULTI, or with "echo multi >> /etc/host.conf", you can.
That's a new one on me; thanks for the info. Of course I still wouldn't use /etc/hosts for blocking, but it's nice to know you can have multiple lines like that. I wonder what kind of lag having dozens of lines for a single IP might cause.
01-21-2007, 07:13 PM | #6
LQ Guru | Registered: Aug 2001 | Location: Fargo, ND | Distribution: SuSE AMD64 | Posts: 15,733
Quote:
Originally Posted by jlightner
/etc/hosts is NOT used to BLOCK hosts!
/etc/hosts is used to equate the NAME of a host with its IP address.
Adding these names to the 127.0.0.1 line would have some effect at preventing access, not because it blocks the hosts but because it makes it seem like the host has an IP it doesn't really have. Any attempt to go to the host (www.google.com, for example) would actually go to localhost. If you had a web server running on the local host, it would open that local web page rather than the remote host's.
Using /etc/hosts this way in effect prevents resolving the name. It is an easy, lightweight way to prevent access, and it is commonly used. I don't think the original poster deserves a scolding over such a technicality.
Quote:
Originally Posted by jlightner
To assign multiple names you'd have to do something like:
127.0.0.1 localhost slashdot.org youtube.com
I tested it out using multiple lines and it worked.
You can also use other loopback addresses, like:
Code:
127.0.0.10 www.youtube.com youtube.com
127.0.0.11 www.slashdot.org slashdot.org
Try that, and you will find that you can enter "ping www.youtube.com" and ping yourself.
Last edited by jschiwal; 01-21-2007 at 07:18 PM.
01-21-2007, 07:19 PM | #7
Member | Registered: Oct 2003 | Location: Canada | Distribution: ArchLinux && Slackware 10.1 | Posts: 298
To disallow hosts you need to use /etc/hosts.deny
01-21-2007, 08:20 PM | #8
LQ Guru | Registered: Aug 2001 | Location: Fargo, ND | Distribution: SuSE AMD64 | Posts: 15,733
Hosts.deny is used to prevent a different host from connecting to your computer; it won't prevent browsing to a site. If the OP wants a kid-proof solution, then only allowing access through a proxy would be effective enough. Someone could otherwise edit /etc/hosts or boot up with a live distro to defeat restrictions on the same host. Then the only traffic possible is through the squid/DansGuardian proxy, or maybe an appliance such as iBoss. ISPs often provide a filtering service for around $1/month; however, whether it is user-configurable or just blocks porn sites depends on the ISP.
Last edited by jschiwal; 01-21-2007 at 08:31 PM.
01-21-2007, 10:06 PM | #9
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
Quote:
Originally Posted by jschiwal
Using /etc/hosts this way in effect prevents resolving the name. It is an easy, lightweight way to prevent access, and it is commonly used. I don't think the original poster deserves a scolding over such a technicality.
I tested it out using multiple lines and it worked.
You can also use other loopback addresses, like:
127.0.0.10 www.youtube.com youtube.com
127.0.0.11 www.slashdot.org slashdot.org
Try that, and you will find that you can enter "ping www.youtube.com" and ping yourself.
I indicated it could be done and even told him how to do it.
I also later indicated that I was unaware of the setting that allows for multiple lines, so your post added nothing to that.
It's not the way I learned it in UNIX, and I'm not sure it would work in any of the variants that I use. My intent wasn't to "scold" but to highlight the fact that it doesn't really BLOCK anything; it redirects to localhost. That might have unusual effects if one had port 80 or 8080 in use for some web service on the local host.
As to how common it is for such blocking use: I've been doing UNIX/Linux full time as an admin since 1991 and this is the first time I've seen it. Perhaps it's a difference between the way professionals do it and the way home users do it.
01-22-2007, 05:21 AM | #10
Moderator | Registered: May 2001 | Posts: 29,415
Quote:
I wonder what kind of lag having dozens of lines for a single IP might cause.
You have the power of GNU/Linux; no need to wonder, just test it, I'd say...
Anyway. Since we apparently landed in detail-country: one other thing I forgot to add is that one consequence of using /etc/hosts for resolving purposes this way is that you'll have to change the lookup order in /etc/host.conf and /etc/nsswitch.conf if DNS querying precedes the files ("db") lookup there. (Also see: "null routing".) Maybe the idea for (ab)using /etc/hosts stems from the other OS, where online docs suggest using it and anti-malware applications do use the equivalent, since that system itself does not provide any other easy, generic option.
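The "null routing" alternative mentioned there can be sketched as follows (a sketch only, run as root; the network range is a documentation placeholder, not an address from this thread):

```shell
# Blackhole a destination network at the routing layer.
# 203.0.113.0/24 stands in for the range you want unreachable.
ip route add blackhole 203.0.113.0/24
ip route show type blackhole              # verify the route is in place
ip route del blackhole 203.0.113.0/24     # remove it again when done
```

Unlike the /etc/hosts trick, this works even when the user types the IP directly, though it blocks by address rather than by name.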
01-22-2007, 09:44 AM | #11
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
Well, if I had wanted to test it I would have; I was just musing.
Another thing I just realized was missed in the discussion above: this would redirect things entered in the browser by name, but if the user knew the correct IP they could bypass the redirect by simply typing the IP into the browser.
Since nslookup/dig/host don't actually interrogate /etc/hosts on Linux, the user could use any of those commands to determine the real IP.
01-22-2007, 10:19 AM | #12
Senior Member | Registered: Nov 2004 | Location: Texas | Distribution: RHEL, Scientific Linux, Debian, Fedora | Posts: 3,935
This discussion pops up occasionally on various forums; here's my 5 cents:
Using /etc/hosts to block a couple of hosts (in the manner described here) is probably fine for a personal desktop. This will quickly get unwieldy as the list of hosts grows, though.
As mentioned, iptables can be used to drop outbound connections to those hosts (with the same caveat as above).
IMO, if you're getting into the "regulating outbound HTTP traffic" business, you're going to need to start using squid.
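A squid setup for this could start from an ACL along these lines (a sketch only; the domain names are just the examples from this thread):

```
# squid.conf fragment: deny requests to listed destination domains
acl blocked_sites dstdomain .slashdot.org .youtube.com
http_access deny blocked_sites
```

Leading dots match the domain and all of its subdomains, so www.youtube.com is covered by .youtube.com.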
01-22-2007, 10:43 AM | #13
Moderator | Registered: May 2001 | Posts: 29,415
Quote:
Since nslookup/dig/host don't actually interrogate /etc/hosts on Linux the user could use any of those commands to determine the real IP.
"strace -eopen" shows that dig, host and nslookup do honour /etc/nsswitch.conf and /etc/resolv.conf. I already pointed to the "hosts" lookup-order change in nsswitch.conf.
Quote:
This would redirect things entered in the browser by name, but if the user knew the correct IP they could bypass the redirect by simply typing the IP into the browser.
You're right. It shows why any form of whitelisting is the "easiest way out". Not that it's any consolation, but some hosts do not react well to HTTP access by IP where an FQDN is expected...
01-22-2007, 12:07 PM | #14
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
I thought, as you do, that these commands would look at /etc/hosts, mainly because nslookup does on HP-UX. However, I've proven to myself, after having others point it out to me, that it doesn't on Linux or Solaris or SCO; posters have indicated ONLY HP-UX does this. You can do as I did and test nslookup and host against something you know is only in your /etc/hosts file, and see what I mean.
It may be that you're thinking of the underlying C routines gethostbyaddr, gethostbyname, etc., which DO interrogate /etc/hosts. To me this is a flaw (that is to say, IMO HP-UX does it right), but most of the posters I've traded comments with don't like the fact that HP-UX does it the way it does. They point out that the "ns" in "nslookup" means name server rather than file. Of course, I point out that nslookup is deprecated in favor of host, and host has no "ns" in its name.
01-22-2007, 05:37 PM | #15
Moderator | Registered: May 2001 | Posts: 29,415
OK, I did the test, and I see what you mean. wget does a read on /etc/hosts; dig doesn't, but it reads /etc/resolv.conf (and in my case the DNS server reads /etc/hosts).