Squid 2.6 DNS Timeout Issue
This is my first post, so be gentle with me... ;)
I am trying to replace an aging Microsoft ISA server (Windows 2000 Server, ISA Server 2000) with a CentOS 5.4 server running Squid 2.6.
This box was built using the PBX in a Flash distro. I have the PBX up and running and three extensions working on my internal network. I eventually want my daughter at college to be able to establish a SIP connection back to this box, so I planned on having it replace the ISA server as my Internet Gateway/firewall. My next step was to get the Squid proxy running, then Sendmail, then a firewall package (haven't settled on one yet, but I like what I have seen of Endian).
My problem is that Squid cannot seem to resolve FQDNs when a client wants to surf out. I get the following message consistently, from either XP Pro SP3 or my Ubuntu laptop, using IE, Chrome, or Firefox:
The requested URL could not be retrieved
While trying to retrieve the URL: http://www.yahoo.com/
The following error was encountered:
Unable to determine IP address from host name for www.yahoo.com
The dnsserver returned:
This means that:
The cache was not able to resolve the hostname presented in the URL.
Check if the address is correct.
Your cache administrator is root.
Generated Sun, 09 May 2010 13:24:44 GMT by sbs-pXp.asbs.yahoodns.net (squid/2.6.STABLE21)
Yet I can ping the same address from the command line on the CentOS box and get a reply.
Contents of /etc/resolv.conf:
# Generated by NetworkManager
# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
# DOMAIN=lab.foo.com bar.foo.com
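For what it's worth, Squid 2.6 picks up its nameservers from /etc/resolv.conf at startup (unless dns_nameservers is set in squid.conf), and the file shown above contains no nameserver lines at all, only comments. A quick sketch to list what Squid would actually use (assuming the default /etc/resolv.conf path):

```shell
# Print the nameserver addresses Squid 2.6 would read at startup.
# Only lines beginning with "nameserver" matter here; comments and
# search/domain lines are ignored for this purpose.
awk '/^nameserver/ {print $2}' /etc/resolv.conf
```

If this prints nothing, adding a nameserver line (for example the Westell's 192.168.2.254 address mentioned later in the post) and restarting Squid would be a reasonable first check.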
Contents of squid.conf (only non-commented lines shown):
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
icp_access allow all
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl apache rep_header Server ^Apache
acl Errantry-Local src 192.168.0.1/255.255.255.0
http_access allow manager localhost
http_access allow localhost
http_access allow Errantry-Local
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
broken_vary_encoding allow apache
The Internet connection is a BellSouth/AT&T DSL line with a static IP, connected through a Westell DSL2+ router in IP Passthrough mode. eth0 of the CentOS box gets DHCP from this router on the 192.168.2 network; DNS is set to the Westell's 192.168.2.254 address. eth1 is static on the internal network, 192.168.0.110. Clients connect to this address on port 8080.
This same setup works fine for the ISA box. I think my problem has to be in the squid.conf file, because (a) ISA works with the Westell using DNS from the Westell router, (b) CentOS can ping out and get replies from the command line, and (c) multiple clients experience the same problem.
Any help is appreciated.
Shawn, just your Average_joe...
It seems that you have not added a rule that allows access from your local network.
You have to add the following two lines to your squid.conf file:
acl myNetwork src 192.168.2.0/255.255.255.0
http_access allow myNetwork
Add these lines above the line
"http_access deny all"
because the sequence matters here: Squid applies http_access rules in order and stops at the first match.
In your browser, set the proxy to point to your Squid server's IP and port 8080 to access the web.
Restart Squid and you should be able to browse.
Hope this helps.
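Putting that together, the relevant tail of squid.conf would look something like this (a sketch; the 192.168.2.0/24 subnet is the one suggested above, so adjust it to whatever network your clients actually sit on):

```
# Define the local client network (adjust the subnet to your LAN)
acl myNetwork src 192.168.2.0/255.255.255.0

# Allow rules must come before the final deny;
# Squid stops at the first http_access rule that matches.
http_access allow myNetwork
http_access deny all
```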
Still Not Working
Thank you for the suggestion to add 192.168.1.0 to the list of allowed networks. I didn't think this would fix the issue, but I applied the change anyway, and all clients still have the same problem.
The 192.168.1.0 network exists only between the Westell modem and the Squid server; there are no other devices on it. The client network is 192.168.0.0. Please see the attached diagram.
The clients have no trouble connecting to the Squid server and do get a reply from it, using 192.168.0.110, port 8080 in their proxy settings.
Researching this further, I looked at the DNS options in Squid and found a reference saying such errors are written to /var/log/squid/cache.log. There I found a bunch of entries like this (date and time stamps removed):
comm_udp_sendto: FD 6, 192.168.1.254, port 53: (22) Invalid argument
idnsSendQuery: FD 6: sendto: (22) Invalid argument
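For anyone hitting the same thing, a quick way to pull just the DNS-related errors out of the log (assuming the default cache.log location):

```shell
# Show the most recent DNS errors from Squid's cache log
grep -E 'idnsSendQuery|comm_udp_sendto' /var/log/squid/cache.log | tail
```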
I googled the second line and found a bug report here:
Looking at this, I saw that I had also tried to restrict Squid's incoming UDP traffic to my local 192.168.0.0 network. Removing the line udp_incoming_address 192.168.0.110 and going back to the default udp_incoming_address 0.0.0.0 seems to have fixed the problem; I am writing this reply with my XP desktop proxying through the Squid server.
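For anyone searching later, the change amounts to this in squid.conf (a sketch; if I understand the bug report correctly, udp_incoming_address also binds the socket Squid's internal resolver uses, so pinning it to an interface that cannot reach your DNS server makes the outgoing queries fail):

```
# Before: bound UDP to the internal interface, which broke DNS queries
# sent toward the nameserver on the other interface's network
#udp_incoming_address 192.168.0.110

# After: the default, listen on all interfaces
udp_incoming_address 0.0.0.0
```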
In cache.log: comm_udp_sendto: FD 6, 188.8.131.52, port 53: (105) No buffer space available
Hi everyone, I am using Squid 2.6.
When clients use the Internet, they get the error
"buffer space is not available (105)".
When I check the cache.log file, I see:
comm_udp_sendto: FD 6, 184.108.40.206, port 53: (105) No buffer space available
Can somebody please help me?
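Error 105 (ENOBUFS) is a different problem from the one solved earlier in this thread: here the kernel is refusing the UDP send for lack of socket buffer space rather than rejecting the socket's arguments. As a first diagnostic (a sketch; the exact sysctls and sensible values depend on your kernel), you could check the kernel's socket send-buffer limits:

```shell
# Show the default and maximum socket send-buffer sizes (bytes).
# ENOBUFS on sendto() can indicate these are too small for the load,
# or that an interface transmit queue is overflowing.
sysctl net.core.wmem_default net.core.wmem_max
```

If they look small, raising them (e.g. with sysctl -w) and checking ifconfig output for interface errors or drops would be reasonable next steps.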