Linux - Server. This forum is for the discussion of Linux software used in a server-related context.
I'm running a server on CentOS 4 that provides Squid, DHCP, and DNS. Nothing fancy, just a standard gateway machine, with only about 50 clients behind it.
We have a 2 Mbps leased line.
Now, everything works fine... till 4 PM (1600 hrs). Then the net connection slows down a lot and browsing becomes really bad. If I use wget to start a direct download from the gateway, I get full speed (220 KB/s), so it's not an upstream problem.
I have looked at the crontab and cron.d, and there is nothing in there that should cause Squid to time out. cron.daily runs at its default of 4 AM, not PM, and I haven't changed anything in it.
After about 45 minutes, the net works fine again.
I have run a tail -f on /var/log/messages: nothing shows up.
I also ran a tail -f on /var/log/squid/cache.log: nothing shows up there either.
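One more place worth checking: cache.log mostly records errors and restarts, but access.log records every request, with the elapsed time in milliseconds as the second field (in Squid's default native log format). Filtering for slow entries during the 4 PM window would show whether Squid itself is serving slowly or whether requests never get that far. A rough sketch, assuming the default log format and location:

```shell
# List requests in access.log that took longer than 10 seconds to complete.
# With Squid's default (native) log format, field 1 is the Unix timestamp,
# field 2 the elapsed time in ms, field 4 the result code, field 7 the URL.
awk '$2 > 10000 { print $1, $2 " ms", $4, $7 }' /var/log/squid/access.log | tail -20
```

The timestamps are epoch seconds; `date -d @1176543210` converts one to local time.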
Any ideas? What am I missing? It's driving me batty. I'll be happy to provide more information as required. O Squid gurus, help!
Here's my squid.conf
-------
http_port 3128
#We recommend you to use the following two lines.
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 640 MB
#Suggested default:
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
dns_nameservers 192.168.1.4 208.67.222.222 208.67.220.220 202.138.103.100 202.138.96.3
#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
### Added extra
acl lan src 192.168.1.0/255.255.255.0
acl allow_host src 192.168.1.10
acl allow_host src 192.168.1.178
http_access allow allow_host
#### END
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
cache_dir aufs /var/spool/squid 4000 16 256
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# And finally deny all other access to this proxy
http_access allow localhost
# Block all porn sites.....
acl porn url_regex "/etc/squid/porn.txt"
acl noporn url_regex "/etc/squid/noporn.txt"
#acl porn url_regex "/etc/squid/porn.txt"
http_access allow noporn
http_access deny porn
####END
### Enable below once internet is up and disable above ####
http_access allow lan
http_access deny all
# and finally allow by default
http_reply_access allow all
visible_hostname firewall
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
-------
File descriptor usage for squid:
Maximum number of file descriptors: 1024
Largest file desc currently in use: 115
Number of file desc currently in use: 46
Files queued for open: 0
Available number of file descriptors: 978
Reserved number of file descriptors: 100
Store Disk files open: 0
I would think this is ok, right? Even at peak load, it always has at least 300 file descriptors free. Is this enough?
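978 free descriptors for 50 clients looks healthy; the 1024 limit would only matter if "Available" ever dropped near the 100 reserved. As a cross-check against what Squid reports, the kernel's own count for the running process can be read from /proc. A sketch (`pgrep -o` picks the oldest, i.e. parent, squid process):

```shell
# Count the file descriptors the running Squid process actually holds open,
# straight from the kernel's /proc view, to compare with Squid's own report.
pid=$(pgrep -o squid)
ls /proc/"$pid"/fd | wc -l
```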
"Install 'iftop', then run it when the connection's going slow. You should be able to see what's hogging your bandwidth from there."
I'll try this tomorrow and let you know what's happening. The thing is, though, even when no clients are connected (I've physically disconnected everything except one Apple Mac and tried), the browsing is still dead slow. But direct downloads are fast! It's not a DNS issue, because I have tried OpenDNS and others, and DNS works just fine. Still, I'll give this a shot and let you know what I find.
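Since direct downloads fly but proxied browsing crawls, it might help to split a single fetch into name-lookup, connect, and transfer time during the slow window, once through Squid and once direct, and compare the two lines. A sketch using curl's timing variables (the proxy address 192.168.1.4:3128 is a placeholder, since the gateway's LAN IP isn't given here; substitute the real one):

```shell
# Time the same URL through the proxy and directly; compare the two lines.
# 192.168.1.4:3128 is a placeholder -- use the gateway's actual LAN address.
fmt='dns:%{time_namelookup}s connect:%{time_connect}s total:%{time_total}s\n'
curl -o /dev/null -s -w "$fmt" --proxy http://192.168.1.4:3128 http://www.kernel.org/
curl -o /dev/null -s -w "$fmt" http://www.kernel.org/
```

If `total` balloons only on the proxied line while `dns` and `connect` stay small, the delay is inside Squid rather than on the wire.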
Thanks for the help so far, and keep it coming, folks! I feel like shooting that damn server sometimes.
Yup, checked top. The Squid process uses around 8-12% of CPU, and there's nothing much else running. Memory is mostly free too. I have Cacti graphing CPU and Memory, and both are nowhere near capacity.
Don't worry about stating the obvious, please.. I'm sure this is going to turn out to be a D'oh! moment.. it has that feel about it.
OK, so iftop is installed and I'm keeping an eye on things. I'm also running Wireshark on another machine to make sure there are no weird network storms or anything like that happening. I'll post results as I get them. Any other tools I could be running?
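Beyond live tools, access.log already holds enough to name a bandwidth hog after the fact: summing bytes per client over the slow window would point at any single machine saturating the 2 Mbps line. A sketch (the from/to epoch values are placeholders for the window boundaries; `date -d '16:00' +%s` gives today's 4 PM as an epoch timestamp):

```shell
# Total bytes served per client IP between two epoch timestamps, biggest
# first. Fields: $1 timestamp, $3 client address, $5 reply size in bytes
# (Squid's default native access.log format).
awk -v from=1176550800 -v to=1176553500 \
    '$1 >= from && $1 <= to { bytes[$3] += $5 }
     END { for (ip in bytes) print bytes[ip], ip }' \
    /var/log/squid/access.log | sort -rn | head
```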