Linux - Networking (LinuxQuestions.org): This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
01-03-2005, 01:54 PM | #1
Member | Registered: Jul 2004 | Distribution: Debian | Posts: 34
squid speed problem
Hi,
I have Mandrake Linux 10.1 and SQUID 2.5.STABLE6.
It works fine except that cache hits are a little slow (about 400 KB/s). For example if I download a file from local web page it goes with 8 MB/s. If I download the same file second time it goes only with 400 KB/s. The same happens if I download something from the Internet.
Any help would be appreciated.
Here is my squid.conf file:
Code:
http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_dir diskd /squid 8192 16 256
cache_mem 256 MB
maximum_object_size 1 GB
maximum_object_size_in_memory 64 KB
cache_replacement_policy gdsf
cache_store_log none
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
half_closed_clients off
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
acl mynetwork src 192.168.69.0/255.255.255.0
http_access allow mynetwork
http_access allow localhost
http_reply_access allow all
icp_access allow all
visible_hostname myfirewall@mydomain.com
httpd_accel_host virtual
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
append_domain .no-ip.com
err_html_text admin@mydomain.com
deny_info ERR_CUSTOM_ACCESS_DENIED all
memory_pools off
coredump_dir /var/spool/squid
ie_refresh on
01-03-2005, 02:34 PM | #2
LQ Guru | Registered: May 2003 | Location: INDIA | Distribution: Ubuntu, Solaris, CentOS | Posts: 5,522
Hi there,
Squid does not serve its purpose for small organizations; I have tried that. So now I am using IP forwarding, which is working better than Squid for me.
Regards
01-03-2005, 09:35 PM | #3
Member | Registered: Jan 2003 | Posts: 217
Change the following lines to these values and let me know how it works:
Code:
cache_mem 128 MB
maximum_object_size 20 MB
maximum_object_size_in_memory 256 KB
01-04-2005, 11:40 AM | #4
Member | Registered: Jul 2004 | Distribution: Debian | Posts: 34 | Original Poster
I tried with cache_mem 128 and a smaller object size. It doesn't make any difference, except that objects smaller than 20 MB are transferred at 400 KB/s and bigger objects are transferred at Internet speed. Changing maximum_object_size_in_memory to 256 doesn't seem to make any difference either. I think there is a limitation concerning cache hits, but I don't know exactly what it is.
Thanks anyway.
01-04-2005, 11:51 AM | #5
LQ Guru | Registered: May 2003 | Location: INDIA | Distribution: Ubuntu, Solaris, CentOS | Posts: 5,522
Quote: Originally posted by zsolt_tuser
I tried with cache_mem 128 and a smaller object size. It doesn't make any difference, except that objects smaller than 20 MB are transferred at 400 KB/s and bigger objects are transferred at Internet speed. Changing maximum_object_size_in_memory to 256 doesn't seem to make any difference either. I think there is a limitation concerning cache hits, but I don't know exactly what it is.
Thanks anyway.
What about IP forwarding?
01-04-2005, 12:07 PM | #6
Member | Registered: Jul 2004 | Distribution: Debian | Posts: 34 | Original Poster
Sorry masand, but I don't see your point. IP forwarding is one thing and proxy caching is another. Could you be more explicit, please?
01-04-2005, 12:51 PM | #7
LQ Guru | Registered: May 2003 | Location: INDIA | Distribution: Ubuntu, Solaris, CentOS | Posts: 5,522
What I was saying is that proxy caching is more useful for larger organizations. I tried a proxy on my network, where I have 4 computers, and I was getting more misses than hits, so I went for IP forwarding to share my Internet connection, which is working fine now. If your sole purpose is Internet sharing, then I think you can also try IP forwarding.
Regards
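For reference, the kind of IP-forwarding setup described here usually amounts to enabling forwarding in the kernel and adding one NAT rule. A minimal sketch with the iptables tools of that era; the interface name eth0 and the 192.168.69.0/24 LAN subnet (taken from the posted squid.conf) are assumptions to adjust for your own setup:

```shell
# Enable packet forwarding in the kernel (not persistent across reboots;
# set net.ipv4.ip_forward = 1 in /etc/sysctl.conf to make it stick)
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade LAN traffic as it leaves the Internet-facing interface
# (eth0 and the subnet are assumptions, not from the thread's config)
iptables -t nat -A POSTROUTING -s 192.168.69.0/24 -o eth0 -j MASQUERADE
```

The clients then just point their default gateway at the Linux box; no per-application proxy settings are needed, which is why this is simpler than Squid when caching isn't the goal.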
01-04-2005, 01:51 PM | #8
Member | Registered: Jan 2003 | Posts: 217
Quote: Originally posted by zsolt_tuser
I tried with cache_mem 128 and a smaller object size. It doesn't make any difference, except that objects smaller than 20 MB are transferred at 400 KB/s and bigger objects are transferred at Internet speed. Changing maximum_object_size_in_memory to 256 doesn't seem to make any difference either. I think there is a limitation concerning cache hits, but I don't know exactly what it is.
Thanks anyway.
I'm assuming that since you originally set your maximum_object_size to 1 GB, your goal is to cache really big files (ISOs, perhaps?). I'm thinking that your cache_dir of 8 GB is pretty small in comparison. If you have the memory and disk space for it, try cranking up your disk cache.
BTW: what are your system specs?
01-04-2005, 03:14 PM | #9
Member | Registered: Jul 2004 | Distribution: Debian | Posts: 34 | Original Poster
Yes, you're right! I'm trying to cache large files (ISOs too), and here comes my problem: when the files are cached, further hits are very slow compared to network speed. Disk space is not a problem; I can create a larger partition for Squid if I need to. But I set cache_replacement_policy gdsf, so bigger objects are discarded first. I don't need to cache large files for many days; a few hours or a day would be OK.
My specs are:
AMD Athlon 1300 MHz
768 MB RAM
HDD 120 GB
Mandrake Linux 10.1 (Community).
Thanks for your help.
01-04-2005, 06:59 PM | #10
Member | Registered: Jan 2003 | Posts: 217
If your priority is to cache huge files over smaller, hot objects, then lower your cache_mem and max_object_size_in_memory (defaults are fine). The idea here is to dedicate more of your RAM to the cache.
768MB may seem like a lot, but in actuality, squid can eat that up in no time (you also need to figure-in the amount of RAM Mandrake is using, and any other services on that box). If you aren't already doing so, I suggest you run your Mandrake squid box in runlevel 3 (without X). This should free-up a little more RAM to feed to the cache.
Keep in mind the "general" squid RAM requirements rule:
cache_mem + {10 * (GB of cache_dir)} + 20MB
So with your current config, here's how much RAM you need just to run squid:
256 + 80 + 20 = 356MB
This is just a rough guide, as I've seen the squid process use more than the "general" requirements. Adding a larger disk cache will consume even more RAM.
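The rule of thumb above can be double-checked with a throwaway shell calculation; the values are the ones from the squid.conf posted earlier in this thread:

```shell
# Rough squid RAM estimate: cache_mem + 10 MB per GB of cache_dir + 20 MB overhead
cache_mem=256        # MB, the cache_mem value from the posted squid.conf
cache_dir_gb=8       # the posted cache_dir is 8192 MB = 8 GB
estimate=$((cache_mem + 10 * cache_dir_gb + 20))
echo "${estimate} MB"    # prints "356 MB"
```

Re-run it with your planned cache_dir size before enlarging the disk cache, so you know the box has the headroom.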
I've read in the squid mailing lists (a great resource for this kind of problem, BTW) that the gdsf replacement policy can sometimes cause this problem. You could try re-compiling squid back to the defaults and see if there is an improvement. Use the default config file, but make only the modifications to maximum_object_size and cache_dir needed to accommodate caching of your ISOs and other huge files.
Squid is really a trial-and-error balancing act between RAM and disk. You just need to spend some time fine-tuning it.
Oh yeah, and don't forget to monitor your Mandrake box to make sure it isn't running out of RAM and dipping aggressively into swap... that will surely slow down your squid box. If you can afford it, or if you have a spare hard drive, you can get a nice performance increase from squid by putting the disk cache on a separate drive.
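Moving the cache to a dedicated drive is just a matter of mounting the spare disk and pointing cache_dir at it. A sketch, where the device name /dev/hdb1 and the /cache mount point are assumptions:

```shell
# Mount a dedicated disk for the squid cache
# (/dev/hdb1 and /cache are assumptions; use your actual spare drive)
mkdir -p /cache
mount /dev/hdb1 /cache
chown squid:squid /cache

# Then in squid.conf, point the cache at the new location, e.g.:
#   cache_dir diskd /cache 8192 16 256
# and re-create the cache swap directories before restarting squid:
squid -z
```

Keeping the cache off the OS/swap disk avoids the two competing for seeks, which is where the performance gain comes from.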
Hope this helps.
Last edited by jymbo; 01-04-2005 at 07:03 PM.
05-09-2005, 09:15 PM | #11
Member | Registered: Jul 2004 | Distribution: Debian | Posts: 34 | Original Poster
After almost half a year, I have managed to solve the problem. It turned out to be the ReiserFS partition: I converted it to ext3 and now everything works fine. I wanted to share this in case someone else runs into the same problem.
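There is no in-place conversion between ReiserFS and ext3, so a switch like this comes down to reformatting the cache partition and letting squid rebuild it. A sketch, with the device name /dev/hdb1 and the SysV init script path as assumptions:

```shell
# Stop squid so the cache partition is quiescent (init script path may differ)
/etc/init.d/squid stop

# Reformat the cache partition as ext3 (/dev/hdb1 is an assumption;
# this destroys the old cache contents, which squid can simply re-fetch)
umount /squid
mkfs.ext3 /dev/hdb1
mount /dev/hdb1 /squid
chown -R squid:squid /squid

# Re-create the cache swap directories, then bring squid back up
squid -z
/etc/init.d/squid start
```

Losing the cached objects is harmless here; squid repopulates the cache as clients browse. Remember to update the filesystem type in /etc/fstab as well.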
05-10-2005, 05:00 PM | #12
Member | Registered: Jan 2003 | Posts: 217
I've read about ext3 being the filesystem of choice for the squid cache, but personally I've had better performance from ReiserFS when I add the notail,noatime mount options in fstab for the cache partition.
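For anyone trying the ReiserFS route, the fstab entry would look something like this; the device name and mount point are assumptions:

```shell
# /etc/fstab entry for a ReiserFS squid cache partition (device is an assumption)
# notail  - don't pack small-file tails into the tree (trades space for speed)
# noatime - skip access-time updates on every cache read
/dev/hdb1  /squid  reiserfs  notail,noatime  0 2
```

The noatime option helps any cache filesystem, ext3 included, since squid reads hit objects constantly and never needs the access times.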
In any case, glad to see you got it working the way you like.