Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Distribution: Debian and Fedora Core in equal measure
Posts: 264
Rep:
High Availability Squid Cache Servers?
I've got a (mostly Microsoft) network with access via a pair of Cisco ASA firewalls to the Internet.
I want to deploy two Squid proxies in some kind of HA arrangement, so that all users are configured to use the proxies (and can't browse directly through the firewalls), each uses one proxy or the other, and when either proxy fails, they fail over to the survivor.
I've looked at Ultramonkey, LVS, Heartbeat and a bunch of other stuff, but I don't think I've come up with a decent solution. Has anyone done this before, and has anyone got a good solution, with howtos, configs, etc.? (Yeah, I know I'm being lazy!)
The best option, I think, which from the Squid side is zero config, is to use a decent proxy.pac file and do something like a hash of the URL to pick the order of servers to use. This makes for very even 50/50 load balancing (or 33/33/33, etc.) with guaranteed cache hits when the servers are all running. There are good examples around, but I can't remember the name of the best one; something like the "super mega proxy script" or something pretty similar... actually, just checking now, the site appears to have died. There's a copy of the good stuff here, though: http://www.novell.com/coolsolutions/appnote/12952.html. With this approach you have NO HA config at all; there's just no need if you let each browser do the work for you. Distributed computing!
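A minimal sketch of the hash-based idea (not the script mentioned above, just the same technique), assuming two Squid boxes called squid1 and squid2 listening on port 3128; those hostnames and the trivial hash are my own placeholders:

```javascript
// Simple additive hash over the characters of the hostname; the only
// requirement is that it is deterministic, so a given site always maps
// to the same primary cache.
function hashString(str) {
    var h = 0;
    for (var i = 0; i < str.length; i++) {
        h = (h + str.charCodeAt(i)) % 2;
    }
    return h;
}

function FindProxyForURL(url, host) {
    // Each entry is a failover list: the browser tries the first proxy
    // and falls back to the second only if the first stops responding.
    var proxies = ["PROXY squid1:3128; PROXY squid2:3128",
                   "PROXY squid2:3128; PROXY squid1:3128"];
    // Hashing on the host keeps all requests for one site going to the
    // same cache first, so cache hits stay warm while both boxes are up.
    return proxies[hashString(host)];
}
```

With three caches you would hash modulo 3 and rotate the list accordingly; the load split stays even as long as the hash distributes hostnames evenly.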
To finish off the circle, I'd suggest putting the final script on an httpd instance on each Squid box, and setting the browsers (e.g. via some AD gubbins with Group Policy, or WPAD if you can get it working) to obtain the proxy.pac from the server. Having a simple, centralized way to control how browsers use proxies is really useful in many other ways too, especially if you have internal web servers doing odd things.
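On the serving side, the main gotcha is the MIME type; a minimal Apache sketch (paths and the `/var/www/proxy` location are hypothetical):

```apache
# Serve the PAC file with the MIME type browsers expect
AddType application/x-ns-proxy-autoconfig .pac

# For WPAD clients the same script is conventionally fetched as /wpad.dat
Alias /wpad.dat /var/www/proxy/proxy.pac
AddType application/x-ns-proxy-autoconfig .dat
```

WPAD clients look for a host literally named "wpad" in their DNS search domain, so you would also want a DNS record pointing at whichever box serves the file.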
Last edited by acid_kewpie; 09-14-2008 at 07:21 AM.
Distribution: Debian and Fedora Core in equal measure
Posts: 264
Original Poster
Rep:
Hi Kewpie!
Thanks for the suggestion. I have been playing with proxy.pac files with some success, but there's one issue that drove me towards trying HA: the idea of an HA-delivered virtual IP address, so that rather than targeting a real IP address (which could fail), the client goes to a VIP, which will work as long as there is at least one working proxy.
The reason for this is that once a Windows client starts using a proxy under control of proxy.pac, it doesn't release from that proxy if the proxy "goes away"; you have to reload the browser, as a minimum, to get browsing to work with the alternative. I have some seriously non-computer-literate users for whom this may be too taxing, resulting in reams of "my browser's bust!" calls to the Help Desk...
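For what it's worth, the VIP idea can be done without the heavier LVS/Ultramonkey stack: keepalived speaks VRRP and will float an address between the two boxes. A minimal sketch for the primary Squid box, assuming hypothetical addresses with 192.168.1.10 as the VIP:

```
# /etc/keepalived/keepalived.conf on the primary Squid box.
# The backup box uses the same config with "state BACKUP" and a
# lower priority (e.g. 100); the VIP moves when the master dies.
vrrp_instance SQUID_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.10
    }
}
```

Clients then point at 192.168.1.10:3128 (directly or via the PAC file), and failover happens at the IP layer without the browser noticing. Note this gives you failover only, not load balancing, unless you run two VIPs cross-mastered.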
I thought that if you specified multiple proxies, the browser would try the later ones if it stopped hearing from the first. Is that not what you're experiencing? Browser-specific implementation, of course...
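That fallback is expressed in the PAC return string itself; a minimal sketch, again with hypothetical squid1/squid2 hostnames:

```javascript
// The semicolon-separated list is a failover chain: the browser should
// try squid1 first and move on to squid2 if squid1 is unreachable.
// How quickly (and whether) a browser retries the first proxy later is
// implementation-specific.
function FindProxyForURL(url, host) {
    return "PROXY squid1:3128; PROXY squid2:3128";
}
```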