
LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   SQUID-Reverse Proxy (https://www.linuxquestions.org/questions/linux-newbie-8/squid-reverse-proxy-617941/)

haariseshu 02-01-2008 09:18 AM

SQUID-Reverse Proxy
 
Dear friends, if we are using multiple web servers behind the Squid proxy, how do I make my Squid listen to both servers? If you have any idea about this, please share. Any questions regarding DNS, NTP, or RADIUS products are also welcome.
Thank you,
HARI.

ronny 02-01-2008 03:57 PM

Quote:

Originally Posted by haariseshu (Post 3042466)
Dear friends, if we are using multiple web servers behind the Squid proxy, how do I make my Squid listen to both servers? If you have any idea about this, please share. Any questions regarding DNS, NTP, or RADIUS products are also welcome.
Thank you,
HARI.

The answer to your question depends on what you're actually asking, so I'll answer both possible interpretations.

If you mean you have two different servers hosting two different domains, put both servers in the proxy's /etc/hosts:

192.168.88.1 www.domain1.net
192.168.88.2 www.domain2.net

then set up your squid acls in squid.conf to allow connections to both domains:

acl all src 0.0.0.0/0.0.0.0
acl localserv dst 192.168.88.0/255.255.255.0
# Safe_ports is normally defined in the default squid.conf; add it if it isn't
acl Safe_ports port 80 443
http_access deny !Safe_ports
http_access allow localserv Safe_ports

If you mean to have one domain spread across two *different* hosts, the classic methods involve DNS tricks such as multiple A entries in your DNS server configuration. Unfortunately these will typically fail in a proxy configuration as squid will helpfully cache its DNS lookup results and hit the same server repeatedly. (This might be made to work with local DNS on a forced DNS server, but I'm not placing any bets.)
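For reference, "multiple A entries" would look something like the following in a BIND-style zone file (the name, TTL and addresses here are placeholders, and the caching caveat above still applies):

www.domain1.net.    300    IN    A    192.168.88.1
www.domain1.net.    300    IN    A    192.168.88.2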

For a general treatment of solving this problem, check the Linux HOWTOs about clustering and high availability. The fact that you have a proxy server added to the equation doesn't change matters much.

One way of doing this is via use of ipvsadm with non-persistent connections. I won't go into *great* detail here as my familiarity with ipvsadm is limited; essentially you need to set up a *non-persistent* cluster on the proxy server pointing to the members of your cluster.

I believe the following will work, but you should read up on ipvsadm (and more generally on Linux clustering) so you know what you're doing. In particular you may prefer a different scheduling algorithm than wlc (weighted least-connection).

# create a virtual HTTP service on localhost, scheduled with weighted least-connection
ipvsadm -A -t 127.0.0.1:http -s wlc
# add both real web servers to the virtual service, each with weight 1
ipvsadm -a -t 127.0.0.1:http -r 192.168.88.1 -w 1
ipvsadm -a -t 127.0.0.1:http -r 192.168.88.2 -w 1

This setup requires that you *not* be running any services on 127.0.0.1:http (including on *:http) as such connections will take priority. In particular, squid *itself* must *not* be listening on the IP specified by the "-t" argument. "netstat -an" will tell you what ports are being listened on, or "lsof -i :80" will list programs and ports.

With this setup your DNS resolution (via /etc/hosts or your DNS server) resolves to localhost, or at least to the IP specified in the "-t" argument for ipvsadm. It's also possible to use the ipvsadm heartbeat functionality to give yourself some failover redundancy.
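As a minimal sketch, assuming the hostnames from the first example, the /etc/hosts entry on the proxy box would simply point the backend names at the ipvsadm virtual address:

127.0.0.1    www.domain1.net www.domain2.net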

Your squid acls would look pretty much the same as in the first interpretation, by the way.

If someone knowing ipvsadm better than I do wants to correct that material, please feel free. I've only used ipvsadm myself in a persistent context. In particular it *may* be necessary to have ipvsadm running on a separate box to avoid port conflicts... but that's the sort of application for which virtualisation presents an ideal solution. :-)

...Ronny

haariseshu 02-04-2008 05:04 AM

Thank You
 
Friend,
Requirement: there should be two web servers behind Squid, which is acting as a reverse proxy. My Squid should be able to retrieve content from both servers; in other words, it should make use of both of them.
The servers may have similar or different content.
My configuration file (squid.conf):

http_port 151.2.119.31:80
cache_peer 151.2.119.30 parent 80 0 default
cache_peer 151.2.119.32 parent 80 0 default
acl all src 0.0.0.0/0.0.0.0
acl localserv dst 151.2.119.0/255.255.255.0
acl manager proto cache_object
acl Safe_ports port 80
acl test_clients src 151.2.0.0/255.255.0.0
acl business_hours time M T W H F 9:00-23:00
http_access allow manager localhost
http_access allow test_clients business_hours
http_access allow localserv Safe_ports
http_access deny CONNECT !SSL_ports
httpd_accel_host 151.2.119.30
httpd_accel_port 80
httpd_accel_single_host off

(I'm not listing the entire set of ACLs here, only what I think is important.)

Current status:
Right now my Squid listens only to the .30 server, as given in httpd_accel_host. If the service on .30 is not available, my Squid can't do anything; the only output is an error page.

Even if one server is down, my Squid should be able to get the content from the other server. How can I specify this in my config file?
I'm actually doing this as project work for my company. When I first got into Squid it seemed like an ocean (it still does). Thank you once again for your interest in helping me.

ronny 02-05-2008 05:59 AM

Quote:

Originally Posted by haariseshu (Post 3045410)
Friend,
Requirement: there should be two web servers behind Squid, which is acting as a reverse proxy. My Squid should be able to retrieve content from both servers; in other words, it should make use of both of them.
The servers may have similar or different content.
My configuration file (squid.conf):

http_port 151.2.119.31:80
cache_peer 151.2.119.30 parent 80 0 default
cache_peer 151.2.119.32 parent 80 0 default

Here's part of your problem. cache_peer is for peering with other squid servers - these servers are not running squid, presumably.

Keep in mind that I haven't played with the squid reverse proxy much, so I may be speaking from ignorance here. The peering functionality is there for load sharing, not for high availability - in other words, given a number of defined peers, the system will try to consult one, and if it fails will attempt to retrieve the server content itself. It won't try every single peer - otherwise a system with ten peers would take forever to answer a single request.

It seems to me you're trying to apply a square peg to a round hole. One of the great strengths of Linux, and of Unix type systems in general, is that they tend to give you a set of simpler tools that can be combined in novel ways, rather than a single monolithic tool designed to do everything.

Your goal is to have a reverse proxy for caching of content, but if one of the available servers goes down the system must recognise the outage and start ignoring the problem server pretty much transparently. squid is designed for reverse proxy applications, but it isn't designed for transparent failover. There are other tools which can handle that part (including ipvsadm, although I believe there are others around). By combining the two tools you can achieve your goal; the basic techniques were in the second part of my earlier reply.

Another way to handle this sort of failure is brute force: have a script run every minute which attempts a wget against each server and, if a server fails to respond, reloads squid with a configuration file that excludes the problem system (a rough sketch follows below). I confess I have used this sort of solution myself (my work environment uses Windows servers behind a Linux cluster server, so the heartbeat functionality used by ipvsadm cannot be used). However, if you're using Linux servers, the ipvsadm heartbeat functionality is far more elegant.
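As a rough sketch of that brute-force approach (the server addresses come from your config, but the wget test, the template helper name and the paths are assumptions rather than a tested setup), a script run from cron every minute might look like:

#!/bin/sh
# hypothetical health check: rebuild squid.conf from whichever backends still answer
GOOD=""
for SERVER in 151.2.119.30 151.2.119.32; do
    # -q = quiet, -O /dev/null = discard the page, -T 5 = five-second timeout
    if wget -q -O /dev/null -T 5 "http://$SERVER/"; then
        GOOD="$GOOD $SERVER"
    fi
done
# regenerate squid.conf from a template (hypothetical helper) using only the live servers,
# then tell squid to re-read its configuration
/usr/local/bin/build-squid-conf.sh $GOOD > /etc/squid/squid.conf && squid -k reconfigure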

The Linux High Availability HOWTO is a decade old now, but there is a web site that collects Linux HA resources; http://www.linux-ha.org/ may be worth a look.

If you want to solve the problem solely with squid, I can't help you. It may be possible, but I don't know how.

To summarise the combined solution: set up an ipvsadm cluster (as advised earlier) to point to all of your internal servers. Point the internal address of your reverse proxy at the virtual service IP used by ipvsadm (a single host IP/port). This should solve your problem. Combined with the heartbeat functionality of ipvsadm, it will also handle automated failover for you.
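Pulling the pieces together, a minimal sketch might look like the following (the addresses are taken from your config, but the exact Squid directives depend on your Squid version, so treat this as an outline rather than a tested configuration):

# ipvsadm on the proxy box: one virtual HTTP service in front of both backends
ipvsadm -A -t 127.0.0.1:http -s wlc
ipvsadm -a -t 127.0.0.1:http -r 151.2.119.30 -w 1
ipvsadm -a -t 127.0.0.1:http -r 151.2.119.32 -w 1

# squid.conf: point the accelerator at the single virtual address
# (this replaces the two cache_peer lines and the real-server httpd_accel_host)
http_port 151.2.119.31:80
httpd_accel_host 127.0.0.1
httpd_accel_port 80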

If you're lucky a squid guru will interject and prove me wrong, but squid gurus are thin on the ground.

...Ronny

