Linux - Networking: This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I went through other threads and nothing seems to be like this. I am cracking my head over it.
I have set up some iptables rules on the load balancer (LVS), and the problem is that they are blocking my internal traffic. My internal traffic is on 172.31.13.xx and 10.103.xx.x... 83.x.x.x is the public address.
The load balancer is configured to accept calls from outside the cluster and balance the load across the 3 app servers.
However, from the web server we also have to call a URL on port 8880, and we want that call to go via the load balancer as well; e.g. if we always connected directly to app server 1 and app server 1 were down, there would be a problem. But somehow the call never reaches the app servers, so somewhere on the load balancer the call is blocked.
When I call the URL from outside the cluster, it works fine.
I am thinking it's the iptables rules...
Help!
# Generated by iptables-save v1.3.5 on Thu Dec 16 10:40:00 2010
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [104956494:7410857183]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -d 172.30.232.135 -p tcp -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -d 172.30.232.136 -p tcp -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 83.96.144.9 -p tcp -m tcp --dport 3306 -j ACCEPT
-A RH-Firewall-1-INPUT -s 10.103.0.0/255.255.0.0 -j ACCEPT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p esp -j ACCEPT
-A RH-Firewall-1-INPUT -p ah -j ACCEPT
-A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2675 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2135 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2136 -j ACCEPT
-A RH-Firewall-1-INPUT -s 10.103.0.0/255.255.0.0 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
# Allow port 80 only when accessed from the cluster
# -A RH-Firewall-1-INPUT -s 10.103.0.0/255.255.0.0 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
# -A RH-Firewall-1-INPUT -s 10.103.0.0/255.255.0.0 -p tcp -m state --state NEW -m tcp --dport 8880 -j ACCEPT
# Only allow port 80 from outside the cluster to the web server
-A RH-Firewall-1-INPUT -d 195.88.18.13 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -d 195.88.18.12 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -d 195.88.18.11 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -d 195.88.18.8 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
# -A RH-Firewall-1-INPUT -d 195.88.18.8 -p tcp -m state --state NEW -m tcp --dport 8880 -j ACCEPT
# MySQL access from outside the cluster to the reporting server
-A RH-Firewall-1-INPUT -d 195.88.18.13 -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT
# -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -d 10.103.4.40 -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu Dec 16 10:40:00 2010
# Generated by iptables-save v1.3.5 on Thu Dec 16 10:40:00 2010
*nat
:PREROUTING ACCEPT [58628481:16860954476]
:POSTROUTING ACCEPT [133127:10110394]
:OUTPUT ACCEPT [34649975:2081168594]
-A PREROUTING -d 195.88.18.200 -i bond0.18 -p tcp -j DNAT --to-destination 10.103.4.40
-A POSTROUTING -s 10.103.0.0/255.255.0.0 -j MASQUERADE
COMMIT
# Completed on Thu Dec 16 10:40:00 2010
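A quick way to check whether iptables really is the blocker (a sketch; the chain name comes from the dump above, where the final rule is the REJECT): zero the counters, reproduce the failing call from the web server, and see which rule's packet count grows.

[root@lb-01 ~]# iptables -Z RH-Firewall-1-INPUT        # zero the per-rule packet/byte counters
[root@lb-01 ~]# iptables -L RH-Firewall-1-INPUT -nv --line-numbers
... reproduce the call to port 8880 from the web server, then re-check ...
[root@lb-01 ~]# iptables -L RH-Firewall-1-INPUT -nv --line-numbers

If the pkts column on the trailing REJECT rule increased, iptables is dropping the 8880 traffic, and the line numbers show exactly which ACCEPT rules the packets failed to match.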
First, can you post the output of these commands from the load balancer?
ip link show
ip addr show
ip route show
iptables -L -nv
Then, you say the traffic is not being forwarded by the load balancer. Can you start tcpdump on the interface that's receiving the requests and show us the output when some traffic is sent to the load balancer? Another tcpdump session on the interface that's pointing to the real servers at the same time and showing us the output when the requests are made would be nice.
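For example (a sketch: bond0.18 is taken from the NAT rule in the dump above, and the inside interface name eth1 is an assumption, substitute your own):

[root@lb-01 ~]# tcpdump -ni bond0.18 tcp port 8880     # interface receiving the requests
[root@lb-01 ~]# tcpdump -ni eth1 tcp port 8880         # interface toward the real servers (name assumed)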
Oh... I think the value in net.ipv4.ip_forward doesn't matter here; the traffic is forwarded at the application layer, not at the IP layer. By the way, don't the requests that land at the LVS process get logged somewhere?
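Either way, both are quick to check on the load balancer (a sketch):

[root@lb-01 ~]# sysctl net.ipv4.ip_forward             # 1 = IP forwarding enabled
[root@lb-01 ~]# ipvsadm -L -n --stats                  # per-virtual-service traffic counters

If the 8880 virtual service shows no incoming packets while you reproduce the call, the traffic is being dropped before IPVS ever sees it.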
Currently I have configured in ipvsadm that the external IP (VIP) 195.88.18.8, port 8880, will be load balanced to the app servers, and I configured the same in keepalived.conf. Instead of 195.89.18.8 I also tried 10.104.0.1, which is the internal IP of the load balancer. The reason is that from the web server (10.103.6.1) we want to reach the app servers on port 8880; there is no need for external clients to reach the app servers.
One thought I had was that in keepalived you also need to specify the virtual IPs. I tried that for 10.103.0.1, but no luck. So maybe you have some thoughts on that...
Note: 195.89.x.x is the external IP; 10.104 is the internal IP of the load balancer.
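For reference, a minimal sketch of the kind of virtual_server block being described here (the scheduler, forwarding method, weights, and health-check values are assumptions, not the actual config; the real-server IPs are the app servers given later in the thread):

virtual_server 195.88.18.8 8880 {
    delay_loop 6
    lb_algo rr                  # scheduler: assumed round-robin
    lb_kind NAT                 # forwarding method: assumed
    protocol TCP
    real_server 10.103.1.1 8880 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.103.1.2 8880 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}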
Could it be because of keepalived? Since virtual IPs are being inserted, there is a possibility that somewhere along the line there is an error...
When you are running your tests, what is the originating IP address and the receiving IP address on the load-balancing server? (You have a slightly complicated setup :-))
When I run the tests, I call the URL on port 8880 from the web server, and I want that call to go via the load balancer.
E.g. if we always connected directly to app server 1 and app server 1 were down, there would be a problem. But somehow the call never reaches the app servers, so somewhere on the load balancer the call is blocked.
The IPs of the app servers: 10.103.1.1-10.103.1.4
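A sketch of the failing test from the web server, using the VIPs mentioned above (the URL path is a placeholder, substitute the real one):

[root@web ~]# curl -v http://10.104.0.1:8880/          # via the internal VIP (path assumed)
[root@web ~]# curl -v http://195.88.18.8:8880/         # via the external VIP (path assumed)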
[root@lb-01 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:199           0.0.0.0:*               LISTEN      19821/snmpd
tcp        0      0 10.103.0.1:53           0.0.0.0:*               LISTEN      30909/dnsmasq
tcp        0      0 10.103.0.1:22           0.0.0.0:*               LISTEN      26948/sshd
tcp        0      0 :::1311                 :::*                    LISTEN      31795/dsm_om_connsv
[root@lb-01 ~]#
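Note that an IPVS virtual service will never show up in netstat: IPVS intercepts connections inside the kernel, so no userspace process listens on the VIP port. To see the configured services and their connections, use ipvsadm instead (a sketch):

[root@lb-01 ~]# ipvsadm -L -n          # list virtual services and their real servers
[root@lb-01 ~]# ipvsadm -L -n -c       # list current connection entries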