Old 05-09-2008, 02:30 PM   #1
Kiwisnow
LQ Newbie
 
Registered: May 2008
Posts: 4

Rep: Reputation: 0
Trouble with load balancer (piranha)


I started working with Linux 4 days ago, so please pardon any newb-ness on my part.

I'm trying to set up a load balancer, but I'm having trouble getting it to actually balance the load. I'm testing everything in VMs at the moment.

IP addresses:
Director: 10.10.10.204
Backup: 10.10.10.215
Real Server 1: 10.10.10.247
Real Server 2: 10.10.10.248
VIP I want to use: 10.10.10.193

My operating system is CentOS 5.1, kernel 2.6.18-53.1.13.el5

I have apache set up on the two real servers for testing purposes. Yesterday, I was able to get it working - typing 10.10.10.193 into a browser took me to Real Server 1 (10.10.10.247).

Today, I tried adding the second real server. Now, it seems as though 10.10.10.193 will ONLY take me to 10.10.10.248. I have Piranha set to use round robin, but it isn't alternating between the servers.
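
(One thing that can mask round robin: browsers reuse keep-alive connections and may cache pages, so repeated reloads can legitimately land on the same real server even when scheduling works. A quick sketch of a cache-free test from another host, assuming curl is installed and each real server's test page identifies itself:

Code:
# Hit the VIP with a fresh connection each time; with rr the
# responses should alternate between the two real servers.
for i in 1 2 3 4; do curl -s http://10.10.10.193/; done
)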

I've been watching tail -f /var/log/messages, and I noticed that when I start pulse, I never get the message "gratuitous lvs arps finished".

Instead, I get this:

Quote:
May 8 06:45:38 localhost pulse[12248]: STARTING PULSE AS MASTER
May 8 06:45:40 localhost pulse[12248]: backup inactive: activating lvs
May 8 06:45:40 localhost lvs[12250]: starting virtual service VIP active: 80
May 8 06:45:40 localhost nanny[12255]: starting LVS client monitor for 10.10.10.193:80
May 8 06:45:40 localhost lvs[12250]: create_monitor for VIP/Two_One running as pid 12255
May 8 06:45:40 localhost nanny[12256]: starting LVS client monitor for 10.10.10.193:80
May 8 06:45:40 localhost lvs[12250]: create_monitor for VIP/Two_Two running as pid 12256
May 8 06:45:40 localhost nanny[12255]: making 10.10.10.247:80 available
May 8 06:45:40 localhost nanny[12256]: making 10.10.10.248:80 available
May 8 06:45:40 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.193 on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Withdrawing address record for 10.10.10.193 on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.193 on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Withdrawing address record for fe80::20c:29ff:fe16:e71f on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Withdrawing address record for 10.10.10.204 on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Host name conflict, retrying with <localhost-251>
May 8 06:45:40 localhost avahi-daemon[2133]: Registering new address record for fe80::20c:29ff:fe16:e71f on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.193 on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.204 on eth0.
May 8 06:45:40 localhost avahi-daemon[2133]: Registering HINFO record with values 'I686'/'LINUX'.
May 8 06:45:41 localhost avahi-daemon[2133]: Withdrawing address record for fe80::20c:29ff:fe16:e71f on eth0.
May 8 06:45:41 localhost avahi-daemon[2133]: Withdrawing address record for 10.10.10.204 on eth0.
May 8 06:45:41 localhost avahi-daemon[2133]: Host name conflict, retrying with <localhost-252>
May 8 06:45:41 localhost avahi-daemon[2133]: Registering new address record for fe80::20c:29ff:fe16:e71f on eth0.
May 8 06:45:41 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.193 on eth0.
May 8 06:45:41 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.204 on eth0.
May 8 06:45:41 localhost avahi-daemon[2133]: Registering HINFO record with values 'I686'/'LINUX'.
May 8 06:45:42 localhost avahi-daemon[2133]: Withdrawing address record for fe80::20c:29ff:fe16:e71f on eth0.
May 8 06:45:42 localhost avahi-daemon[2133]: Withdrawing address record for 10.10.10.204 on eth0.
May 8 06:45:42 localhost avahi-daemon[2133]: Host name conflict, retrying with <localhost-253>
May 8 06:45:42 localhost avahi-daemon[2133]: Registering new address record for fe80::20c:29ff:fe16:e71f on eth0.
May 8 06:45:42 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.193 on eth0.
May 8 06:45:42 localhost avahi-daemon[2133]: Registering new address record for 10.10.10.204 on eth0.
May 8 06:45:42 localhost avahi-daemon[2133]: Registering HINFO record with values 'I686'/'LINUX'.
And this just keeps going and going. I tried searching online, but it doesn't seem as though anybody else has had the same problem.
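
(A side note on that log: the loop is avahi-daemon repeatedly losing a hostname conflict and re-registering its address records, which at minimum buries the pulse/nanny messages. Whether or not it is related to the balancing problem, one way to rule it out while testing, assuming the stock CentOS 5 init scripts, is to stop mDNS advertising on the director:

Code:
# Stop avahi now and keep it from starting at boot
service avahi-daemon stop
chkconfig avahi-daemon off
)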

Here's my lvs.cf file:
Quote:
serial_no = 51
primary = 10.10.10.204
service = lvs
backup_active = 1
backup = 10.10.10.215
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 0
virtual VIP {
    active = 1
    address = 10.10.10.193 eth0:1
    vip_nmask = 255.255.255.0
    port = 80
    send = "GET / HTTP/1.0\r\n\r\n"
    expect = "HTTP"
    use_regex = 0
    load_monitor = none
    scheduler = rr
    protocol = tcp
    timeout = 6
    reentry = 15
    quiesce_server = 0
    server Two_One {
        address = 10.10.10.247
        active = 1
        weight = 1
    }
    server Two_Two {
        address = 10.10.10.248
        active = 1
        weight = 1
    }
}
I followed the instructions from the centos.org site for setting up an LVS, but I also tinkered a bit with everything when it didn't work for me.
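
(For reference, the config above uses network = direct, i.e. LVS direct routing, and in that mode each real server has to accept traffic addressed to the VIP without answering ARP for it; otherwise ARP replies for 10.10.10.193 can come from the wrong box. A minimal sketch of the usual real-server setup, assuming lo is free for the alias:

Code:
# On each real server: hold the VIP on loopback so the box accepts
# packets for 10.10.10.193, but suppress ARP replies for it so that
# only the director answers ARP for the VIP.
ip addr add 10.10.10.193/32 dev lo label lo:0
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
)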

Please help! This problem maketh me very sad.
 
Old 02-23-2010, 11:10 AM   #2
cyrus.kapadia
LQ Newbie
 
Registered: Feb 2010
Posts: 1

Rep: Reputation: 0
Hello,

I am having the exact same problem. Did you ever fix it?
 
Old 02-23-2010, 03:15 PM   #3
Rush_898
Member
 
Registered: Mar 2004
Distribution: debian...
Posts: 31

Rep: Reputation: 16
Kiwisnow:

You say it will only take you to the .248 address now; what happens when you take .248 out of the pool? Does it finally go to .247?

A few other things: I have been running several LVS load balancers in a production environment for years, and I have not had good luck with piranha. Last I knew it was a dead package, but I have no idea if that is still true. My advice, though I generally don't like 'use this instead' posts, is to try Keepalived or Heartbeat/ldirectord; I would personally use Keepalived if all you are doing is LVS.

Also, look into ipvsadm, the userland configuration utility for LVS. You can configure everything from the CLI and see whether piranha is the problem or your setup just isn't working. ipvsadm will also show you where connections are being distributed, what the real server weights are, and so on.
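
For example, a rough sketch of what that looks like, mirroring the lvs.cf posted above (run on the director; the -g flag selects direct routing to match network = direct):

Code:
# Inspect the current virtual service table and per-server counters
ipvsadm -L -n
ipvsadm -L -n --stats

# Or build the same service by hand, bypassing piranha entirely
ipvsadm -A -t 10.10.10.193:80 -s rr
ipvsadm -a -t 10.10.10.193:80 -r 10.10.10.247:80 -g -w 1
ipvsadm -a -t 10.10.10.193:80 -r 10.10.10.248:80 -g -w 1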
 
  

