Linux - Networking
This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.
I am trying to make a new cluster using two servers and just would like to make sure I'm not missing any steps.
I got the servers,
Installed CentOS 5,
Yum-installed the cluster package,
Installed luci, set it up, and set up the cluster,
Installed Piranha and set it up too.
That's it for the clustering side.
I then set up my iptables to act as a firewall, since I have 3 servers under me.
The question is: would this be enough to produce a running server?
Someone said I do not need to install keepalived with this configuration. True?
Do I need to get ipvsadm running too?
I am not sure exactly what you are trying to do. You say you are making a cluster, but I think you are making a pair of load balancers to sit in front of a farm of servers.
If you are trying to create a pair of firewall/load balancers, I have done this multiple times with heartbeat, heartbeat-ldirectord, and iptables.
Note that heartbeat-ldirectord uses ipvsadm and will help you configure multiple internal or external IPs to run your cluster.
With a little more information I might be able to help you.
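To give you an idea of what ldirectord is doing under the hood, it programs the kernel's IP Virtual Server table via ipvsadm. The equivalent manual commands look roughly like this (the addresses are hypothetical, and ldirectord issues these calls for you, so this is only an illustration):

```shell
# Sketch only: needs root and the ip_vs kernel module loaded.
ipvsadm -A -t 203.0.113.10:80 -s rr                    # add a virtual TCP service, round-robin scheduler
ipvsadm -a -t 203.0.113.10:80 -r 172.16.16.21:80 -m    # add real server 1, masquerading (NAT) mode
ipvsadm -a -t 203.0.113.10:80 -r 172.16.16.22:80 -m    # add real server 2, masquerading (NAT) mode
ipvsadm -L -n                                          # list the current virtual server table
```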
That's right. There is a pair of load balancers which I would like to have HA and round-robin LB, and they are in front of 4 application servers.
The main question would be: is the above sufficient, or do I need to install more?
I install heartbeat and ldirectord; nothing else is needed.
I set them up like so:
External IPs (internet accessible):
load1 physical IP = XXX.XXX.XXX.XXX (This is a static IP from your ISP/colo assigned to load1 that never changes (this is how you ssh into load1))
load2 physical IP = XXX.XXX.XXX.XXX (Same as the load1 external ip)
virtual service IP = XXX.XXX.XXX.XXX (This is a 3rd ip that will be moved back and forth between load1 and load2 if anything goes wrong)
Internal IPs:
load1 physical IP = 172.16.16.11 (This is a static IP assigned to load1 that never changes (this is how you ssh into load1 from inside the network))
load2 physical IP = 172.16.16.12 (Same as the load1 internal ip)
gateway IP = 172.16.16.1 (This is the IP that your farm has to use as its gateway so all their answers go back to the LBs; it will shift between load1 and load2 at the same time as the external IP)
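As a rough sketch of how heartbeat ties those addresses together (this is heartbeat v1-style configuration as shipped in the CentOS 5 era; the hostnames, interfaces, and the 203.0.113.10 external IP are all hypothetical placeholders):

```
# /etc/ha.d/ha.cf (same on both nodes) -- minimal sketch
keepalive 2          # heartbeat interval in seconds
deadtime 10          # declare the peer dead after 10 s of silence
bcast eth1           # send heartbeats over the internal interface
node load1 load2
auto_failback on

# /etc/ha.d/haresources (identical on both nodes)
# load1 preferentially holds the external virtual service IP, the internal
# gateway IP, and the ldirectord service; heartbeat moves all three to
# load2 together if load1 fails.
load1 IPaddr2::203.0.113.10/24/eth0 IPaddr2::172.16.16.1/24/eth1 ldirectord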
Ldirectord:
Set this up in masq mode to pass all traffic through your LBs, so that the traffic from the farm is always on a private network and the LBs are your firewall and the only servers that have internet IPs directly.
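A minimal ldirectord.cf for that masq setup might look like this (again, the virtual IP, real-server IPs, and the health-check request/response are assumptions to adapt to your farm):

```
# /etc/ha.d/ldirectord.cf -- minimal sketch, hypothetical addresses
checktimeout=5
checkinterval=10
autoreload=yes
quiescent=yes

# HTTP virtual service on the floating external IP, round-robin, masquerading
virtual=203.0.113.10:80
        real=172.16.16.21:80 masq
        real=172.16.16.22:80 masq
        scheduler=rr
        protocol=tcp
        service=http
        checktype=negotiate
        request="index.html"
        receive="OK"
```

The indented real= lines belong to the virtual= service above them; ldirectord pulls a real server out of the ipvsadm table automatically when its health check fails.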
That is it. You have a farm. As for keeping the data on the farm's HDs in sync, how to deal with sessions, etc., that is a different problem.
BTW: I like to set up 3 networks using 4 switches:
2 external switches (red network) with bonded NICs in mode 6 from each LB, one to each switch; these can then be connected to your ISP/co-lo with HSRP or similar, so if a switch dies the farm keeps going. (Make sure the switches support spanning tree; I like the Linksys SRW224G4 if you don't need more than 10/100.)
Then 2 gigabit switches for the yellow and green networks. For these I use a VLAN so ports 1-12 are yellow and 13-24 are green. Since you connect the 2 VLANs together with 2 network cables to prevent a single point of failure, you can attach up to 10 bonded-NIC servers, and if either switch fails your network will keep running without missing a beat. I like the Netgear GS724T for these.
Now you have a very high availability network to go with your high availability cluster.
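The mode 6 (balance-alb) bonding mentioned above can be sketched with CentOS 5-style ifcfg files like these (device names and the IP are placeholders; note that BONDING_OPTS in ifcfg files needs RHEL/CentOS 5.3 or later — on older releases the options go in /etc/modprobe.conf instead):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- hypothetical sketch
DEVICE=bond0
IPADDR=172.16.16.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-alb miimon=100"   # mode 6, check link every 100 ms

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```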
Note: here are the 3 networks and what they are for.
Red: This is all interfaces between your gear and the internet (load balancers, KVM over IP devices, remote PDU devices etc)
Yellow: All network communication between your servers and the load balancers (this is where the LB's internal IP attaches)
Green: All traffic that cannot access the internet (like the traffic between your farm and a DB server, NFS mounts, NIS+ traffic, internal DNS, internal NTP, etc.). Because this traffic cannot be reached from red or yellow (they are physically separate), it is more secure. You should never fully trust any traffic and should always secure all network interfaces, but this is much safer.
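Since the LBs double as the firewall while ldirectord handles the balanced service in masq mode, the forwarding policy on the active LB might be sketched like this (interface names are assumptions: eth0 external, eth1 internal; the 172.16.16.0/24 range follows the addressing above):

```shell
# Sketch only: default-deny forwarding plus NAT for the farm's outbound traffic.
iptables -P FORWARD DROP
iptables -A FORWARD -i eth1 -o eth0 -s 172.16.16.0/24 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -s 172.16.16.0/24 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward     # enable routing between the interfaces
```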
I am currently using a combination of Keepalived and LVS.
I just installed Piranha (which generates lvs.cf). I installed iptables too, and plan to install IPVS.
Question: Can I start/install all these while still running keepalived/LVS, or do I need to shut down keepalived first before configuring the Piranha cluster?