Linux - Containers
This forum is for the discussion of all topics relating to Linux containers. Docker, LXC, LXD, runC, containerd, CoreOS, Kubernetes, Mesos, rkt, and all other Linux container platforms are welcome.
Hi,
For one container everything is clear (connect to Kafka or other services), but now I have scaled it up to 5 containers. Here is my question: how is traffic load-balanced between the containers?
There are many container technologies, and many container load balancing technologies. You need to provide some information about your configuration. For example, are you using Linux Containers? Docker? Virtuozzo? Proxmox? Kubernetes? Mesos? Docker Swarm?
A few other questions you may want to address: How is the application implemented? How do you scale up? What does "load" mean in the context of your application?
I used HPA, like this:
kubectl autoscale deployment bserver --cpu-percent=50 --min=1 --max=10
and it works under stress. Here I wanted to know how Kubernetes balances traffic among the new containers, and whether there is any other approach to load balancing?
Last edited by hesisaboury; 01-30-2021 at 12:47 PM.
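For reference, the same autoscaler can be written as a manifest instead of a kubectl command. This is a minimal sketch assuming the deployment is named bserver, as in the command above; the autoscaling/v2 API expresses the 50% CPU target like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: bserver
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bserver      # the deployment being autoscaled
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # same as --cpu-percent=50
```

Applying it with kubectl apply -f gives the same behaviour as the kubectl autoscale command, but keeps the autoscaler under version control with the rest of the manifests.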
I'm using a Service plus autoscaling. I defined a Service for one of my microservices, and I want it to scale up to 5 replicas under high load, so the others still see that microservice through the Service I defined. My question is: how is traffic load-balanced among those 5 containers, and is there any way to change it?
I'm not sure, but it seems kube-proxy is load balancing among the containers (L4, round robin), and also that there is no way to change it.
Last edited by hesisaboury; 01-31-2021 at 02:13 AM.
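That matches how a plain ClusterIP Service behaves: every pod matching the Service's selector becomes an endpoint, and kube-proxy (in iptables or IPVS mode) spreads new connections across those endpoints. A minimal sketch of such a Service, with illustrative names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bserver          # illustrative name
spec:
  selector:
    app: bserver         # all pods with this label become endpoints,
                         # including the ones the HPA adds under load
  ports:
  - port: 80             # port clients connect to on the Service
    targetPort: 8080     # port the container listens on
```

You can watch the balancing targets change as the HPA scales with kubectl get endpoints bserver.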
It's not possible to help you if you don't provide information about your setup. If, for example, the service is of type LoadBalancer, it uses the load-balancing functionality of the cloud in which the K8s cluster is running.
You may also have defined an ingress, which also performs load balancing.
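For example, a Service of type LoadBalancer asks the cloud provider to provision an external load balancer in front of the pods. A sketch, with illustrative names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bserver-lb       # illustrative name
spec:
  type: LoadBalancer     # cloud provider allocates an external LB
  selector:
    app: bserver         # same pods as the internal Service
  ports:
  - port: 80
    targetPort: 8080
```

On a cloud cluster the Service gets an external IP once the provider's load balancer is ready; on bare metal or minikube it stays pending unless something like MetalLB is installed.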
As berndbausch suggested, you can use an ingress controller such as NGINX Ingress, Istio, Linkerd, or Traefik.
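With the NGINX ingress controller, for instance, an Ingress resource routes HTTP traffic to the Service, which in turn spreads it over the pods. A sketch, assuming the controller is installed and using illustrative host and service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bserver-ingress        # illustrative name
spec:
  ingressClassName: nginx      # assumes the NGINX ingress controller
  rules:
  - host: bserver.example.com  # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bserver      # the ClusterIP Service in front of the pods
            port:
              number: 80
```

An ingress also gives you L7 features kube-proxy cannot, such as host- and path-based routing, TLS termination, and (controller-dependent) alternative balancing algorithms.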
Let us know whether you have a full K8s cluster or a minikube/microk8s setup.
If you want to know more about Services, Load Balancing, and Networking, refer to https://kubernetes.io/docs/concepts/...es-networking/