
Allowing Intra-Cluster Communication with nginx in Kubernetes

I am facing a problem with my current k8s setup. In production, I spin up three replicas of each of our services, each replica running in its own pod. When the services speak to each other, we would like the traffic to go to each of the target service's pods in a round-robin fashion. Unfortunately, the connection between pods is never terminated, thanks to TLS keep-alive - and we don't want to change that part specifically - but we do want every pod of a service to receive traffic. This is roughly what we have now:

[Diagram: how the services talk to each other]

If the API service tries to talk to, say, the OSS service, it will only ever talk to the first replica. I want API to be able to talk to all three replicas in a round-robin fashion.

How do I do this? I understand that I will need an Ingress Controller, like nginx. But is there some real tutorial that breaks down how I can achieve this? I am unsure and somewhat new to k8s. Any help would be appreciated!

By the way, I am working locally on minikube.

Edit:

In production, we spin up three replicas of each service. When service A needs to speak to service B, a pod B1 from service B is selected and handles whatever requests it receives. However, that pod B1 then becomes the only pod from service B that handles any communication; in other words, pods B2 and B3 are never spoken to. I am trying to solve this with nginx because it seems like we need a load balancer, but I'm not sure how to set it up. Can anyone give a detailed explanation of what needs to be done? Specifically, how can I set up nginx with my services so that all pods in a service are used (in some round-robin fashion), instead of what happens now, where only one pod is used? This is a problem because in production that one pod gets overloaded with requests and dies while the other two pods sit there doing nothing. I'm developing locally on minikube.
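In case it helps, here is a minimal sketch of roughly what one of our Deployment/Service pairs looks like (the name service-b, the image, and the ports are placeholders, not our real config):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
      - name: service-b
        image: example.com/service-b:latest   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: service-b
  ports:
  - port: 80          # port the other services call
    targetPort: 8080  # container port
EOF

Service A reaches this through the service-b Service name, and, as described above, all of its traffic ends up on a single replica because the keep-alive connection is never re-balanced.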

I'm assuming that you have a microservice architecture underneath your pods, right? Have you considered using Istio with Kubernetes? It's open source, developed by Google, IBM, and Lyft; the intention is to give developers a vendor-neutral way (which seems to be what you are looking for) to connect, secure, manage, and monitor networks of different microservices on cloud platforms (AWS, Azure, Google, etc.).

At a high level, Istio helps reduce the complexity of these deployments and eases the strain on your development teams. It is a completely open-source service mesh that layers transparently onto existing distributed applications. It is also a platform, with APIs that let it integrate into any logging, telemetry, or policy system. Istio's diverse feature set lets you run a distributed microservice architecture successfully and efficiently, and provides a uniform way to secure, connect, and monitor microservices.

This is the link to Istio's documentation explaining in detail how to set up a multi-cluster environment, which is what you are looking for.

There's a note in the documentation that I would like to highlight, since it may be related to your issue:

Since Kubernetes pods don't have stable IPs, restart of any Istio service pod in the control plane cluster will cause its endpoint to be changed. Therefore, any connection made from remote clusters to that endpoint will be broken. This is documented in Istio issue #4822.

There are a number of ways to either avoid or resolve this scenario. This section provides a high level overview of these options.

  • Update the DNS entries
  • Use a load balancer service type
  • Expose the Istio services via a gateway

I'm quoting the load balancer solution, since it seems to be what you want:

In Kubernetes, you can declare a service with a service type to be LoadBalancer. A simple solution to the pod restart issue is to use load balancers for the Istio services. You can then use the load balancer IPs as the Istio services' endpoint IPs to configure the remote clusters.
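To make that concrete, a minimal sketch of a LoadBalancer Service could look like the following (my-service and the app label are placeholders; adjust them to match your pods):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  type: LoadBalancer      # request an external load balancer
  selector:
    app: my-app           # placeholder; must match your pod labels
  ports:
  - port: 80              # port exposed by the load balancer
    targetPort: 8080      # placeholder container port
EOF

Note that on minikube there is no cloud load balancer, so the external IP will stay pending unless you run minikube tunnel in a separate terminal.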

I hope it helps, and if you have any question, shoot!

A very simple example of how to balance your backend pods using a Kubernetes Service is mentioned here.

Your replicas should be managed by Kubernetes itself, as described in the link: create your pods somewhat like in the example below, and then follow the steps to create the Service pointing to these pods.

kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0  --port=8080

By doing this, Kubernetes will ensure the load is distributed evenly among all your running pods.
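If you follow that example through, the next step is to expose the deployment as a Service, something along these lines (example-service is just the name used in that tutorial):

kubectl expose deployment hello-world --type=NodePort --name=example-service

kubectl describe services example-service    # shows the NodePort and the pod endpoints

Every new connection arriving through the Service is then distributed by kube-proxy across the pods behind it.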

In your case, you might want to look at the way you created your pods and Services. One way to be sure that your Services are set up correctly is to run the command below; the result should show multiple ENDPOINTS, i.e. a set of IP:port pairs pointing to your individual replica pods, something like the example output displayed below.

kubectl get endpoints --all-namespaces

NAMESPACE     NAME                      ENDPOINTS                                                  AGE
kube-system   kube-dns                  10.244.0.96:53,10.244.0.97:53,10.244.0.96:53 + 1 more...   1d

Well, if you are really interested in setting up an nginx ingress, this would be a good start. But a simple LoadBalancer Service within Kubernetes should suffice for your current requirement.
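For completeness, a minimal starting point on minikube would be to enable the ingress addon and point a rule at the Service created above (example-ingress and example.local are placeholder names):

minikube addons enable ingress

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # hypothetical name
spec:
  rules:
  - host: example.local      # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # the Service created earlier
            port:
              number: 8080
EOF

Because the nginx ingress proxies at the HTTP level, it balances individual requests rather than connections, which also works around the keep-alive pinning described in the question.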
