
Routing internal traffic in Kubernetes?

We presently have a setup where applications within our mesos/marathon cluster want to reach out to services which may or may not reside in that same cluster. Ingress for external traffic into the cluster is accomplished via an Amazon ELB sitting in front of a cluster of Traefik instances. Traefik then chooses the appropriate set of container instances to load-balance to by comparing the incoming HTTP Host header against an essentially many-to-one association of configured host headers to a particular container set. Internal-to-internal traffic is handled by this same route as well, because the DNS record associated with a given service maps to that same ELB both inside and outside our mesos/marathon cluster. We also allow multiple DNS records to point at the same container set.

This setup works, but causes seemingly unnecessary network traffic and load on our ELBs and our Traefik cluster. Ideally, the applications in the containers (or some other component) would be able to determine on their own that a service they wish to call resides within the same mesos/marathon cluster, and then call either something internal to the cluster that fronts the set of containers, or the specific container directly.

From what I understand of Kubernetes, it provides the concept of Services, which essentially act as the front for a set of pods, based on configuration of which pods the Service should match. However, I'm not entirely sure of the mechanism by which applications in a Kubernetes cluster can transparently direct network traffic to the Service IPs. I think some of this could be helped by having Envoy proxy traffic meant for, e.g., <application-name>.<cluster-name>.company.com to the Service name, but if we have a CNAME that maps to that previous DNS entry (say, <application-name>.company.com), I'm not entirely sure how we can avoid exiting the cluster.

Is there a good way to solve for both cases? We are trying to avoid having our applications' logic understand that they are sitting in a particular cluster, and would prefer a component outside of the applications to perform the routing appropriately.

If I am fundamentally misunderstanding a particular component, I would gladly appreciate correction!

When you are using service-to-service communication inside a cluster, you are using the Service abstraction, which acts as a stable entry point that routes traffic to the right pods.

A Service endpoint is available only from inside the cluster, by its IP or by an internal DNS name provided by the internal Kubernetes DNS server. So, for communication inside the cluster, you can use DNS names like <servicename>.<namespace>.svc.cluster.local.

But, what is more important, a Service has a static IP address.
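A minimal Service manifest sketch illustrating this (the name, namespace, selector, and the pinned clusterIP value are illustrative assumptions; a pinned address must fall inside your cluster's service CIDR, or you can omit clusterIP and let Kubernetes allocate one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-first-internal-service
  namespace: default
spec:
  # Pinning clusterIP gives the Service a known, stable in-cluster address.
  clusterIP: 10.0.1.23
  selector:
    app: my-first-app
  ports:
  - port: 80        # port the Service exposes
    targetPort: 8080  # port the pods listen on
```

Inside the cluster, this Service is also resolvable as my-first-internal-service.default.svc.cluster.local.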

So, now you can add that static IP as a hosts record to the pods inside the cluster to make sure that they communicate with each other without leaving the cluster.

For that, you can use the hostAliases feature. Here is an example configuration:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "10.0.1.23"
    hostnames:
    - "my.first.internal.service.example.com"
  - ip: "10.1.2.3"
    hostnames:
    - "my.second.internal.service.example.com"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

So, if you use your internal Service IP in combination with the service's public FQDN, all traffic from your pod will stay 100% inside the cluster, because the application will use the internal IP address.

Also, you can use an upstream DNS server which contains the same aliases; the idea is the same. With an upstream DNS server for the separate zone, resolution works like that: (diagram: resolving the separate zone through the upstream DNS server)
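With kube-dns, such a separate zone can be delegated via a stub domain in the kube-dns ConfigMap. A sketch, assuming a hypothetical internal resolver at 10.0.0.50 that answers for company.com with in-cluster Service IPs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # Forward all lookups under company.com to the internal resolver
  # (10.0.0.50 is an illustrative address, not from the original post).
  stubDomains: |
    {"company.com": ["10.0.0.50"]}
```

Pods using the cluster DNS will then resolve names under company.com through that resolver instead of the public DNS, without any per-pod hostAliases entries.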

With newer versions of Kubernetes, which use CoreDNS to provide the DNS service and offer more features, it becomes a bit simpler.
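For example, CoreDNS's rewrite plugin can map a public FQDN to the in-cluster Service name at resolution time. A Corefile sketch (the hostnames are illustrative; adapt them to your zone and Service):

```
.:53 {
    errors
    health
    # Rewrite the public name to the in-cluster Service name before
    # resolving, so the lookup is answered by the kubernetes plugin
    # and traffic never leaves the cluster.
    rewrite name my-first-internal-service.company.com my-first-internal-service.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

This keeps the applications unaware of the cluster they run in: they keep calling the public FQDN, and the cluster DNS does the routing.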

Disclaimer: the technical posts on this site are licensed under CC BY-SA 4.0; please credit the original source when reposting.
