
How does Traffic Flow inside a Kubernetes Cluster?

(While learning Kubernetes I never really found any good resources explaining this.)

Scenario:
I own mywebsite1.com and mywebsite2.com and I want to host them both inside a Kubernetes Cluster.

I deploy a generic cloud ingress controller according to the following website, with 2 kubectl apply -f <url> commands (mandatory.yaml and the generic cloud ingress.yaml):
https://kubernetes.github.io/ingress-nginx/deploy/
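For reference, the two commands looked roughly like this; the exact manifest URLs have moved around between ingress-nginx releases, so treat these paths as illustrative of the old two-file layout rather than as current:

# namespace, RBAC, and the deployment of the nginx ingress controller pods
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
# the generic-cloud Service of type LoadBalancer that exposes the controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml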

So the question is: what does that architecture look like, and how does the data flow into the cluster?

I convert 2 certificates to 2 .key and 2 .crt files.
I use those files to make 2 TLS secrets (1 for each website, so they'll have HTTPS enabled).
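A sketch of creating those two secrets with kubectl; the secret and file names here are hypothetical:

kubectl create secret tls website1-tls --cert=website1.crt --key=website1.key
kubectl create secret tls website2-tls --cert=website2.crt --key=website2.key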

I create 2 Ingress objects (a YAML sketch of the first one appears after this list):

  • one that says website1.com/, points to a service called website1fe, and references website1's HTTPS/TLS certificate secret.
    (The website1fe service only listens on port 80, and forwards traffic to pods spawned by a website1fe deployment.)

  • the other says website2.com/, points to a service called website2fe, and references website2's HTTPS/TLS certificate secret.
    (The website2fe service only listens on port 80, and forwards traffic to pods spawned by a website2fe deployment.)
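Here's a minimal sketch of the first Ingress, using the networking.k8s.io/v1 API (clusters from the era of this question used extensions/v1beta1 instead); the secret name website1-tls is an assumption carried over from the TLS step above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website1
spec:
  ingressClassName: nginx            # route through the nginx ingress controller
  tls:
  - hosts:
    - website1.com
    secretName: website1-tls         # the TLS secret created earlier (name assumed)
  rules:
  - host: website1.com               # Host-header-based routing selects this rule
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website1fe         # the website1fe Service from the bullet above
            port:
              number: 80

The website2 Ingress is identical with the website2 names substituted.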

I have a 3-node Kubernetes cluster that exists in a private subnet.
The nodes have the IPs:

 10.1.1.10     10.1.1.11     10.1.1.12

When I ran the 2 kubectl apply -f <url> commands, they generated the following (a sketch of the LoadBalancer Service appears after this list):

  • A Kubernetes deployment containing pods running Nginx L7 LB software, which declaratively configure themselves based on the Ingress .yaml objects stored in etcd. Because the nginx L7 LB pods are self-configuring, they're referred to as Ingress Controller pods. (These nginx ingress controller pods listen on ports 80 and 443.)
  • A Kubernetes Service of type LoadBalancer. Service type LoadBalancer uses NodePorts behind the scenes (NodePort is safe to use when the nodes have private IPs). The NodePorts are picked randomly from the range 30000-32767, and the cloud APIs automatically link the cloud LB to whichever random NodePorts were picked; alternatively, you can use a Service of type NodePort and gain the option to pick the NodePorts explicitly. For clarity's sake I'll say the NodePort service is listening on ports 30080 and 30443 of every node in the cluster. A cloud LB gets auto-provisioned with a public IP address and exists outside of the cluster (using default settings), and it auto-routes traffic to the NodePorts that the Ingress Controller is exposed on. (An example of traffic flow: LB:443 --> NP:30443 --> IngressControllerPod:443 --> Grafana:3000)
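Roughly what that auto-generated LoadBalancer Service looks like, as a minimal sketch: the nodePort values are pinned here only to match the 30080/30443 example (a real manifest normally omits them so they're picked randomly), and the name, namespace, and selector label are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer                        # asks the cloud API to provision an external L4 LB
  selector:
    app.kubernetes.io/name: ingress-nginx   # matches the ingress controller pods (label assumed)
  ports:
  - name: http
    port: 80                                # port on the cloud LB
    targetPort: 80                          # port the controller pods listen on
    nodePort: 30080                         # normally omitted; picked from 30000-32767
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443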

kubectl get svc --all-namespaces
gives the IPv4 address of the L4 LB (let's say it's the publicly routable IP 1.2.3.4).
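For illustration, the relevant line of that output might look something like this (the CLUSTER-IP is made up; EXTERNAL-IP on the LoadBalancer Service is the address DNS should point at, and PORT(S) shows the port:NodePort pairs):

NAMESPACE       NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
ingress-nginx   ingress-nginx   LoadBalancer   10.0.50.10   1.2.3.4       80:30080/TCP,443:30443/TCP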

Since I own both domains, I configure internet DNS so that website1.com and website2.com both point to 1.2.3.4.

Note: The ingress controller is cloud-provider aware, so it automatically did the following reverse proxy / load balancing configuration:

L4LB 1.2.3.4:80 --(LB between)--> 10.1.1.10:30080, 10.1.1.11:30080, 10.1.1.12:30080
L4LB 1.2.3.4:443 --(LB between)--> 10.1.1.10:30443, 10.1.1.11:30443, 10.1.1.12:30443

KubeProxy makes it so that requests on any node's port 30080 or 30443 get forwarded inside the cluster to the Nginx L7 LB / Ingress Controller Service, which then forwards the traffic to the L7 Nginx LB pods.
The L7 Nginx LB pods terminate* the HTTPS connection and forward traffic to the website1.com and website2.com services, which are listening on unencrypted port 80.
(It's OK that it's unencrypted, because we're inside the cluster where no one should be sniffing the traffic.) (*Note: sometimes the cloud LB terminates HTTPS and then forwards to the ingress controller over cleartext port 80, but this isn't so bad because the cleartext only travels over private IP space.)
(The Nginx L7 LB knows which inner-cluster service/website to forward to based on the L7 address, i.e. the HTTP Host/URL, that the traffic is coming in on.)
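A quick hypothetical smoke test of that host-based routing from outside the cluster (--resolve pins each domain to the LB IP so it works even before DNS is set up; -k skips certificate verification):

curl -k --resolve website1.com:443:1.2.3.4 https://website1.com/
curl -k --resolve website2.com:443:1.2.3.4 https://website2.com/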


Note a mistake to avoid: Let's say that website1.com wants to access some resources that exist on website2.com.

Well, website2.com actually has 2 IP addresses and 2 DNS names:
website2fe.default.svc.cluster.local <-- inner-cluster resolvable DNS address
website2.com <-- externally resolvable DNS address

Instead of having website1 access resources via website2.com, you should have website1 access resources via website2fe.default.svc.cluster.local (it's more efficient routing: the traffic stays inside the cluster instead of hairpinning out through the cloud LB and back in).
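For example, a call from a website1fe pod to website2 over the in-cluster address might look like this (the /some/resource path is made up, and it assumes curl is available in the image):

kubectl exec deploy/website1fe -- curl -s http://website2fe.default.svc.cluster.local/some/resource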
