In a private GKE cluster, achieve a dedicated public IP as the source IP for each pod's outgoing traffic

Requirement: With a private GKE cluster (version 1.21.11-gke.1100), each pod must have a dedicated public IP as its source IP when reaching the internet. This is required only for egress, not for ingress.

Solution tried: Cloud NAT. It works only partially. Suppose we have 10 pods, each running on a distinct node. Cloud NAT does not assign a unique IP to each pod, even when Minimum ports per VM instance is set to its maximum possible value of 57344.

Experiment done: 10 NAT IPs were assigned to the NAT gateway, and 8 pods were created, each running on a dedicated node. Cloud NAT assigned only 3 Cloud NAT IPs instead of 8, even though 10 IPs were available.

Cloud NAT is configured as follows:

Configuration                       Setting
Manual NAT IP address assignment    true
Dynamic port allocation             disabled
Minimum ports per VM instance       57344 (this decides how many VMs can share the same Cloud NAT IP)
Endpoint-Independent Mapping        disabled
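For reference, the configuration above corresponds roughly to the following gcloud commands. This is a sketch: the router, region, and address names are placeholders, and the exact flag spellings (notably for dynamic port allocation and endpoint-independent mapping) should be checked against your gcloud version.

```shell
# Reserve static external IPs to use as manual NAT IPs (repeat for each IP needed).
gcloud compute addresses create nat-ip-1 --region=us-central1
gcloud compute addresses create nat-ip-2 --region=us-central1

# Create the NAT config on an existing Cloud Router: manual IP assignment,
# dynamic port allocation off, endpoint-independent mapping off, and
# min ports per VM at the maximum value.
gcloud compute routers nats create my-nat \
  --router=my-router \
  --region=us-central1 \
  --nat-all-subnet-ip-ranges \
  --nat-external-ip-pool=nat-ip-1,nat-ip-2 \
  --min-ports-per-vm=57344 \
  --no-enable-dynamic-port-allocation \
  --no-enable-endpoint-independent-mapping
```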

Instead of converting to a public GKE cluster, is there an easier way of achieving this goal?

Has anyone ever done such a setup that is proven to work?

You can create a NAT gateway instance and forward the traffic from there.

Here is a Terraform script to create one: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples

https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e

If you are looking to use Cloud NAT with a route, you can check out: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#private-clusters

Terraform code for the NAT: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/terraform/network.tf#L84

Demo architecture: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#demo-architecture
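The route-based NAT-instance approach in the linked demo boils down to something like the following. This is a sketch with placeholder names (`nat-gateway`, the `no-ip` tag); see the Terraform above for a complete, tested setup.

```shell
# A NAT instance that is allowed to forward packets it did not originate.
gcloud compute instances create nat-gateway \
  --zone=us-central1-a \
  --can-ip-forward \
  --tags=nat-gateway

# On the instance itself, enable forwarding and masquerading:
#   sudo sysctl -w net.ipv4.ip_forward=1
#   sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Route all egress from instances tagged 'no-ip' (e.g. the private GKE nodes)
# through the NAT instance instead of the default internet gateway.
gcloud compute routes create no-ip-egress \
  --destination-range=0.0.0.0/0 \
  --next-hop-instance=nat-gateway \
  --next-hop-instance-zone=us-central1-a \
  --tags=no-ip \
  --priority=800
```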

That's expected behavior, because that's what NAT does. Network Address Translation always hides the private IP address of whatever is behind it (in this case a pod or node IP) and forwards traffic to the internet using the public NAT IP. Return traffic comes back to the public NAT IP, which knows which pod to route it back to.

In other words, there is no way with managed Cloud NAT to ensure that each pod in your cluster gets a unique public IP on egress.

The only ways I can see to solve this are to:

  • Create a public GKE cluster with 10 nodes (following your example) and, using taints, tolerations, and a node selector, run each pod on a dedicated node. This way, when a pod egresses to the internet, it uses its node's public IP.
  • Create a multi-NIC GCE instance, deploy a proxy on it (HAProxy, for example), and configure it to route egress traffic through a different interface for each of the pods behind it (note that a multi-NIC instance can have at most 8 interfaces).
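The scheduling part of the first option can be sketched as follows; the node, pod, and taint key names here are hypothetical. Each dedicated node gets a taint and a label, and the matching pod gets the corresponding toleration and node selector, so exactly one pod lands on each node.

```shell
# Reserve node-1 for pod-1: taint it so nothing else schedules there, and label it.
kubectl taint nodes node-1 dedicated=pod-1:NoSchedule
kubectl label nodes node-1 dedicated=pod-1

# pod-1 tolerates the taint and selects the label, so it can only run on node-1.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  nodeSelector:
    dedicated: pod-1
  tolerations:
  - key: dedicated
    operator: Equal
    value: pod-1
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
EOF
```

Repeat per node/pod pair (node-2/pod-2, and so on). On a public cluster, each pod then egresses with its own node's external IP.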
