
Deterministic connection to cloud-internal IP of K8S service or its underlying endpoint?

I have a Kubernetes cluster (1.3.2) in GKE and I'd like to connect VMs and services from my Google project, which shares the same network as the cluster.

Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?

I know there's a ton of things you can do to unambiguously determine the IP and port of services, such as the environment variables and DNS... but the clusterIP is not reachable outside of the cluster (obviously).

Is there something I'm missing? An important component of this is that the service is meant to be "public" to the project, such that I don't know in advance which VMs in the project will want to connect to it (which could rule out loadBalancerSourceRanges). I understand that the endpoint the service actually wraps is an internal IP I can hit, but the only good way to get to that IP is through the Kube API or kubectl, neither of which is a prod-ideal way of reaching my service.
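For reference, the "not prod-ideal" approach mentioned above — reading the Service's Endpoints object from the Kube API — boils down to flattening the Endpoints resource into IP:port pairs. A minimal sketch of that parsing, assuming you've already fetched the Endpoints JSON body (the field names follow the v1 Endpoints schema; the sample values are illustrative only):

```python
def endpoint_addresses(endpoints: dict) -> list:
    """Flatten a v1 Endpoints object into (ip, port) pairs.

    `endpoints` is assumed to be the JSON body returned by
    GET /api/v1/namespaces/<ns>/endpoints/<name>.
    """
    pairs = []
    for subset in endpoints.get("subsets", []):
        for addr in subset.get("addresses", []):
            for port in subset.get("ports", []):
                pairs.append((addr["ip"], port["port"]))
    return pairs

# Example shape of an Endpoints body (illustrative values, not from a real cluster):
eps = {
    "subsets": [
        {
            "addresses": [{"ip": "10.8.1.4"}, {"ip": "10.8.2.7"}],
            "ports": [{"port": 8080, "protocol": "TCP"}],
        }
    ]
}
print(endpoint_addresses(eps))  # → [('10.8.1.4', 8080), ('10.8.2.7', 8080)]
```

The drawback, as noted, is that every consumer VM would need Kube API credentials and polling logic just to find the service.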

Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.

In its simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that SPOF (single point of failure) scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.
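Concretely, a bastion route is just a GCE route resource whose destRange is the cluster's service IP range and whose next hop is one of the GKE nodes. A sketch of building that resource body with google-api-python-client (the project, network, CIDR, node name, and zone below are placeholders, not values from the question; the dict fields follow the Compute Engine v1 routes schema):

```python
def make_bastion_route(project, network, dest_range, node_name, zone, priority=1000):
    """Build a Compute Engine route resource that sends traffic destined
    for dest_range (e.g. the cluster's service IP range) to one GKE node."""
    prefix = f"https://www.googleapis.com/compute/v1/projects/{project}"
    return {
        "name": f"bastion-route-{node_name}",
        "network": f"{prefix}/global/networks/{network}",
        "destRange": dest_range,
        "nextHopInstance": f"{prefix}/zones/{zone}/instances/{node_name}",
        "priority": priority,
    }

# Placeholder values for illustration only:
route = make_bastion_route(
    "my-project", "default", "10.11.240.0/20",
    "gke-cluster-node-1", "us-central1-a",
)
# With an authenticated client you would then submit it, e.g.:
# compute.routes().insert(project="my-project", body=route).execute()
```

For the round-robin variant, you would create one such route per node at the same priority; GCE splits traffic across equal-priority routes to the same destination.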

If that management overhead isn't something you want to take on going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint and make sure that you have a live bastion route to at least N nodes at any given time.
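The core of such a controller is a reconcile step: drop routes that point at nodes which are gone, and add routes until N live nodes are covered. A sketch of just that decision logic (the function and its inputs are hypothetical names for this answer; the real controller would feed it node names from the Nodes API and route names from the GCE routes API, then execute the returned changes):

```python
def reconcile(live_nodes, routed_nodes, n):
    """Decide which bastion routes to delete and create.

    live_nodes:   set of node names currently Ready in the cluster
    routed_nodes: set of node names that currently have a bastion route
    n:            desired number of live bastion routes
    Returns (to_delete, to_create) as sets of node names.
    """
    to_delete = routed_nodes - live_nodes        # routes pointing at dead nodes
    healthy_routed = routed_nodes & live_nodes   # routes that are still good
    deficit = max(0, n - len(healthy_routed))
    candidates = sorted(live_nodes - healthy_routed)  # deterministic pick order
    to_create = set(candidates[:deficit])
    return to_delete, to_create

# Node "x" has been deleted, so its route goes; one new route is
# needed to get back to n=2 live bastion routes.
to_delete, to_create = reconcile({"a", "b", "c"}, {"a", "x"}, 2)
print(to_delete, to_create)  # → {'x'} {'b'}
```

Running this on every Nodes watch event keeps the route set converged without any manual upkeep.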

GCP internal load balancing was just released as alpha, so in the future, kube-proxy on GCP could be implemented using it, which would eliminate the need for bastion routes to handle internal services.


 