
Access a NodePort Service in a private GKE cluster from another private GKE cluster

I am using Google Cloud, and I have two private GKE clusters.

One of them exposes some services as NodePort. The other cluster needs to connect to this one and access those exposed services.

The cluster with the exposed services has only one node, which has a private IP. I can successfully ping this node from the other cluster using this private IP.

But how can I access the services?

I also tried to configure some firewall rules, with no success.

Please take a look at the example below, which shows how to make a connection to a NodePort service between two private GKE clusters:

This example will use two GKE clusters:

  • gke-private-cluster-main - this will be the cluster with a simple hello-app
  • gke-private-cluster-europe - this cluster will be able to communicate with the main cluster

To keep things simple, each cluster will have only one node.
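
As a minimal sketch, such a single-node private cluster could be created with gcloud as shown below; the zone, master CIDR and the exact set of flags are assumptions and will differ in your environment:

# Hypothetical example - zone and master CIDR are placeholders
gcloud container clusters create gke-private-cluster-main \
    --zone europe-west3-a \
    --num-nodes 1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28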

Create a deployment and a service on gke-private-cluster-main

Below is a simple example of hello-app and a Service which will expose hello-app on node port 30051:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
    - name: hello-port
      port: 50001
      targetPort: 50001
      nodePort: 30051
  type: NodePort

Apply it and check the internal IP address of the node that runs these pods. You can check it with either:

  • GCP -> Compute Engine -> VM Instances
  • kubectl get nodes -o wide

In my case it was 10.156.0.2.
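
As a quick sketch, assuming the manifest above is saved as hello-app.yaml (a hypothetical file name), applying it and reading the node's internal IP could look like this:

# Deploy hello-app and its NodePort service on gke-private-cluster-main
kubectl apply -f hello-app.yaml

# Print the InternalIP of the first node
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'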

Try to access it from gke-private-cluster-europe

You can SSH into a node of gke-private-cluster-europe and invoke the command from that node: curl 10.156.0.2:30051. You should be able to communicate with this service and get output like the one below:

Hello, world!
Version: 1.0.0
Hostname: hello-5d79ccdb55-vrrbs
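
For reference, SSH-ing into the node can be done with gcloud; the instance name and zone below are placeholders, and since private nodes have no external IP you may need to tunnel through IAP or use a bastion host:

# Hypothetical instance name and zone - substitute your own values
gcloud compute ssh gke-gke-private-cluster-europe-default-pool-abcd1234-xyz1 \
    --zone europe-west4-a \
    --tunnel-through-iap
curl 10.156.0.2:30051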

To check connectivity from inside a pod, you will need an image that already has curl built in. The internet is a place for all kinds of awesome things, and in fact there is an image with curl available. You can spawn a pod with curl using the YAML below:

apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
  - image: curlimages/curl
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
    name: curl
  restartPolicy: Always

After applying the above YAML, you can exec into the pod and check for yourself with the commands below:

  • $ kubectl exec -it curl -- /bin/sh
  • $ curl 10.156.0.2:30051

The output from inside the cluster will look like this:

curl: (28) Failed to connect to 10.156.0.2 port 30051: Operation timed out

It worked from the node, but it does not work from a pod.

Allow the traffic:

To allow the above network connectivity you will need to:

  • Open Google Cloud Platform
    • Check the network tag of the gke-private-cluster-main node (it can also be fetched with gcloud, as shown in the sketch after this list):
      • Go to Compute Engine
      • Find the node of gke-private-cluster-main
      • Click on it to get more details
      • Copy the network tag, which should look similar to: gke-gke-private-cluster-main-80fe50b2-node
    • Check the pod address range of gke-private-cluster-europe:
      • Go to Kubernetes Engine
      • Find your gke-private-cluster-europe
      • Click on it to get more details
      • Copy the pod address range, which should look similar to: 10.24.0.0/14
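
Both values can also be looked up with gcloud; in the sketch below the instance name and zones are assumptions:

# Network tags of the gke-private-cluster-main node (hypothetical instance name and zone)
gcloud compute instances describe gke-gke-private-cluster-main-80fe50b2-abcd \
    --zone europe-west3-a --format='value(tags.items)'

# Pod address range of gke-private-cluster-europe
gcloud container clusters describe gke-private-cluster-europe \
    --zone europe-west4-a --format='value(clusterIpv4Cidr)'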

With the network tag and the pod range copied, you can create your firewall rule. Please go to:

VPC Network -> Firewall rules -> Create a firewall rule

[Screenshot: firewall rule settings]

Please take a specific look at the parts where the network tag and the pod range IP are used, as they will be different for you.
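
The same rule can also be created from the command line; this is only a sketch, and the rule name, network, tag and source range are assumptions based on the example values above:

# Allow pods of gke-private-cluster-europe to reach the NodePort on the main cluster's node
gcloud compute firewall-rules create allow-europe-pods-to-main-nodeport \
    --network default \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp:30051 \
    --source-ranges 10.24.0.0/14 \
    --target-tags gke-gke-private-cluster-main-80fe50b2-node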

Apply it and check again whether a pod in gke-private-cluster-europe can access 10.156.0.2:30051.

It should give you the output below:

Hello, world!
Version: 1.0.0
Hostname: hello-5d79ccdb55-6s8xh

Please let me know if you have any questions about this.

This is not supported by Google; only a directly attached VPC can access GKE on that VPC: https://issuetracker.google.com/issues/244483997

Connecting to a private GKE cluster using a 3rd-party VPN
