
Exposing a service in (GKE) Kubernetes with only an internal IP

TL;DR In a GKE private cluster, I'm unable to expose a service with an internal/private IP.

We have a deployment consisting of around 20 microservices and 4 monoliths, currently running entirely on VMs on Google Cloud. I'm trying to move this infrastructure to GKE. The first step of the project is to build a private GKE cluster (i.e. without any public IP) as a replacement for our staging environment. As this is staging, I need to expose all the microservice endpoints, along with the monolith endpoints, internally for debugging purposes (that is, only to clients connected to the VPC), and that is where I'm stuck. I tried 2 approaches:

  1. Put an internal load balancer (ILB) in front of each service and monolith. Example:
apiVersion: v1
kind: Service
metadata:
  name: session
  annotations:
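    # provision an internal (VPC-only) load balancer instead of an external one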
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: session
    type: ms
spec:
  type: LoadBalancer
  selector:
    app: session
  ports:
  - name: grpc
    port: 80
    targetPort: 80
    protocol: TCP

[Screenshot omitted: GCP console view of the private GKE cluster and its internal load balancer IPs]
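Once the Service is applied and the ILB is provisioned, the assigned internal IP can also be read straight from the Service status; a quick check against the session manifest above:

kubectl get service session \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'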

This works, though with a severe limitation. Each ILB creates a forwarding rule, and GCP limits a network to 75 forwarding rules. With roughly 24 services and monoliths per cluster, that is about 24 forwarding rules per cluster, so we can not build more than 3 clusters in a network. Not acceptable to us.
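To see how quickly that quota is consumed, each ILB-type Service creates one internal forwarding rule; a quick way to count them in the active project with the gcloud CLI:

# count internal forwarding rules; each one counts against the per-network limit
gcloud compute forwarding-rules list \
  --filter="loadBalancingScheme=INTERNAL" \
  --format="value(name)" | wc -l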

  2. a. I tried placing an ingress controller in front of all the services, which always exposes the entire cluster with a public IP - an absolute no-no.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
metadata:
  name: ingress-ms-lb
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gs03
    http:
      paths:
      - path: /autodelivery/*
        backend:
          serviceName: autodelivery
          servicePort: 80
      - path: /session/*
        backend:
          serviceName: session
          servicePort: 80

b. I tried using an nginx ingress controller, which ends up not having an IP at all (see the controller Service sketch after the manifest below):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-ms-lb
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    # cloud.google.com/load-balancer-type: "Internal"
    nginx.ingress.kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.class: "nginx"
    # nginx.ingress.kubernetes.io/whitelist-source-range: 10.100.0.0/16, 10.110.0.0/16
spec:
  rules:
  - host: svclb
    http:
      paths:
      - path: /autodelivery/*
        backend:
          serviceName: autodelivery
          servicePort: 80
      - path: /session/*
        backend:
          serviceName: session
          servicePort: 80
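With an in-cluster nginx ingress controller, the Ingress resource only reports an address once the controller's own Service is exposed; the Ingress itself never provisions one, which is why the manifest above ends up with no IP. A minimal sketch, assuming the controller runs in an ingress-nginx namespace with the usual labels (both are assumptions, not from the question), of fronting the controller with a single internal load balancer; all Ingress rules then share this one IP and one forwarding rule:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx          # assumed controller namespace
  annotations:
    # same internal-LB annotation as in approach 1, applied once to the controller
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP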

The third option is to configure firewall rules that cut off all access to the public IPs. This was rejected internally, given the security concerns.

I'm stuck at this stage and need some pointers to move forward. Please help.

I can see from the screenshot you attached that your GKE cluster is a private cluster.

Since you would like to reach the services and applications inside the GKE cluster from all resources in the same VPC network, I would suggest using a Service of type NodePort [1].

[1] https://cloud.google.com/kubernetes-engine/docs/concepts/service#service_of_type_nodeport
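For reference, a minimal NodePort variant of the session Service from the question; the nodePort value 30080 below is an arbitrary choice from the default 30000-32767 range, and can be omitted to let Kubernetes pick one:

apiVersion: v1
kind: Service
metadata:
  name: session
  labels:
    app: session
    type: ms
spec:
  type: NodePort
  selector:
    app: session
  ports:
  - name: grpc
    port: 80
    targetPort: 80
    nodePort: 30080   # optional; omit to have a free port allocated
    protocol: TCP

In a private cluster the nodes only have internal IPs, so clients in the same VPC can reach the service at <node-internal-ip>:30080 without consuming any forwarding rules.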
