
Exposing kubernetes cluster to "the world" without Load Balancer of cloud provider

Thus far I've set up a kubernetes cluster that runs my NodeJS deployment. I am now ready to expose it to "the world" and, after reading up on the services that do this, I believe all of them require a Load Balancer. Usually these Load Balancers are created by the cloud provider hosting kubernetes. I came across several limitations with these: some are priced highly, some have limits on connections, etc.

I am now trying to figure out how to avoid these Load Balancers and still expose my kubernetes cluster in a performant, secure and manageable way. I've looked through the documentation and there are mentions of things like NodePort and Ingress. As far as I understood, NodePort only works for a single machine in the cluster? And Ingress still requires traffic to come from somewhere, usually a Load Balancer.

This is my current manifest. Where should I go from here in terms of exposing it to the public, ideally with a method that allows SSL certs, rate limiting, etc. (the usual stuff you'd need in production)?

development.yaml

---
# ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: development-actions-cip
spec:
  type: ClusterIP
  selector:
    app: development-actions
  ports:
    - protocol: TCP
      port: 80
      targetPort: 4000
---
# Actions NodeJS server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-actions
spec:
  replicas: 1
  selector:
    matchLabels:
      app: development-actions
  template:
    metadata:
      labels:
        app: development-actions
    spec:
      containers:
        - image: my-image:latest # note ':' (not '/') separates image name and tag
          name: development-actions
          ports:
            - containerPort: 4000
              protocol: TCP

There are several ways to solve the problem:

  1. You can use MetalLB, which is popularly used for bare-metal deployments. It provides a network load balancer.
  2. If you do not want to use the load balancer provided by the cloud provider, you can build a custom load balancer with the help of a reverse proxy (for example an Nginx service). This can be a dedicated machine loaded only with routing and load-balancing capabilities. The ingress controller you create after this can be allowed to take traffic from that machine. This is user-defined edge creation.
  3. As mentioned in the solution above, you can use hostNetwork: true with your nginx-ingress Pods so that those machines can be accessed directly over the machine network.
  4. You can use externalIP directly with the nginx ingress pods, assigning a public IP to the service and connecting to it over the internet.

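To illustrate option 1, a minimal MetalLB configuration might look like the following. This is a sketch that assumes MetalLB is already installed in the cluster; the address range is a placeholder and must be replaced with IPs actually routed to your nodes.

```yaml
# IPAddressPool and L2Advertisement are MetalLB CRDs (metallb.io/v1beta1).
# The address range below is a placeholder -- use addresses you own.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10-203.0.113.20
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool
---
# With the pool in place, an ordinary LoadBalancer Service is assigned
# an external IP from the pool by MetalLB.
apiVersion: v1
kind: Service
metadata:
  name: development-actions-lb
spec:
  type: LoadBalancer
  selector:
    app: development-actions
  ports:
    - protocol: TCP
      port: 80
      targetPort: 4000
```

Once applied, `kubectl get svc development-actions-lb` should show an EXTERNAL-IP from the configured pool instead of `<pending>`.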
For more information and setup details, visit the official documentation for Nginx ingress at: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service

I have tried all of these options for deploying my application, and my suggestion would be: if you are using a cloud service to deploy your cluster, use the cloud provider's load balancer, as it is much more secure, highly available, and reliable. If you are using on-premise deployments, go for user-defined edge creation or the MetalLB service.

You could deploy the nginx ingress controller on a selected, dedicated kubernetes node using hostNetwork: true. This means nginx will listen on ports 80 and 443 on the host VM's network. Assign a floating public IP to the VM, then add that public IP as an A record in your DNS provider's configuration to route traffic for your domain to the VM.
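As a sketch of that approach, the relevant fragment of the ingress-nginx controller's pod template would look roughly like this (the node label and image tag are illustrative; the full manifests come from the ingress-nginx project):

```yaml
# Fragment of an ingress-nginx controller Deployment/DaemonSet pod spec.
# hostNetwork: true binds the controller to ports 80/443 on the node itself.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # so in-cluster DNS still resolves
      nodeSelector:
        ingress-ready: "true"             # assumed label on the dedicated edge node
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.10.0
          ports:
            - containerPort: 80
            - containerPort: 443
```

Label the dedicated node (e.g. `kubectl label node <node-name> ingress-ready=true`) so the controller is scheduled onto the machine that holds the public IP.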

Then, for all the backend pods, just create a ClusterIP service and an Ingress resource to expose them to the outside world.
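For the manifest in the question, that Ingress resource could look like the following sketch. The hostname is a placeholder, the TLS secret is assumed to be provisioned separately (e.g. by cert-manager), and the rate-limit annotation is an optional ingress-nginx feature addressing the "rate limiting" requirement from the question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: development-actions-ingress
  annotations:
    # optional ingress-nginx rate limiting: max requests/second per client IP
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls   # assumed TLS secret, e.g. issued by cert-manager
  rules:
    - host: app.example.com         # placeholder; point its A record at the node's public IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: development-actions-cip  # the ClusterIP Service from the question
                port:
                  number: 80
```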

To make it highly available, you could replicate the same setup on more than one kubernetes node.

