
API gateway for services running with Kubernetes?

We run all of our services on Kubernetes. We want to know the best practice for deploying our own API gateway, and we have thought of two solutions:

  1. Deploy the API gateways outside the Kubernetes cluster(s), e.g. with Kong. The clusters' ingress would then connect to the external gateways. Each gateway runs on a VM or physical machine, and you scale by replicating gateway instances.

  2. Deploy the gateway inside Kubernetes (and possibly connect it to an external L4 load balancer), e.g. Ambassador. However, with this approach each cluster can only have one gateway, so the only way to achieve fault tolerance would be to replicate the entire Kubernetes cluster.
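As a side note on option 2, an in-cluster gateway is usually deployed as a regular Deployment, so it can be replicated across pods and nodes like any other workload rather than being limited to a single instance per cluster. A minimal sketch (the name, image, and port below are illustrative assumptions, not taken from the question):

```yaml
# Illustrative only: fault tolerance for an in-cluster gateway comes from
# pod replicas, not from duplicating the whole cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway            # hypothetical name
spec:
  replicas: 3                  # three gateway pods; the scheduler spreads them across nodes
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: gateway
        image: envoyproxy/envoy:v1.28.0   # any gateway/proxy image could go here
        ports:
        - containerPort: 8080
```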

What is the typical setup, and which is better?

The typical setup for an API gateway in Kubernetes is either a Service of type LoadBalancer, if your cloud provider supports dynamic provisioning of load balancers (all major cloud vendors such as GCP, AWS, and Azure do), or, even more commonly, an ingress controller.
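The first option from the answer can be sketched as a LoadBalancer Service in front of the gateway pods; the cloud provider then provisions an external load balancer automatically (names and ports below are hypothetical):

```yaml
# Illustrative only: a Service of type LoadBalancer exposing the gateway pods.
apiVersion: v1
kind: Service
metadata:
  name: api-gateway            # hypothetical name
spec:
  type: LoadBalancer           # the cloud provider provisions the external LB
  selector:
    app: api-gateway           # matches the gateway pods' labels
  ports:
  - port: 80                   # externally exposed port
    targetPort: 8080           # port the gateway container listens on
```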

Both of these options can scale horizontally, so you get fault tolerance. In fact, there is already an ingress controller solution built on Kong:

https://github.com/Kong/kubernetes-ingress-controller
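With that controller installed, routing through Kong is just a standard Ingress resource that selects the `kong` ingress class. A minimal sketch (the hostname and backend service name are hypothetical):

```yaml
# Illustrative only: an Ingress routed through the Kong ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: kong       # handled by the Kong ingress controller
  rules:
  - host: api.example.com      # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # hypothetical backend Service
            port:
              number: 80
```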
