
Kube-proxy with IPVS mode doesn't keep a connection

I have a k8s cluster with kube-proxy in IPVS mode and a database cluster outside of k8s.

In order to get access to the DB cluster I created Service and Endpoints resources:

---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306

---
apiVersion: v1
kind: Endpoints
metadata:
  name: database
subsets:
- addresses:
  - ip: 192.168.255.9
  - ip: 192.168.189.76
  ports:
  - port: 3306
    protocol: TCP
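
A quick sanity check, not shown in the original post: confirm that the manually created Endpoints object is actually attached to the Service (resource names as in the manifests above):

kubectl get endpoints database
kubectl describe service database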

Then I run a pod with a MySQL client and try to connect to this service:

mysql -u root -ppassword -h database
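
The post doesn't show how the client pod was started; one way to reproduce it with a throwaway MySQL client pod (the image tag here is just an example) would be:

kubectl run mysql-client --rm -it --image=mysql:8.0 -- bash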

In the network dump I see a successful TCP handshake and a successful MySQL connection. On the node where the pod is running (hereinafter the worker node) I see the following established connection:

sudo netstat-nat -n | grep 3306
tcp   10.0.198.178:52642             192.168.189.76:3306            ESTABLISHED
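
Not part of the original question, but since kube-proxy is running in IPVS mode, the same connection can also be observed in the IPVS connection table on the worker node (assuming ipvsadm is installed there):

sudo ipvsadm -Lnc | grep 3306
# columns: pro / expire / state / source / virtual / destination;
# "expire" is the remaining idle time before IPVS drops the entry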

Then I send some test queries from the pod in the open MySQL session. They are all sent to the same database node, which is the expected behavior.

Then I monitor established connections on the worker node. After about 5 minutes the established connection to the database node disappears.

But in the network dump I see that no TCP finalization packets are sent from the worker node to the database node. As a result, I get a leaked connection on the database node.
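
Not shown in the original post, but such leaked connections would typically be visible on the database node itself as idle sessions, e.g.:

mysql -u root -p -e "SHOW PROCESSLIST"
# leaked connections show up as rows stuck in the Sleep state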

How does IPVS decide to drop an established connection? If IPVS drops a connection, why doesn't it finalize the TCP connection properly? Is this a bug, or am I misunderstanding something about the IPVS mode in kube-proxy?
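
As background, not part of the original thread: IPVS keeps its own per-protocol idle timeouts for connection entries, and when an entry expires it is simply removed from the IPVS table, without IPVS generating any TCP finalization of its own. The values in effect on a node can be inspected with ipvsadm (a sketch, assuming ipvsadm is available on the worker node):

sudo ipvsadm -L --timeout
# prints something like: Timeout (tcp tcpfin udp): 900 120 300
# i.e. idle ESTABLISHED TCP entries, TCP entries after a FIN, and UDP entries

Newer kube-proxy versions also expose these values as ipvs.tcpTimeout, ipvs.tcpFinTimeout and ipvs.udpTimeout in the KubeProxyConfiguration.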

Kube-proxy and Kubernetes don't help to balance persistent connections.

The whole concept of long-lived connections in Kubernetes is well described in this article:

Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others. If you're using HTTP/2, gRPC, RSockets, AMQP or any other long-lived connection such as a database connection, you might want to consider client-side load balancing.

I recommend going through the whole thing but overall it can be summed up with:

  • Kubernetes Services are designed to cover most common uses for web applications.

  • However, as soon as you start working with application protocols that use persistent TCP connections, such as databases, gRPC, or WebSockets, they fall apart.

  • Kubernetes doesn't offer any built-in mechanism to load balance long-lived TCP connections.

  • Instead, you should code your application so that it can retrieve and load balance upstreams client-side (see the sketch below).
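
Not from the original answer, but a minimal sketch of what "retrieve and load balance upstreams client-side" can look like with the resources from the question: a headless variant of the Service (clusterIP: None) makes cluster DNS return all endpoint IPs instead of a single virtual IP, so the client itself can pick and rotate between database nodes. The name database-direct is made up for this example:

---
apiVersion: v1
kind: Service
metadata:
  name: database-direct
spec:
  clusterIP: None          # headless: no virtual IP, no kube-proxy/IPVS in the data path
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306

---
apiVersion: v1
kind: Endpoints
metadata:
  name: database-direct    # must match the Service name
subsets:
- addresses:
  - ip: 192.168.255.9
  - ip: 192.168.189.76
  ports:
  - port: 3306
    protocol: TCP

Inside a pod, getent hosts database-direct should then return both database IPs, and the connection lifetime (plus any retry or failover logic) is handled directly by the client rather than by an IPVS entry on the node.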
