
NodePort SFTP / SSH connection timeout when using kube-proxy ipvs

We are currently migrating from Docker Swarm to k8s (bare metal), and we can't reach the SFTP service in the pod. Service:

Name:                     mlflow-artifacts-store
Namespace:                mlflow
Labels:                   app.kubernetes.io/instance=mlflow-artifacts-store
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=mlflow-artifacts-store
                          helm.sh/chart=mlflow-artifacts-store-0.1.0
Annotations:              meta.helm.sh/release-name: mlflow-artifacts-store
                          meta.helm.sh/release-namespace: mlflow
Selector:                 app.kubernetes.io/instance=mlflow-artifacts-store,app.kubernetes.io/name=mlflow-artifacts-store
Type:                     NodePort
IP:                       10.233.24.136
Port:                     ssh  80/TCP
TargetPort:               22/TCP
NodePort:                 ssh  30001/TCP
Endpoints:                10.233.93.77:22
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
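For orientation, the Service above maps the cluster-internal port 80 on ClusterIP 10.233.24.136 to the pod's sshd at 10.233.93.77:22, and additionally opens NodePort 30001 on the nodes. A minimal sketch of the two intended access paths (the `user` account and `<node-address>` are placeholders):

```shell
# Inside the cluster: ClusterIP + service port (80) reaches sshd
sftp -P 80 user@10.233.24.136

# From outside: a NodePort is paired with a *node* address, not the ClusterIP
sftp -P 30001 user@<node-address>
```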

But I can't reach it even from the same server (timeout added for demonstration):

OpenSSH_7.4p1 Debian-10+deb9u7, OpenSSL 1.0.2u  20 Dec 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.233.24.136 [10.233.24.136] port 30001.
debug1: fd 3 clearing O_NONBLOCK
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4p1 Debian-10+deb9u7
Connection timed out during banner exchange
Couldn't read packet: Connection reset by peer

The server itself is reachable from a different pod in the same namespace, so I suspect it has something to do with the NodePort exposure itself or its configuration.

Exposing the service with hostPort works, but I don't want to expose it like this. What am I missing?

So after discussing the whole thing on GitHub, this behaviour turns out to be intended. When using kube-proxy with ipvs, the service is not reachable via the ipvs0 interface. https://github.com/kubernetes/kubernetes/issues/93674#issuecomment-669200021

ipvs0 is down by default and is just a dummy network interface that carries no traffic. The documentation says:

The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.

But ipvs0 is not reachable - this is intended. So the answer is: you should use a real address of the node.
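Concretely, the failing command above targeted the ClusterIP (10.233.24.136) on the NodePort (30001); that address lives on the ipvs dummy interface, which does not serve NodePorts. A sketch of the working approach (192.168.1.10 is a hypothetical node IP; list your own with kubectl):

```shell
# List node addresses (INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide

# Connect to the NodePort on a real node interface instead of the ClusterIP
sftp -P 30001 user@192.168.1.10
```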
