Nodeport SFTP / SSH connection timeout when using kube-proxy ipvs
We are currently migrating from Docker Swarm to k8s (bare metal), and we cannot reach an SFTP service running in a pod. The service:
Name: mlflow-artifacts-store
Namespace: mlflow
Labels: app.kubernetes.io/instance=mlflow-artifacts-store
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=mlflow-artifacts-store
helm.sh/chart=mlflow-artifacts-store-0.1.0
Annotations: meta.helm.sh/release-name: mlflow-artifacts-store
meta.helm.sh/release-namespace: mlflow
Selector: app.kubernetes.io/instance=mlflow-artifacts-store,app.kubernetes.io/name=mlflow-artifacts-store
Type: NodePort
IP: 10.233.24.136
Port: ssh 80/TCP
TargetPort: 22/TCP
NodePort: ssh 30001/TCP
Endpoints: 10.233.93.77:22
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
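For reference, the `kubectl describe` output above corresponds roughly to a Service manifest like this (a sketch reconstructed from the fields shown, not the original chart template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mlflow-artifacts-store
  namespace: mlflow
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: mlflow-artifacts-store
    app.kubernetes.io/name: mlflow-artifacts-store
  ports:
    - name: ssh
      port: 80          # service port inside the cluster
      targetPort: 22    # sshd in the pod
      nodePort: 30001   # exposed on every node
      protocol: TCP
```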
But I cannot even reach it from the same server (timeout added for the demonstration):
OpenSSH_7.4p1 Debian-10+deb9u7, OpenSSL 1.0.2u 20 Dec 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.233.24.136 [10.233.24.136] port 30001.
debug1: fd 3 clearing O_NONBLOCK
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4p1 Debian-10+deb9u7
Connection timed out during banner exchange
Couldn't read packet: Connection reset by peer
The server itself is reachable from a different pod in the same namespace, so my guess is that it is related to how the NodePort exposes itself, or to its configuration.
Exposing the service with a hostPort works, but I would rather not expose it that way. What am I missing?
So after discussing the whole thing on GitHub, it turns out the behavior is intended. When using kube-proxy with ipvs, the service is not reachable via the ipvs0 interface. https://github.com/kubernetes/kubernetes/issues/93674#issuecomment-669200021
The ipvs0 interface is down by default; it is just a dummy device that carries no traffic. The documentation says:
The default value for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
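If you want the NodePort served only on the nodes' real interfaces, the same flag (or the `nodePortAddresses` field of the `KubeProxyConfiguration`) accepts a list of CIDRs. A sketch, where `192.168.0.0/24` is an assumed node network, not taken from the question:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
nodePortAddresses:
  - 192.168.0.0/24   # placeholder: the CIDR of the nodes' real interfaces
```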
But ipvs0 is not reachable, and this is intended. So the answer is: you should use the node's real address.
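In practice that means pointing the client at a node's InternalIP rather than the ClusterIP used in the debug log above. A sketch (the username is a placeholder, and the port is the NodePort from the service description):

```shell
# Look up a real node address (InternalIP) - not the ClusterIP and not the ipvs0 dummy device
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# Connect to the NodePort (30001) on that address; "user" is a placeholder
sftp -P 30001 "user@${NODE_IP}"
```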