
Kubernetes: couldn't validate the identity of the API Server: tcp dial: connect: protocol not available

After setting up the master node, the worker node couldn't join it. I get the error message:

couldn't validate the identity of the API Server: Get "https://{apiserver-ip}/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp {kubeapi-ip}:6443: connect: protocol not available

I am using VMs:

master - 2 CPU, 2 GB memory

worker - 1 CPU, 1 GB memory

Kubeadm init command:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.56.2 

Join command that I used:

sudo kubeadm join 192.168.56.2:6443 --token xt3jug.8sfbnzqb4cnd7wbn \
        --discovery-token-ca-cert-hash sha256:3514a9d85abcfd9d00230beffa7731cebbd51b20e5fa66f10247e0bd473027c8
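
Since the join fails at the discovery step, it's worth ruling out an expired token first (kubeadm tokens are valid for 24 hours by default). A fresh join command can be printed on the master with `kubeadm token create --print-join-command`, and the CA cert hash can be recomputed by hand; the sketch below assumes the default kubeadm CA path `/etc/kubernetes/pki/ca.crt`:

```shell
# Recompute the discovery-token-ca-cert-hash on the master node and compare
# it with the one used in the join command. This is the standard openssl
# pipeline for hashing the CA's public key (DER-encoded, SHA-256).
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

An expired token or wrong hash produces a different discovery error than "protocol not available", though, so this only rules out the easy case.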

And I used flannel with the same pod CIDR that I used with kubeadm init:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Kubernetes version: 1.25.0

Virtual machines provisioned via Vagrant

Using Ubuntu Linux

I tried pinging in every combination between the nodes, as well as between host and guests. Everything is fine, so basic connectivity is not the problem.
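
Note that ping only exercises ICMP, while the failing call is a TCP connect plus TLS handshake on port 6443, so it's worth probing that port specifically. A minimal sketch, assuming the master IP from the question:

```shell
# ping succeeding only proves ICMP works; probe the API server's TCP port
# directly from the worker or host. /version is served by kube-apiserver,
# and -k skips certificate verification for this connectivity test.
APISERVER=192.168.56.2   # master IP from the question
curl -k --max-time 5 "https://${APISERVER}:6443/version" \
  || echo "TCP-level failure, curl exit code: $?"
```

A successful response prints a JSON version object; "protocol not available" here reproduces the same failure that kubeadm join sees.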

I researched the "protocol not available" error heavily on the internet, but it seems nobody has ever faced this issue. I am also concerned about how to handle it if it ever happens in production.
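
One way to narrow the search: the phrase in the error is the kernel's strerror text, and on Linux "protocol not available" corresponds to ENOPROTOOPT (not ECONNREFUSED or EHOSTUNREACH), an errno typically returned by socket-option calls. That suggests the problem lives in the local network stack (kernel modules, iptables/conntrack extensions) rather than in the API server itself, and searching for "ENOPROTOOPT" may turn up more than the English phrase does. A quick check, assuming a Linux host with Python available:

```shell
# Map the human-readable error back to its errno constant: on Linux,
# "Protocol not available" is the strerror text for ENOPROTOOPT (errno 92).
python3 -c 'import errno, os; print(errno.ENOPROTOOPT, os.strerror(errno.ENOPROTOOPT))'
# prints on Linux: 92 Protocol not available
```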

Moreover, the kube-apiserver works fine when queried via kubectl inside the master node, but from outside the master node (worker node, host machine) the same kube-apiserver gives this error. In theory it should not, since the kube-apiserver is meant to be reachable by all clients (assuming certificates and keys are properly in place).

Please look into this issue. I would be grateful if someone could at least suggest what the root cause of such an error might be, or where to look. I am new to this level of networking.

I guess I figured it out. It seems there is some kind of security mechanism built into Kubernetes or kubeadm that I don't know about. Here is what I did.

Initial state: from the host, curl or telnet to the master node at 192.168.56.2:6443 fails.

  • SSH'd into the master node and made a curl/HTTPS request to the host machine (even though it was unsuccessful)

Changed state: Kubernetes now seems to know, from the previous operation, who may make a curl or telnet request to it.

  • went back to the host machine and made the same curl/HTTPS request to the master node: no more "protocol not available" issue

I tested this several times with other workers and it follows the same pattern. It seems the master node or kube-apiserver locks itself up (except for SSH connections to it).

I hope this helps anyone facing the same kind of error in the future and saves them hours of figuring it out.

At this point, I cannot provide any explanation. The only thing I can say is that this is the initial behaviour of Kubernetes when the master node is ready and the worker nodes have not joined yet.

I would really appreciate it if somebody could find the time to explain this behaviour.
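
One speculative lead, since the question mentions Vagrant: Vagrant VMs typically get two interfaces, a NAT eth0 that has the same 10.0.2.15 address on every VM and a host-only interface carrying the 192.168.56.x addresses. Both kubelet and flannel default to the first interface, which is why guides for this kind of setup pass `--node-ip` to the kubelet and `--iface` to flannel (both are real flags; the exact interface name on this particular box is an assumption). Checking which interface actually owns the advertise address is a cheap first step:

```shell
# List IPv4 addresses per interface on each VM. If 192.168.56.x is not on
# the interface kubelet/flannel are using (often the NAT eth0 on Vagrant
# boxes), node-to-node traffic can behave in surprising ways.
ip -o -4 addr show
```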
