
Helm install or upgrade release failed on Kubernetes cluster: the server could not find the requested resource or UPGRADE FAILED: no deployed releases

I use Helm to deploy charts on my Kubernetes cluster. Since one day, I can neither deploy a new chart nor upgrade an existing one.

Indeed, each time I use helm I get an error message telling me that it is not possible to install or upgrade resources.

If I run helm install --name foo . -f values.yaml --namespace foo-namespace, I get this output:

Error: release foo failed: the server could not find the requested resource

If I run helm upgrade --install foo . -f values.yaml --namespace foo-namespace or helm upgrade foo . -f values.yaml --namespace foo-namespace, I get this error:

Error: UPGRADE FAILED: "foo" has no deployed releases

I don't really understand why.

This is my helm version:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

On my Kubernetes cluster I have Tiller deployed with the same version. When I run kubectl describe pods tiller-deploy-84b... -n kube-system:

Name:               tiller-deploy-84b8...
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-worker-1/167.114.249.216
Start Time:         Tue, 26 Feb 2019 10:50:21 +0100
Labels:             app=helm
                    name=tiller
                    pod-template-hash=84b...
Annotations:        <none>
Status:             Running
IP:                 <IP_NUMBER>
Controlled By:      ReplicaSet/tiller-deploy-84b8...
Containers:
  tiller:
    Container ID:   docker://0302f9957d5d83db22...
    Image:          gcr.io/kubernetes-helm/tiller:v2.12.3
    Image ID:       docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 26 Feb 2019 10:50:28 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  helm-token-...:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  helm-token-...
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  26m   default-scheduler      Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
  Normal  Pulling    26m   kubelet, k8s-worker-1  pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Pulled     26m   kubelet, k8s-worker-1  Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Created    26m   kubelet, k8s-worker-1  Created container
  Normal  Started    26m   kubelet, k8s-worker-1  Started container

Has anyone faced the same issue?


Update:

This is the folder structure of my actual chart, named foo:

> templates/
  > deployment.yaml 
  > ingress.yaml
  > service.yaml
> .helmignore
> Chart.yaml 
> values.yaml

I have already tried to delete the failed release using the delete command helm del --purge foo, but the same errors occurred.

Just to be more precise, the chart foo is in fact a custom chart using my own private registry. The ImagePullSecret is set up normally.

I have run these two commands, helm upgrade foo . -f values.yaml --namespace foo-namespace --force and helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force, and I still get an error:

UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource

Notice that foo-namespace already exists, so the error does not come from the namespace name or the namespace itself. Indeed, if I run helm list, I can see that the foo release is in a FAILED status.
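
For reference, the release state can also be inspected per revision with the standard Helm 2 commands below (output omitted here):

$ helm list --all
$ helm history foo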

Tiller stores all releases as ConfigMaps in Tiller's namespace (kube-system in your case). Try to find the broken release and delete its ConfigMap using commands like these:

$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE     NAME               DATA   AGE
kube-system   nginx-ingress.v1   1      22h

$ kubectl delete cm  nginx-ingress.v1 -n kube-system
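
If the release has accumulated several revisions, you can target all of them at once; this is a sketch assuming the NAME and STATUS labels that Helm 2's Tiller sets on its release ConfigMaps:

$ kubectl get cm -n kube-system -l 'OWNER=TILLER,NAME=foo'
$ kubectl delete cm -n kube-system -l 'OWNER=TILLER,NAME=foo'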

Next, delete all release objects (deployments, services, ingresses, etc.) manually and reinstall the release using helm again, as sketched below.
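
A minimal cleanup sketch, assuming the chart labels its resources with app=foo (adjust the selector to whatever your templates actually set):

$ kubectl delete deployment,service,ingress -l app=foo -n foo-namespace
$ helm install --name foo . -f values.yaml --namespace foo-namespace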

If that didn't help, you may try to download a newer release of Helm (v2.14.3 at the moment) and update/reinstall Tiller.
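
With Helm 2, after installing the newer client binary, the in-cluster Tiller is typically upgraded to the matching version like this (then verify that client and server report the same version):

$ helm init --upgrade
$ helm version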

I had the same issue, but cleanup did not help, and trying the same Helm chart on a brand-new k8s cluster did not help either.

So I found out that a missing apiVersion caused the problem. I found it by running

helm install xyz --dry-run --debug

copying the output to a new test.yaml file, and then running

kubectl apply -f test.yaml

There I saw the error (the apiVersion line had been moved to a comment line).
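
For illustration, this hypothetical fragment reproduces the situation; kubectl refuses the manifest with a validation error about apiVersion not being set (exact wording varies by version):

$ cat > test.yaml <<'EOF'
# apiVersion: apps/v1   <- accidentally commented out
kind: Deployment
metadata:
  name: foo
EOF
$ kubectl apply -f test.yaml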

I had the same problem, but not due to broken releases: it started after upgrading Helm. It seems newer versions of Helm handle the --wait parameter badly. So for anyone facing the same issue: just removing --wait (and leaving out --debug) from the helm upgrade parameters solved my issue.
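
In other words, a sketch of the change (the other flags shown are illustrative):

$ # before:
$ helm upgrade --install foo . -f values.yaml --namespace foo-namespace --wait --debug
$ # after:
$ helm upgrade --install foo . -f values.yaml --namespace foo-namespace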

I had this issue when I tried to deploy a custom chart with a CronJob instead of a Deployment. The error occurs at the rollout-status step of the deploy script. To resolve it, you need to add the environment variable ROLLOUT_STATUS_DISABLED=true, as solved in this issue.
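
A minimal sketch, assuming the deploy script honors that variable (the script name here is hypothetical; the exact wiring depends on your CI setup):

$ export ROLLOUT_STATUS_DISABLED=true
$ ./deploy.sh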
