
Adding a volume to a Kubernetes StatefulSet using kubectl patch

Problem summary:

I am following the Kubernetes guide to set up a sample Cassandra cluster. The cluster is up and running, and I would like to add a second volume to each node, in order to try to enable backups for Cassandra that would be stored on a separate volume.

My attempt at a solution:

I tried editing my cassandra-statefulset.yaml file by adding new volumeMounts and volumeClaimTemplates entries and reapplying it, but got the following error message:

$ kubectl apply -f cassandra-statefulset.yaml 
storageclass.storage.k8s.io/fast unchanged
The StatefulSet "cassandra" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

I then tried to enable rolling updates and patch my configuration following the documentation here: https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/

$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
statefulset.apps/cassandra patched (no change)

My cassandra-backup-patch.yaml:

spec:
  template:
    spec:
      containers:
        volumeMounts:
        - name: cassandra-backup
          mountPath: /cassandra_backup
  volumeClaimTemplates:
  - metadata:
      name: cassandra-backup
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi

However, this resulted in the following error:

$ kubectl patch statefulset cassandra --patch "$(cat cassandra-backup-patch.yaml)"
The request is invalid: patch: Invalid value: "map[spec:map[template:map[spec:map[containers:map[volumeMounts:[map[mountPath:/cassandra_backup name:cassandra-backup]]]]] volumeClaimTemplates:[map[metadata:map[name:cassandra-backup] spec:map[accessModes:[ReadWriteOnce] resources:map[requests:map[storage:1Gi]] storageClassName:fast]]]]]": cannot restore slice from map

Could anyone please point me to the correct way of adding an additional volume for each node, or explain why the patch does not work? This is my first time using Kubernetes, so my approach may be completely wrong. Any comment or help is very welcome; thanks in advance.

The answer is in your first log:

The StatefulSet "cassandra" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

You can't change some fields in a statefulset after creation. You will likely need to delete and recreate the statefulset to add a new volumeClaimTemplate.
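For reference, a sketch of how the relevant sections of the recreated manifest could look, assuming the container and existing data volume are named cassandra and cassandra-data as in the guide's example (verify against your own cassandra-statefulset.yaml). Note that containers must be a YAML list of named entries; the patch above puts volumeMounts directly under containers as a map, which is what triggers the "cannot restore slice from map" error:

spec:
  template:
    spec:
      containers:
      - name: cassandra             # must match the existing container's name
        volumeMounts:
        - name: cassandra-data      # the guide's existing data volume
          mountPath: /cassandra_data
        - name: cassandra-backup    # the new backup volume
          mountPath: /cassandra_backup
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data          # existing claim template, unchanged
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: cassandra-backup        # second claim template for the backup volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi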

edit: It can often be useful to leave your pods running even when you delete the statefulset. To accomplish this, use the --cascade=false flag (--cascade=orphan on kubectl v1.20+) on the delete operation.

kubectl delete statefulset <name> --cascade=false

Then your workload will stay running while you recreate your statefulset with the updated volumeClaimTemplates.
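A sketch of the whole flow with the guide's names (the pods keep serving traffic while the StatefulSet object itself is replaced):

kubectl delete statefulset cassandra --cascade=false   # pods and PVCs stay behind
kubectl apply -f cassandra-statefulset.yaml            # recreate with the added volume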

As mentioned by switchboard.op, deleting is the answer.

but

Watch out for deleting these objects:

  • PersistentVolumeClaim (kubectl get pvc)
  • PersistentVolume (kubectl get pv)

For example, if you do just a helm uninstall instead of kubectl delete statefulset/<item>, these will be deleted too, unless something else still references the volumes. And if you don't have backups of the previous YAMLs that contain the IDs (i.e. not just the ones generated from Helm templates, but the ones recorded by the orchestrator), you might have a painful day ahead of you.
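A simple precaution before any such destructive step is to dump the live PVC and PV objects, IDs and all, to a local file (volumes-backup.yaml is just an arbitrary name):

kubectl get pvc,pv -o yaml > volumes-backup.yaml   # save the live objects, including volume IDs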

PVCs and PVs hold the IDs and other reference properties of the underlying (probably vendor-specific) volume, e.g. S3 or some other object or file storage implementation used in the background as a volume in a Pod or other resources.

Deleting or otherwise modifying a StatefulSet doesn't affect the mounting of the correct resource, as long as you preserve the PVC name within the spec.

If in doubt, always copy the whole volume locally before doing anything destructive to PVCs and PVs that you might still need, or before running commands whose underlying source code you don't know, e.g.:

kubectl cp <some-namespace>/<some-pod>:/var/lib/something /tmp/backup-something

and then load it back by reversing the arguments.
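Reversed, with the same example paths, that would be:

kubectl cp /tmp/backup-something <some-namespace>/<some-pod>:/var/lib/something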

Also, for Helm usage: delete the StatefulSet, then issue a helm upgrade command, and it'll recreate the missing StatefulSet without touching the PVCs and PVs.
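As a sketch, with <name>, <release>, and <chart> as placeholders for your own StatefulSet, Helm release, and chart:

kubectl delete statefulset <name> --cascade=false   # orphan the pods and keep PVCs/PVs
helm upgrade <release> <chart>                      # Helm recreates the missing StatefulSet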
