
Update K8 storage class, persistent volume, and persistent volume claim when K8 secret is updated

I have a K8 cluster that has SMB-mounted drives connected to an AWS Storage Gateway / file share. We've recently migrated that SGW to another AWS account, and in the process the IP address and password for that SGW changed.

I noticed that our existing setup has a K8 storage class that looks for a K8 secret called "smbcreds". That secret has the keys "username" and "password". I'm assuming it follows the setup guide for the Helm chart we're using, "csi-driver-smb".
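For reference, here's a minimal sketch of what that secret might look like in Terraform, assuming the plain username/password layout from the csi-driver-smb setup guide (the names and values here are placeholders):

resource "kubernetes_secret" "smbcreds" {
  metadata {
    name      = "smbcreds"  # referenced by the storage class parameters below
    namespace = "default"
  }
  # Placeholder credentials - the real values come from the Storage Gateway file share
  data = {
    username = "example-user"
    password = "example-password"
  }
}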

I assumed changing the secret used by the storage class would update everything downstream that uses that storage class, but apparently it does not. I'm obviously a little cautious when it comes to potentially blowing away important data, so what do I need to do to update everything to use the new secret and IP config?

Here is a simple example of our setup in Terraform -

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

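# Install the SMB CSI driver (csi-driver-smb) from the upstream kubernetes-csi chart repository.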
resource "helm_release" "container_storage_interface_for_aws" {
  count      = 1
  name       = "local-filesystem-csi"
  repository = "https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts"
  chart      = "csi-driver-smb"
  namespace  = "default"
}

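# Storage class pointing at the old Storage Gateway file share. The node-stage-secret
# parameters tell the CSI driver which K8 secret holds the SMB credentials to use
# when staging (mounting) the share on each node.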
resource "kubernetes_storage_class" "aws_storage_gateway" {
  count = 1
  metadata {
    name = "smbmount"
  }
  storage_provisioner = "smb.csi.k8s.io"
  reclaim_policy      = "Retain"
  volume_binding_mode = "WaitForFirstConsumer"
  parameters = {
    source                                           = "//1.2.3.4/old-file-share"
    "csi.storage.k8s.io/node-stage-secret-name"      = "smbcreds"
    "csi.storage.k8s.io/node-stage-secret-namespace" = "default"
  }
  mount_options = ["vers=3.0", "dir_mode=0777", "file_mode=0777"]
}

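# Claim provisioned through the "smbmount" storage class above.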
resource "kubernetes_persistent_volume_claim" "aws_storage_gateway" {
  count = 1
  metadata {
    name = "smbmount-volume-claim"
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    storage_class_name = "smbmount"
  }
}


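# Sample deployment that mounts the SMB share at /data through the PVC.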
resource "kubernetes_deployment" "main" {
  metadata {
    name = "sample-pod"
  }
  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "sample-pod"
      }
    }

    template {
      metadata {
        labels = {
          app = "sample-pod"
        }
      }

      spec {
        volume {
          name = "shared-fileshare"

          persistent_volume_claim {
            claim_name = "smbmount-volume-claim"
          }
        }

        container {
          name              = "ubuntu"
          image             = "ubuntu"
          command           = ["sleep", "3600"]
          image_pull_policy = "IfNotPresent"

          volume_mount {
            name       = "shared-fileshare"
            read_only  = false
            mount_path = "/data"
          }
        }
      }
    }
  }
}

My original change was to update the K8 secret "smbcreds" and change source = "//1.2.3.4/old-file-share" to source = "//5.6.7.8/new-file-share".
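Roughly, the attempted edit to the storage class looked like this (with the "smbcreds" secret updated in place with the new password at the same time):

  parameters = {
    source                                           = "//5.6.7.8/new-file-share"
    "csi.storage.k8s.io/node-stage-secret-name"      = "smbcreds"
    "csi.storage.k8s.io/node-stage-secret-namespace" = "default"
  }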

The solution I settled on was to create a second K8 storage class and persistent volume claim connected to the new AWS Storage Gateway, and then switch the K8 deployments to use the new PVC. The reason editing the original storage class in place didn't propagate is that a storage class is only consumed when a volume is provisioned: the source and secret reference get baked into each persistent volume at that point, so already-bound volumes keep the old connection details.
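A minimal sketch of that second storage class and PVC, assuming a new secret "smbcreds-new" holds the post-migration credentials (all names here are illustrative):

resource "kubernetes_storage_class" "aws_storage_gateway_v2" {
  metadata {
    name = "smbmount-v2"
  }
  storage_provisioner = "smb.csi.k8s.io"
  reclaim_policy      = "Retain"
  volume_binding_mode = "WaitForFirstConsumer"
  parameters = {
    # New Storage Gateway IP/share, and the secret holding the new credentials
    source                                           = "//5.6.7.8/new-file-share"
    "csi.storage.k8s.io/node-stage-secret-name"      = "smbcreds-new"
    "csi.storage.k8s.io/node-stage-secret-namespace" = "default"
  }
  mount_options = ["vers=3.0", "dir_mode=0777", "file_mode=0777"]
}

resource "kubernetes_persistent_volume_claim" "aws_storage_gateway_v2" {
  metadata {
    name = "smbmount-volume-claim-v2"
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    storage_class_name = "smbmount-v2"
  }
}

The deployment then just changes claim_name from "smbmount-volume-claim" to "smbmount-volume-claim-v2". With reclaim_policy = "Retain", the old persistent volume isn't deleted when the old claim goes away, which keeps the existing data safe during the switchover.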
