
Update K8s storage class, persistent volume, and persistent volume claim when a K8s secret is updated

I have a K8s cluster with SMB-mounted drives connected to an AWS Storage Gateway / file share. We recently migrated that SGW to another AWS account, and in the process the IP address and password for the SGW changed.

I noticed that our existing setup has a K8s storage class that looks for a K8s secret called "smbcreds", containing the keys "username" and "password". I assume this follows the setup guide for the Helm chart we're using, "csi-driver-smb".
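
For reference, the secret looks something like this (the values are placeholders; ours was created outside this Terraform, so the resource below is purely illustrative):

resource "kubernetes_secret" "smbcreds" {
  metadata {
    name      = "smbcreds"
    namespace = "default"
  }

  # csi-driver-smb reads these two keys when staging the SMB mount.
  data = {
    username = "PLACEHOLDER_USERNAME"
    password = "PLACEHOLDER_PASSWORD"
  }

  type = "Opaque"
}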

I assumed that changing the secret used by the storage class would update everything downstream that uses that storage class, but apparently it does not. I'm obviously a little cautious about potentially blowing away important data, so what do I need to do to update everything to use the new secret and IP configuration?

Here is a simple example of our setup in Terraform -

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

resource "helm_release" "container_storage_interface_for_aws" {
  count      = 1
  name       = "local-filesystem-csi"
  repository = "https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts"
  chart      = "csi-driver-smb"
  namespace  = "default"
}

resource "kubernetes_storage_class" "aws_storage_gateway" {
  count = 1
  metadata {
    name = "smbmount"
  }
  storage_provisioner = "smb.csi.k8s.io"
  reclaim_policy      = "Retain"
  volume_binding_mode = "WaitForFirstConsumer"
  parameters = {
    source                                           = "//1.2.3.4/old-file-share"
    "csi.storage.k8s.io/node-stage-secret-name"      = "smbcreds"
    "csi.storage.k8s.io/node-stage-secret-namespace" = "default"
  }
  mount_options = ["vers=3.0", "dir_mode=0777", "file_mode=0777"]
}

resource "kubernetes_persistent_volume_claim" "aws_storage_gateway" {
  count = 1
  metadata {
    name = "smbmount-volume-claim"
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    storage_class_name = "smbmount"
  }
}


resource "kubernetes_deployment" "main" {
  metadata {
    name = "sample-pod"
  }
  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "sample-pod"
      }
    }

    template {
      metadata {
        labels = {
          app = "sample-pod"
        }
      }

      spec {
        volume {
          name = "shared-fileshare"

          persistent_volume_claim {
            claim_name = "smbmount-volume-claim"
          }
        }

        container {
          name              = "ubuntu"
          image             = "ubuntu"
          command           = ["sleep", "3600"]
          image_pull_policy = "IfNotPresent"

          volume_mount {
            name       = "shared-fileshare"
            read_only  = false
            mount_path = "/data"
          }
        }
      }
    }
  }
}

My original change was to update the K8s secret "smbcreds" with the new credentials and to change source = "//1.2.3.4/old-file-share" to source = "//5.6.7.8/new-file-share" in the storage class parameters.

The solution I settled on was to create a second K8s storage class and persistent volume claim pointing at the new AWS Storage Gateway, and then switch the K8s deployments over to the new PVC. The storage class parameters are only consumed when a volume is provisioned, so existing volumes keep the old source (IP/share) baked into the PV; editing the original class in place doesn't change anything that has already been provisioned, which is why the in-place change had no effect.
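
For anyone doing the same thing, a minimal sketch of that approach (the new class and claim names here are my own placeholders, and the class still points at the now-updated "smbcreds" secret):

resource "kubernetes_storage_class" "aws_storage_gateway_new" {
  metadata {
    name = "smbmount-new"
  }
  storage_provisioner = "smb.csi.k8s.io"
  reclaim_policy      = "Retain"
  volume_binding_mode = "WaitForFirstConsumer"
  parameters = {
    # New Storage Gateway IP and file share
    source                                           = "//5.6.7.8/new-file-share"
    "csi.storage.k8s.io/node-stage-secret-name"      = "smbcreds"
    "csi.storage.k8s.io/node-stage-secret-namespace" = "default"
  }
  mount_options = ["vers=3.0", "dir_mode=0777", "file_mode=0777"]
}

resource "kubernetes_persistent_volume_claim" "aws_storage_gateway_new" {
  metadata {
    name = "smbmount-volume-claim-new"
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    storage_class_name = "smbmount-new"
  }
}

In the deployment, the claim_name inside the persistent_volume_claim block then changes from "smbmount-volume-claim" to "smbmount-volume-claim-new", which rolls the pods onto the new share. Because the old class uses reclaim_policy = "Retain", removing the old claim later won't delete the underlying volume, so the switch itself doesn't risk blowing away data on the old share.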
