Mounting AWS Secrets Manager secrets in a Kubernetes/Helm chart

I have created an application deployment on an AWS EKS cluster that is deployed using Helm. For my app to work properly, I need to set environment variables whose values are secrets stored in AWS Secrets Manager. Following a tutorial, I set up my values.yaml file like this:

secretsData:
  secretName: aws-secrets
  providerName: aws
  objectName: CodeBuild

Now I have created a SecretProviderClass, as AWS recommends, in secret-provider.yaml:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-provider-class
spec:
  provider: {{ .Values.secretsData.providerName }}
  parameters:
    objects: |
      - objectName: "{{ .Values.secretsData.objectName }}"
        objectType: "secretsmanager"
        jmesPath:
          - path: SP1_DB_HOST
            objectAlias: SP1_DB_HOST
          - path: SP1_DB_USER
            objectAlias: SP1_DB_USER
          - path: SP1_DB_PASSWORD
            objectAlias: SP1_DB_PASSWORD
          - path: SP1_DB_PATH
            objectAlias: SP1_DB_PATH
  secretObjects:
    - secretName: {{ .Values.secretsData.secretName }}
      type: Opaque
      data:
        - objectName: SP1_DB_HOST
          key: SP1_DB_HOST
        - objectName: SP1_DB_USER
          key: SP1_DB_USER
        - objectName: SP1_DB_PASSWORD
          key: SP1_DB_PASSWORD
        - objectName: SP1_DB_PATH
          key: SP1_DB_PATH
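
For reference, my understanding is that with syncing enabled the driver should create a Kubernetes Secret roughly like this sketch (the values shown are placeholders for the base64-encoded data pulled from the CodeBuild secret via the jmesPath entries above):

# Sketch of the Secret the driver is expected to sync; values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: aws-secrets
type: Opaque
data:
  SP1_DB_HOST: <base64 value from Secrets Manager>
  SP1_DB_USER: <base64 value from Secrets Manager>
  SP1_DB_PASSWORD: <base64 value from Secrets Manager>
  SP1_DB_PATH: <base64 value from Secrets Manager>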

I mount this secret object in my deployment.yaml; the relevant section of the file looks like this:

          volumeMounts:
            - name: secrets-store-volume
              mountPath: "/mnt/secrets"
              readOnly: true
          env:
            - name: SP1_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.secretsData.secretName }}
                  key: SP1_DB_HOST
            - name: SP1_DB_PORT
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.secretsData.secretName }}
                  key: SP1_DB_PORT

Further down in the same deployment file, I define secrets-store-volume as:

      volumes:
        - name: secrets-store-volume
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aws-secret-provider-class

All required drivers are installed in the cluster, and permissions are set accordingly.
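
By permissions I mean, for example, that the pod's service account is annotated with an IAM role that is allowed to read the secret (an illustrative IRSA sketch; the names and ARN below are placeholders, not my actual values):

# Illustrative only: service account tied to an IAM role (IRSA) that can call
# secretsmanager:GetSecretValue on the CodeBuild secret.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-secrets-role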

With helm install mydeployment helm-folder/ --dry-run I can see that all the files and values are populated as expected. Then with helm install mydeployment helm-folder/ I install the deployment into my cluster, but with kubectl get all I can see that the pod is stuck at Pending with the warning Error: 'aws-secrets' not found, and it eventually times out. In the AWS CloudTrail log I can see that the cluster made a request to access the secret and there was no error fetching it. How can I solve this, or how can I debug it further? Thank you for your time and effort.

Error: 'aws-secrets' not found: it looks like the CSI driver isn't creating the Kubernetes Secret that you're referencing values from.

Since the YAML files look correct, I would say it's probably the CSI driver's configuration for syncing to a Kubernetes Secret: syncSecret.enabled, which is false by default. Note also that the driver only creates the synced Secret after a pod that mounts the CSI volume starts, so the Secret will not exist until the mount succeeds.

So make sure that secrets-store-csi-driver is installed with this flag set to true, for example:

helm upgrade --install csi-secrets-store \
  --namespace kube-system secrets-store-csi-driver/secrets-store-csi-driver \
  --set grpcSupportedProviders="aws" --set syncSecret.enabled="true"
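
After upgrading the driver, restart your deployment so a pod attempts the mount again, then verify with commands along these lines (the pod name is a placeholder, and the label assumes the chart's default labels):

# The synced Secret should exist once a pod mounts the volume
kubectl get secret aws-secrets

# Pod events usually say why a mount or secret lookup failed
kubectl describe pod <your-pod-name>

# Driver logs show whether the secret objects were fetched and synced
kubectl -n kube-system logs -l app=secrets-store-csi-driver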
