Kubernetes NFS persistent volumes permission denied

I have an application running in a pod on Kubernetes. I would like to store some output log files on a persistent storage volume.

In order to do that, I created a volume over NFS and bound it to the pod through the related volume claim. When I try to write to or access the shared folder, I get a "permission denied" message, since the NFS share is apparently read-only.

The following is the JSON file I used to create the volume:

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "task-pv-test"
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "nfs": {
      "server": <IPAddress>,
      "path": "/export"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "standard"
  }
}

The following is the pod configuration file:

kind: Pod
apiVersion: v1
metadata:
  name: volume-test
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: volume-test
      image: <ImageName>
      volumeMounts:
        - mountPath: /home
          name: task-pv-test-storage
          readOnly: false
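
For reference, the failure can be reproduced from inside the running pod with something like the following (the exact error text is illustrative; the pod name comes from the manifest above):

kubectl exec volume-test -- touch /home/test.log
# touch: cannot touch '/home/test.log': Permission denied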

Is there a way to change permissions?


UPDATE

Here are the PVC and NFS config:

PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-test-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
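
As a quick sanity check (commands assumed to run against the same cluster; output omitted), both objects should show a Bound status once the claim matches the volume:

kubectl get pv task-pv-test
kubectl get pvc task-pv-test-claim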

NFS CONFIG

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nfs-client-provisioner-557b575fbc-hkzfp",
    "generateName": "nfs-client-provisioner-557b575fbc-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
    "uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
    "resourceVersion": "27228",
    "creationTimestamp": "2018-04-17T12:26:35Z",
    "labels": {
      "app": "nfs-client-provisioner",
      "pod-template-hash": "1136131967"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "ReplicaSet",
        "name": "nfs-client-provisioner-557b575fbc",
        "uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "nfs-client-root",
        "nfs": {
          "server": <IPAddress>,
          "path": "/Kubernetes"
        }
      },
      {
        "name": "nfs-client-provisioner-token-fdd2c",
        "secret": {
          "secretName": "nfs-client-provisioner-token-fdd2c",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "nfs-client-provisioner",
        "image": "quay.io/external_storage/nfs-client-provisioner:latest",
        "env": [
          {
            "name": "PROVISIONER_NAME",
            "value": "<IPAddress>/Kubernetes"
          },
          {
            "name": "NFS_SERVER",
            "value": <IPAddress>
          },
          {
            "name": "NFS_PATH",
            "value": "/Kubernetes"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "nfs-client-root",
            "mountPath": "/persistentvolumes"
          },
          {
            "name": "nfs-client-provisioner-token-fdd2c",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "nfs-client-provisioner",
    "serviceAccount": "nfs-client-provisioner",
    "nodeName": "det-vkube-s02",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ]
  },
  "status": {
    "phase": "Running",
    "hostIP": <IPAddress>,
    "podIP": "<IPAddress>,
    "startTime": "2018-04-17T12:26:35Z",
    "qosClass": "BestEffort"
  }
}

I have just removed some status information from the NFS config to make it shorter.

If you set the proper securityContext for the pod configuration, you can make sure the volume is mounted with the proper permissions.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    fsGroup: 2000 
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
  - name: demo
    image: example-image
    volumeMounts:
    - name: task-pv-test-storage
      mountPath: /data/demo

In the above example the storage will be mounted at /data/demo with group ID 2000, which is set by fsGroup. By setting fsGroup, all processes of the container will also be part of the supplementary group ID 2000, so you should have access to the mounted files.
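
One way to confirm this (a sketch, assuming the pod name demo from the example above) is to check the container's supplementary groups and the group ownership of the mount:

kubectl exec demo -- id
# the groups list should include 2000
kubectl exec demo -- ls -ld /data/demo
# the directory should be group-owned by GID 2000 and group-writable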

You can read more about the pod security context here: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

Thanks to 白栋天 for the tip. For instance, if the pod securityContext is set to:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

you would SSH to the NFS host and run:

chown -R 1000:1000 /some/nfs/path
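
To verify on the NFS host that the numeric IDs now match what the pod runs as (ls -ln shows numeric owner and group instead of names):

ls -ln /some/nfs/path
# owner and group should both show 1000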

If you do not know the user:group, or many pods will mount it, you can run:

chmod -R 777 /some/nfs/path

A simple way is to access the NFS storage and chmod 777 it, or to chown it with the user ID used in your volume-test container.

I'm a little confused about how you're trying to get things done. In any case, if I'm understanding you correctly, try this example:

  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: kube-system
      labels:
        k8s-app: something
        monitoring: something
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
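
Note that volumeClaimTemplates is a StatefulSet field, so the snippet above would sit inside something like the following minimal skeleton (names and image are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: something
spec:
  serviceName: something
  replicas: 1
  selector:
    matchLabels:
      k8s-app: something
  template:
    metadata:
      labels:
        k8s-app: something
    spec:
      containers:
        - name: something
          image: <ImageName>
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    # ... the template from above goes here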

And then maybe use an init container to do something:

initContainers:
  - name: prometheus-init
    image: /something/bash-alpine:1.5
    command:
      - chown
      - -R
      - 65534:65534
      - /data
    volumeMounts:
      - name: data
        mountPath: /data
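
For the chown in the init container to actually help, the main container would typically need to run as that UID. A minimal sketch, assuming the same data claim (65534 is conventionally the nobody user, which Prometheus images commonly run as):

containers:
  - name: prometheus
    image: /something/prometheus:latest   # placeholder image
    securityContext:
      runAsUser: 65534
    volumeMounts:
      - name: data
        mountPath: /data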

Or is it the volumeMounts you're missing:

volumeMounts:
  - name: config-volume
    mountPath: /etc/config
  - name: data
    mountPath: /data

My last comment would be to take note of containers: I think you're only allowed to write in /tmp, or was that just for CoreOS? I'd have to look that up.

Have you checked the permissions of the directory? Make sure read access is available to all.
