
Unable to mount NFS on Kubernetes Pod

I am working on deploying the Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Following is the pod where the above claim is mounted on /data:

kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"] 
      volumeMounts:
      - mountPath: "/data"
        name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim

NFS is set up on my EC2 instance. I have verified the NFS server is working fine, and I was able to mount it inside minikube. I don't understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot does not show up in test-shell:/data.

What am I missing? I even tried a hostPath mount, but to no avail. Please help me out.

I think you should check the following things to verify whether NFS is mounted successfully:

  1. Run this command on the node where you want to mount:

    $ showmount -e nfs-server-ip

Like in my case, $ showmount -e 172.16.10.161 gives: Export list for 172.16.10.161: /opt/share *

  2. Use the $ df -hT command to see whether NFS is mounted; in my case it gives the output 172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share

  3. If it is not mounted, use the following command:

    $ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share

  4. If the above commands show an error, check whether the firewall is allowing nfs:

    $ sudo ufw status

  5. If not, allow it using the command:

    $ sudo ufw allow from nfs-server-ip to any port nfs
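
If the node-side checks pass but the share still cannot be mounted, the export itself can be verified on the NFS server (a sketch; the export options shown are common examples, adjust to your setup):

    $ sudo exportfs -v                  # list active exports with their options
    $ cat /etc/exports                  # e.g. /opt/share *(rw,sync,no_subtree_check)
    $ sudo exportfs -ra                 # re-export all directories after editing /etc/exports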

I made the same setup and I don't face any issues. My k8s cluster of Fabric is running successfully. The HF k8s yaml files can be found at my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric, which is a dynamic multi-host blockchain network: you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the go in an existing running blockchain network.

By default in minikube you should have a default StorageClass:

Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.

For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
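
For illustration, a minimal StorageClass manifest showing these fields might look like the following (a sketch only; it uses the minikube hostpath provisioner that appears later in this answer, and the name example-sc is made up):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-sc                      # hypothetical name
provisioner: k8s.io/minikube-hostpath   # which provisioner creates PVs dynamically
reclaimPolicy: Delete                   # what happens to a dynamically provisioned PV when its PVC is deleted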

Change the default StorageClass

In your example this default StorageClass can lead to problems. To list the enabled addons in minikube, please use:

minikube addons list 

To list all StorageClasses in your cluster use:

kubectl get sc
NAME                 PROVISIONER
standard (default)   k8s.io/minikube-hostpath

Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
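
If you need to unmark a StorageClass as the default (for example before promoting another one), the Kubernetes documentation does it with an annotation patch like this (shown here against the standard class from above):

kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'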

In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused: new PV creation (without a StorageClass), and new PVC creation (with a reference to the existing default StorageClass). In this situation there is no binding between your custom PV and PVC. As an example, please take a look:

kubectl get pv,pvc,sc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/nfs                                        3Gi        RWX            Retain           Available                                             50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871   1Gi        RWX            Delete           Bound       default/pvc-nfs   standard                50m

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-nfs   Bound    pvc-8aeb802f-cd95-4933-9224-eb467aaa9871   1Gi        RWX            standard       50m

NAME                                             PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  103m

This example will not work due to:

  • a new persistentvolume/nfs has been created (without a reference to a PVC)
  • a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using the default StorageClass. In the Claim section we can notice that this PV was created by dynamic PV provisioning, using the default StorageClass with a reference to the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs).

Solution 1.

According to the information from the comments:

Also I am able to connect to it within my minikube and also my actual ubuntu system.

So you are able to mount this nfs share from inside the minikube host.

If you mounted the nfs share into your minikube node, please try this example with a hostPath volume directly from your pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
  - name: pv
    hostPath:
      path: /path/shares # path to nfs mount point on minikube node
  containers:
  - name: shell
    image: ubuntu
    command: ["/bin/bash", "-c", "sleep 1000 "]
    volumeMounts:
    - name: pv
      mountPath: /data
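
A hedged sketch of preparing the minikube node for this approach (it assumes the node image includes NFS client utilities; /path/shares is just the placeholder mount point used in the manifest above):

minikube ssh                                              # open a shell on the minikube node
sudo mkdir -p /path/shares                                # create the mount point
sudo mount -t nfs 3.128.203.245:/nfsroot /path/shares     # mount the share from the question
ls /path/shares                                           # files here will appear in the pod at /data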

Solution 2.

If you are using the PV/PVC approach:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default  

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
  volumeName: persistent-volume
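
After applying both manifests the PV and PVC should bind to each other; a quick check (assuming you saved them as pv.yaml and pvc.yaml):

kubectl apply -f pv.yaml -f pvc.yaml
kubectl get pv persistent-volume          # STATUS should be Bound, CLAIM default/persistent-volume-claim
kubectl get pvc persistent-volume-claim   # STATUS should be Bound, VOLUME persistent-volume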

Note:

If you are not referencing any provisioner associated with your StorageClass, helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
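
On Debian/Ubuntu-based nodes this helper typically comes from the nfs-common package (an assumption about the node OS; package names differ on other distributions):

sudo apt-get update
sudo apt-get install -y nfs-common   # provides /sbin/mount.nfs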

Please keep in mind that when you create a PVC, the Kubernetes persistent-controller tries to bind the PVC with a proper PV. During this process different factors are taken into account, such as: storageClassName (default/custom), accessModes, claimRef, volumeName. In this case you can use:

PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume

Note:

The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.

By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.

The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.

Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to figure out the current state:

kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod 
kubectl get events
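
For mount problems specifically, the pod's events usually carry the reason (for example FailedMount); a hedged way to narrow the output for the pod from the question:

kubectl describe pod test-shell | grep -A 10 Events:      # look for FailedMount / mount.nfs errors
kubectl get events --field-selector involvedObject.name=test-shell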
