How to configure the 'efs-provider' for Kubernetes?

I have followed the steps from this guide to deploy the efs-provider for Kubernetes and bind an EFS filesystem, but I have not succeeded.

I am implementing Kubernetes with Amazon EKS and I use EC2 instances as worker nodes; all are deployed using eksctl.

After I applied this adjusted manifest file, the result is:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS
efs-provisioner-#########-#####   1/1     Running   0       

$ kubectl get pvc
NAME       STATUS    VOLUME
test-pvc   Pending   efs-storage

No matter how long I wait, the status of my PVC is stuck in Pending.

After the creation of a Kubernetes cluster and worker nodes and the configuration of the EFS filesystem, I apply the efs-provider manifest with all the variables pointing to the EFS filesystem. In the StorageClass configuration file, the spec.accessModes field is specified as ReadWriteMany.
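For reference, this is a minimal sketch of the objects such a manifest creates, with placeholder values for the filesystem ID, region, and storage request (the ConfigMap keys match the deployment description further down; the Deployment itself is omitted here):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-12345678   # placeholder filesystem ID
  aws.region: us-east-1         # placeholder region
  provisioner.name: aws-efs     # must match the StorageClass provisioner
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws-efs            # see the fix at the end of this post
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi              # placeholder size; EFS does not enforce it
EOF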

At this point my efs-provider pod is running without errors and the status of the PVC is Pending. What can the problem be? How can I configure the efs-provider to use the EFS filesystem? How long should I wait for the PVC status to become Bound?


Update

About the configuration of Amazon Web Services, this is what I have done:

  • After the creation of the EFS filesystem, I have created a mount point for each subnet where my nodes are.
  • Each mount point has an attached security group with an inbound rule granting access to the NFS port (2049) from the security group of each nodegroup (sketched below).
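For illustration, this is roughly how those two steps look with the AWS CLI; all IDs are placeholders, and the first command has to be repeated for each subnet:

# Create a mount target for the EFS filesystem in a node subnet.
aws efs create-mount-target \
    --file-system-id fs-12345678 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0

# Allow inbound NFS (TCP 2049) from the nodegroup's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2049 \
    --source-group sg-0fedcba9876543210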

The description of my EFS security group is:

{
    "Description": "Communication between the control plane and worker nodes in cluster",
    "GroupName": "##################",
    "IpPermissions": [
        {
            "FromPort": 2049,
            "IpProtocol": "tcp",
            "IpRanges": [],
            "Ipv6Ranges": [],
            "PrefixListIds": [],
            "ToPort": 2049,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-##################",
                    "UserId": "##################"
                }
            ]
        }
    ],
    "OwnerId": "##################",
    "GroupId": "sg-##################",
    "IpPermissionsEgress": [
        {
            "IpProtocol": "-1",
            "IpRanges": [
                {
                    "CidrIp": "0.0.0.0/0"
                }
            ],
            "Ipv6Ranges": [],
            "PrefixListIds": [],
            "UserIdGroupPairs": []
        }
    ],
    "VpcId": "vpc-##################"
}

Deployment

The output of the kubectl describe deploy ${DEPLOY_NAME} command is:

$ DEPLOY_NAME=efs-provisioner; \
> kubectl describe deploy ${DEPLOY_NAME}
Name:               efs-provisioner
Namespace:          default
CreationTimestamp:  ####################
Labels:             app=efs-provisioner
Annotations:        deployment.kubernetes.io/revision: 1
                    kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"efs-provisioner","namespace":"default"},"spec"...
Selector:           app=efs-provisioner
Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           app=efs-provisioner
  Service Account:  efs-provisioner
  Containers:
   efs-provisioner:
    Image:      quay.io/external_storage/efs-provisioner:latest
    Port:       <none>
    Host Port:  <none>
    Environment:
      FILE_SYSTEM_ID:    <set to the key 'file.system.id' of config map 'efs-provisioner'>    Optional: false
      AWS_REGION:        <set to the key 'aws.region' of config map 'efs-provisioner'>        Optional: false
      DNS_NAME:          <set to the key 'dns.name' of config map 'efs-provisioner'>          Optional: true
      PROVISIONER_NAME:  <set to the key 'provisioner.name' of config map 'efs-provisioner'>  Optional: false
    Mounts:
      /persistentvolumes from pv-volume (rw)
  Volumes:
   pv-volume:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    fs-#########.efs.##########.amazonaws.com
    Path:      /
    ReadOnly:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   efs-provisioner-576c67cf7b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  106s  deployment-controller  Scaled up replica set efs-provisioner-576c67cf7b to 1

Pod Logs

The output of the kubectl logs ${POD_NAME} command is:

$ POD_NAME=efs-provisioner-576c67cf7b-5jm95; \
> kubectl logs ${POD_NAME}
E0708 16:03:46.841229       1 efs-provisioner.go:69] fs-#########.efs.##########.amazonaws.com
I0708 16:03:47.049194       1 leaderelection.go:187] attempting to acquire leader lease  default/kubernetes.io-aws-efs...
I0708 16:03:47.061830       1 leaderelection.go:196] successfully acquired lease default/kubernetes.io-aws-efs
I0708 16:03:47.062791       1 controller.go:571] Starting provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!
I0708 16:03:47.062877       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"kubernetes.io-aws-efs", UID:"f7c682cd-a199-11e9-80bd-1640944916e4", APIVersion:"v1", ResourceVersion:"3914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5 became leader
I0708 16:03:47.162998       1 controller.go:620] Started provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!

StorageClass

The output of the kubectl describe sc ${STORAGE_CLASS_NAME} command is:

$ STORAGE_CLASS_NAME=aws-efs; \
> kubectl describe sc ${STORAGE_CLASS_NAME}
Name:            aws-efs
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"aws-efs"},"provisioner":"aws-efs"}
Provisioner:           aws-efs
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

PersistentVolumeClaim

The output of the kubectl describe pvc ${PVC_NAME} command is:

$ PVC_NAME=efs; \
> kubectl describe pvc ${PVC_NAME}
Name:          efs
Namespace:     default
StorageClass:  aws-efs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"aws-efs"},"name":"...
               volume.beta.kubernetes.io/storage-class: aws-efs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age                 From                         Message
  ----     ------              ----                ----                         -------
  Warning  ProvisioningFailed  43s (x12 over 11m)  persistentvolume-controller  no volume plugin matched
Mounted By:  <none>

About the questions

  1. Do you have the EFS filesystem id properly configured for your efs-provisioner?

    • Yes, both (the one from the filesystem and the one configured) match.
  2. Do you have the proper IAM credentials to access this EFS?

    • Yes, my user has them, and the eksctl tool also configures them.
  3. Does the EFS path specified for your provisioner exist?

    • Yes, it is only the root (/) path.
  4. Did you add an EFS endpoint to the subnet that your worker node(s) are running on, or ensure that your EFS subnets have an Internet Gateway attached?

    • Yes, I have added the EFS endpoints to the subnet that my worker node(s) are running on.
  5. Did you set your security group to allow inbound traffic on the NFS port(s)?

    • Yes (a manual NFS mount, sketched below, can confirm this).
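To double-check item 5 from inside the VPC, a manual NFS mount from a worker node is a quick test; the DNS name below is a placeholder built from the filesystem ID and region:

# Run on a worker node (requires the NFS client utilities).
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs-test

If the mount hangs, the security group rule on the mount target is the usual suspect.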

I have solved my issue by changing the provisioner name of my StorageClass from kubernetes.io/aws-efs to just aws-efs.

As we can read in this issue comment posted on GitHub by wongma7:

The issue is that the provisioner is kubernetes.io/aws-efs. It can't begin with kubernetes.io as that is reserved by Kubernetes.

That solves the ProvisioningFailed events produced on the PersistentVolumeClaim by the persistentvolume-controller.
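Concretely, the only change is the provisioner field of the StorageClass, which has to match the PROVISIONER_NAME the efs-provisioner pod is configured with; a minimal sketch:

# The provisioner name must NOT begin with kubernetes.io/, which is
# reserved; a custom name like 'aws-efs' matches the pod's PROVISIONER_NAME.
cat <<'EOF' | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws-efs
EOF

After recreating the StorageClass, the pending PVC should be picked up by the provisioner and move to Bound within a few seconds.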
