
mount failed: exit status 32 when using an EBS volume on Kubernetes

I'm a little confused about setting up a k8s cluster on AWS. I'm trying to use an EBS volume as persistent storage but can't find enough information (am I missing something, or is https://kubernetes.github.io/cloud-provider-aws/ all the documentation there is for the AWS provider?).

When I apply a deployment config to my cluster, the output from kubectl describe pods is:

  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Normal   Scheduled               7s               default-scheduler        Successfully assigned default/mssql-deploy-67885c9f84-9xx7c to ip-172-31-0-215.sa-east-1.compute.internal
  Normal   SuccessfulAttachVolume  4s               attachdetach-controller  AttachVolume.Attach succeeded for volume "mssql-volume"
  Warning  FailedMount             3s (x4 over 6s)  kubelet                  MountVolume.SetUp failed for volume "mssql-volume" : mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-02efbeface5569c51 /var/lib/kubelet/pods/01537252-4323-4e7c-9f05-a2a730498ecd/volumes/kubernetes.io~aws-ebs/mssql-volume
Output: mount: /var/lib/kubelet/pods/01537252-4323-4e7c-9f05-a2a730498ecd/volumes/kubernetes.io~aws-ebs/mssql-volume: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-02efbeface5569c51 does not exist.

I set up the master node with what I think are the necessary prerequisites for the AWS provider: set the hostname to the private DNS name, add the cloud-provider: aws extraArgs to the ClusterConfiguration, and attach the IAM roles to the EC2 instances as described in the cloud-provider-aws docs (the control-plane role to the master, the node role to the nodes).
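
For reference, this is roughly what the cloud-provider part of that kubeadm config looks like; a minimal sketch assuming the in-tree AWS provider and the v1beta1 kubeadm API (your apiVersion and other fields may differ):

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws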

Then I join the node to the cluster with the following file (via kubeadm join --config node.yaml):

apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "TOKEN-FROM-MASTER"
    apiServerEndpoint: "IP-PORT-FROM-MASTER"
    caCertHashes:
      - "SHA-FROM-MASTER"
nodeRegistration:
  name: $(hostname)
  kubeletExtraArgs:
    cloud-provider: aws
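
Note that kubeadm itself does not expand shell syntax such as $(hostname) inside the YAML, so the placeholder has to be resolved before joining. A sketch of one way to do that (file names are just examples):

# Substitute the node's hostname into the config, then join the cluster
sed "s/\$(hostname)/$(hostname)/" node.yaml > node-resolved.yaml
sudo kubeadm join --config node-resolved.yaml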

Since the cluster was able to attach the volume (I verified this in the AWS console), I think the problem is with the kubelet on the node.

Although the AWS provider documentation is quite thin, the solution turned out to be simple. In reality my searches were misdirected; the important part of the error is at the end, where it says "special device... does not exist".

With that I found this answer here on Stack Overflow: https://stackoverflow.com/a/60325491/1169158.

In the end, all we needed to do was add the flag --cloud-provider=aws to /var/lib/kubelet/kubeadm-flags.env on all nodes plus the master.
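
Concretely, on each machine that means editing the KUBELET_KUBEADM_ARGS line and restarting the kubelet. A rough sketch (the other flags in the file will differ per cluster, so don't copy this verbatim):

# /var/lib/kubelet/kubeadm-flags.env (generated by kubeadm; other flags omitted)
KUBELET_KUBEADM_ARGS="--network-plugin=cni --cloud-provider=aws"

# Reload and restart the kubelet so it picks up the new flag
sudo systemctl daemon-reload
sudo systemctl restart kubelet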

Hope this can be useful.

I saw that exact issue with a StatefulSet Pod for a Concourse worker. The Pod was stuck pending in an Init state; as it turned out, it wasn't able to mount the PV it had a PVC for.

What I did was:

kubectl delete pvc <id> -n <namespace>
kubectl delete pod concourse-worker-0 -n <namespace>

The StatefulSet recreated both objects and the Concourse worker Pod was able to start successfully.
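
To confirm the recovery, a couple of optional checks (namespace and names are placeholders):

# Watch the StatefulSet recreate the PVC and the worker Pod
kubectl get pvc -n <namespace>
kubectl get pods -n <namespace> -w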
