
Mounting a gcePersistentDisk Kubernetes volume is very slow

I start a Kubernetes replication controller. When the container in this replication controller's single pod has a gcePersistentDisk volume specified, the pod starts very slowly: after 5 minutes it is still in the Pending state.
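For reference, the replication controller spec has roughly this shape (the names, image, and pdName below are placeholders, not my actual values):

apiVersion: v1
kind: ReplicationController
metadata:
  name: app-1
spec:
  replicas: 1
  selector:
    app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app
        image: nginx               # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        gcePersistentDisk:
          pdName: my-data-disk     # placeholder; an existing GCE PD in the cluster's zone
          fsType: ext4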

kubectl get po will tell me:

NAME          READY     STATUS    RESTARTS   AGE
app-1-a4ni7   0/1       Pending   0          5m

Without the gcePersistentDisk, the pod is Running within 30 seconds at most.

(I am using a 10 GB Google Compute Engine persistent disk, and I know that these disks have lower performance at lower capacities, but I am not sure that is the issue.)

What could be the cause of this?

We've seen GCE PD attach calls take upwards of 10 minutes to complete, so this is more or less expected. For example, see https://github.com/kubernetes/kubernetes/issues/15382#issuecomment-153268655, where PD tests were timing out before GCE PD attach/detach calls could complete. We're working with the GCE team to improve performance and reduce latency.

If the pod never gets out of the Pending state, you might have hit a bug. In that case, grab your kubelet log and open an issue at https://github.com/kubernetes/kubernetes/
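Before filing an issue, checking the pod's events can show whether the attach call is merely slow or actually failing. The pod name below is just the one from the question:

kubectl describe pod app-1-a4ni7

The Events section at the bottom of that output records scheduling, attach, and mount activity, including any errors.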

In my experience, using PersistentVolumeClaims works much faster; you can destroy and recreate replication controllers almost instantly.

See: http://kubernetes.io/v1.1/docs/user-guide/persistent-volumes/README.html
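A minimal sketch of that setup, assuming an existing 10 GB GCE PD (all names below are placeholders): an admin-provisioned PersistentVolume wraps the disk, a PersistentVolumeClaim binds to it, and the pod template references the claim instead of the disk directly.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk       # placeholder; must be an existing PD in the cluster's zone
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

In the replication controller's pod template, the volumes section then becomes:

volumes:
- name: data
  persistentVolumeClaim:
    claimName: pd-claim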
