
Kubernetes PersistentVolume does not show the real capacity

I have a PersistentVolume in my cluster (an Azure disk) with a capacity of 8Gi. I resized the underlying disk to 9Gi, then changed my PV YAML to 9Gi as well (since it is not updated automatically), and everything worked fine.

Then, as a test, I changed my PV's YAML to 1000Gi (expecting to see an error) and received this error from the PVC that claims the PV: "NodeExpand failed to expand the volume: rpc error: code = Internal desc = resize requested for 10, but after resizing volume size was 9". However, `kubectl get pv` still shows the PV capacity as 1000Gi (and of course in Azure it is still 9Gi, since I did not resize it there). Any advice?

As a general rule: you should not have to change anything on your PersistentVolumes.

When you request more space by editing a PersistentVolumeClaim, a controller (either a CSI driver or an in-tree driver in kube-controller-manager) applies that change against your storage provider (Ceph, AWS, Azure, ...).
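As a concrete sketch of that workflow (the StorageClass and PVC names below are placeholders, and `allowVolumeExpansion: true` is required on the StorageClass for PVC resizes to be accepted):

```yaml
# StorageClass must permit expansion for PVC-driven resizes to work.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi          # placeholder name
provisioner: disk.csi.azure.com
allowVolumeExpansion: true
---
# Request more space by raising the PVC's storage request only;
# the controller grows the Azure disk and updates the PV for you.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc             # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 9Gi           # was 8Gi; edit only this field
```

Note that the edit goes on the claim's `spec.resources.requests.storage`, never on the PV's `spec.capacity`.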

Once the backend volume has been expanded, that same controller updates the corresponding PV. At that point, you may (or may not) have to restart the Pods attached to the volume for its filesystem to be grown.
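You can watch that process from the claim side instead of touching the PV. These commands are illustrative (the PVC and Pod names are placeholders), and they require a live cluster:

```shell
# Raise the request on the claim, then watch its status.
kubectl patch pvc data-pvc \
  -p '{"spec":{"resources":{"requests":{"storage":"9Gi"}}}}'

# CAPACITY updates only once the controller finishes expanding.
kubectl get pvc data-pvc

# The Conditions section shows e.g. FileSystemResizePending
# when a pod restart is needed to grow the filesystem.
kubectl describe pvc data-pvc

# If the filesystem resize is pending, recreate the attached pod.
kubectl delete pod <pod-using-the-pvc>
```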

While I'm not certain how to fix the specific error you saw, one way to avoid such errors is to refrain from editing PVs directly and let the controller reconcile them from the PVC.

