How do I create a persistent volume claim with ReadWriteMany in GKE?
What is the best way to create a persistent volume claim with ReadWriteMany, attaching the volume to multiple pods?
Based on the support table in https://kubernetes.io/docs/concepts/storage/persistent-volumes , GCEPersistentDisk does not support ReadWriteMany natively.
What is the best approach when working in the GCP GKE world? Should I be using a clustered file system such as CephFS or GlusterFS? Are there recommendations on what I should be using that is production-ready?
I was able to get an NFS pod deployment configured following the steps here - https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266 - however it seems a bit hacky and adds another layer of complexity. It also seems to only allow one replica (which makes sense, as the disk can't be mounted multiple times), so if/when the pod goes down, my persistent storage will as well.
I agree that it's disappointing, but it's a consequence of the use of persistent disk, which does not permit attaching to multiple instances read-write.
I've had success with NFS, with the limitations you describe.
You could, as you state, use Gluster or similar too.
A more expensive, albeit managed, Google Cloud alternative is Cloud Filestore: https://cloud.google.com/filestore/docs/accessing-fileshares
Your question suggests that you need NFS-like semantics but, if you don't, you may consider using Google Cloud Storage.
It's possible now with Cloud Filestore.
First, create a Filestore instance.
gcloud filestore instances create nfs-server \
    --project=[PROJECT_ID] \
    --zone=us-central1-c \
    --tier=STANDARD \
    --file-share=name="vol1",capacity=1TB \
    --network=name="default",reserved-ip-range="10.0.0.0/29"
Then create a persistent volume in GKE.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  # must match the storageClassName requested by the claim so the PVC binds
  storageClassName: "fileserver"
  capacity:
    storage: 1T
  accessModes:
    - ReadWriteMany
  nfs:
    path: /vol1
    server: [IP_ADDRESS]
[IP_ADDRESS] is available in the Filestore instance details (for example, via gcloud filestore instances describe).
You can now request a persistent volume claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "fileserver"
  resources:
    requests:
      storage: 100G
Finally, mount the volume in your pod.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx:latest
      volumeMounts:
        - mountPath: /workdir
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: fileserver-claim
        readOnly: false
The solution is detailed here: https://cloud.google.com/filestore/docs/accessing-fileshares
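Because the claim is ReadWriteMany, the same PVC can be mounted by several pods at once, which is the point of going through Filestore/NFS instead of GCEPersistentDisk. A minimal sketch of a multi-replica Deployment sharing the volume (the Deployment name, labels, and image here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-storage-demo
spec:
  replicas: 3               # all three pods mount the same NFS share
  selector:
    matchLabels:
      app: shared-storage-demo
  template:
    metadata:
      labels:
        app: shared-storage-demo
    spec:
      containers:
        - name: app
          image: nginx:latest
          volumeMounts:
            - mountPath: /workdir
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: fileserver-claim
```

Every replica sees the same files under /workdir, and pods can be rescheduled or scaled without the single-replica limitation of the in-cluster NFS-server approach.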
In case you are already using Terraform to manage your GKE cluster, you can also use a Terraform module to handle nfs-server creation and management for you. Personally, I found this very handy in my situation, as it's cost-effective and I don't have to create the nfs-server manually.
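If you'd rather declare the Filestore instance directly than pull in a module, a sketch using the google provider's google_filestore_instance resource is below. The values mirror the gcloud command earlier in this answer; the resource label "nfs" and the output name are arbitrary, and depending on your provider version the location argument may instead be called zone:

```hcl
resource "google_filestore_instance" "nfs" {
  name     = "nfs-server"
  location = "us-central1-c"
  tier     = "STANDARD"

  file_shares {
    name        = "vol1"
    capacity_gb = 1024
  }

  networks {
    network           = "default"
    modes             = ["MODE_IPV4"]
    reserved_ip_range = "10.0.0.0/29"
  }
}

# The NFS server IP to plug into the PersistentVolume's spec.nfs.server
output "filestore_ip" {
  value = google_filestore_instance.nfs.networks[0].ip_addresses[0]
}
```

Wiring the output into the PersistentVolume manifest avoids copying the IP by hand from the instance details.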