Mounting Rook Volumes in Google Kubernetes Engine (GKE)
I've been delving into Rook+Ceph for Kubernetes, trying to get it to work under Google Kubernetes Engine, and have hit a brick wall.
Following the documentation, I've run the following commands, and verified that each has had the intended effect:
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f filesystem.yaml
The yaml files are all defaults as provided by Rook, with the exception of operator.yaml, wherein I've added the following environment variable, as per the GKE docs:
- name: FLEXVOLUME_DIR_PATH
value: "/home/kubernetes/flexvolume"
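For context, this variable goes into the `env:` list of the rook-ceph-operator container in operator.yaml. A rough, abbreviated sketch of where it lands (the image tag and surrounding fields are assumptions, not copied from my actual file):

```yaml
# Excerpt of operator.yaml (abbreviated sketch), showing where the
# FLEXVOLUME_DIR_PATH variable is added per the GKE docs:
spec:
  template:
    spec:
      containers:
      - name: rook-ceph-operator
        image: rook/ceph:v1.0.1   # version tag is an assumption
        env:
        # Added for GKE, whose kubelet looks for flexvolume drivers
        # under /home/kubernetes/flexvolume instead of the default path:
        - name: FLEXVOLUME_DIR_PATH
          value: "/home/kubernetes/flexvolume"
```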
I am unsure where to go from here, however. Their documentation has a sample file for creating a registry, which leads down a rabbit-hole of instructions that appear unrelated to what I'm trying to achieve.
Essentially, I want to be able to mount Ceph shared file storage as a volume from a regular ol' container image. I've tried the following yaml with no success:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  namespace: default
spec:
  containers:
  - name: ubuntu
    image: ubuntu:bionic
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: image-store
      mountPath: /mnt/shared
  volumes:
  - name: image-store
    flexVolume:
      driver: ceph.rook.io/rook
      fsType: ceph
      options:
        fsName: myfs
        clusterNamespace: rook-ceph
  restartPolicy: Always
In my fairly limited knowledge, I can't see why this wouldn't work, but I am getting the following output when doing kubectl describe pods ubuntu:
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/10ee80d1-8873-11e9-8f45-42010a8a01d8/volumes/ceph.rook.io~rook/image-store --scope -- mount -t ceph -o name=admin,secret=AQDAM/lcQFKHARAAh/O27Wl+iRKRzREsMML+4g==,mds_namespace=myfs 10.8.3.134:6789,10.8.13.226:6789,10.8.5.39:6789:/ /var/lib/kubelet/pods/10ee80d1-8873-11e9-8f45-42010a8a01d8/volumes/ceph.rook.io~rook/image-store
Output: Running scope as unit: run-reb61186d1ff64e6e846a200580aa5395.scope
mount: /var/lib/kubelet/pods/10ee80d1-8873-11e9-8f45-42010a8a01d8/volumes/ceph.rook.io~rook/image-store: special device 10.8.3.134:6789,10.8.13.226:6789,10.8.5.39:6789:/ does not exist.
Warning FailedMount 51s (x2 over 2m53s) kubelet, gke-kubey-cluster-default-pool-6856b374-nb0c (combined from similar events): MountVolume.SetUp failed for volume "image-store" : mount command failed, status: Failure, reason: failed to mount filesystem myfs to /var/lib/kubelet/pods/10ee80d1-8873-11e9-8f45-42010a8a01d8/volumes/ceph.rook.io~rook/image-store with monitor 10.8.3.134:6789,10.8.13.226:6789,10.8.5.39:6789:/ and options [name=admin secret=AQDAM/lcQFKHARAAh/O27Wl+iRKRzREsMML+4g== mds_namespace=myfs]: mount failed: exit status 32
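For what it's worth, `mount: exit status 32` means the kernel-level mount itself failed, so before blaming the pod spec it may be worth confirming that the filesystem and its MDS daemons are actually up. A few diagnostic commands I've been using (these assume the default `rook-ceph` namespace and that the rook-ceph-tools toolbox pod is deployed; adjust names to your install):

```shell
# Check that the MDS pods backing the filesystem are running
kubectl -n rook-ceph get pods -l app=rook-ceph-mds

# Inspect the CephFilesystem custom resource for status/errors
kubectl -n rook-ceph get cephfilesystem myfs -o yaml

# From the toolbox pod (if deployed), confirm the filesystem exists
# and check overall cluster health
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs ls
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```

If `ceph fs ls` doesn't list `myfs`, the problem is on the Rook/Ceph side rather than in the pod spec.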
Is there an example of how to achieve such a thing somewhere in the wild?
Make sure you fulfill every single prerequisite for the Rook setup: rook_setup.

Then try to install nfs-common on all your nodes.
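Note that on GKE this only works if your node pool uses the Ubuntu node image; the default Container-Optimized OS does not let you apt-get install packages. One way to install the package on every Ubuntu node without SSHing to each by hand is a loop like the following sketch (the zone is a placeholder; substitute your own):

```shell
# Install nfs-common on every node in the cluster (Ubuntu node image only).
# --zone is a placeholder; replace with your node pool's zone.
for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  gcloud compute ssh "$node" --zone us-central1-a \
    --command 'sudo apt-get update && sudo apt-get install -y nfs-common'
done
```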