
Can't keep data on my persistent volume in Kubernetes (Google Cloud)

I have a Redis pod on my Kubernetes cluster on Google Cloud. I have created the PV and the claim:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: my-size 
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: redis-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: my-size

I also mounted it in my deployment.yaml:

    volumeMounts:
    - mountPath: /data
      name: redis-pv-claim
  volumes:
  - name: redis-pv-claim
    persistentVolumeClaim:
      claimName: redis-pv-claim

I can't see any errors while running describe pod:

Volumes:
  redis-pv-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-pv-claim
    ReadOnly:   false

But it just can't save any keys. After every deployment, the "/data" folder is just empty.

Update: my NFS is active now, but I still can't keep data.

Output of describe pvc:


Namespace:     my namespace 
StorageClass:  nfs-client
Status:        Bound
Volume:        pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: nfs-client
               volume.beta.kubernetes.io/storage-provisioner: cluster.local/ext1-nfs-client-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    my grafana pod
Events:        <none>

Describe pod gives me an error, though:


Warning  FailedMount  18m   kubelet, gke-devcluster-pool-1-36e6a393-rg7d  MountVolume.SetUp failed for volume "pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Output: Running scope as unit: run-ra5925a8488ef436897bd44d526c57841.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
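Exit status 32 from mount means the mount operation itself failed on the node. As a quick manual check (assuming nfs-common is installed on the node and 192.168.1.21 is reachable from it), you can verify the export by hand:

# List the exports the NFS server offers
$ showmount -e 192.168.1.21
# Retry the mount kubelet attempted, to see the full error message
$ sudo mkdir -p /mnt/nfs-test
$ sudo mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /mnt/nfs-test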

What is happening is that, when you have multiple nodes, using a hostPath PVC to share files between pods isn't the best approach.

A hostPath PVC can only share files between pods residing on the same node. So if you have multiple nodes, you may sometimes get the impression that your files aren't being stored properly.
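A quick way to confirm this is to check which node each pod landed on; with a hostPath volume, pods on different nodes see different /data directories:

$ kubectl get pods -o wide   # the NODE column shows where each pod is scheduled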

The ideal solution for you is to use a distributed file system (DFS). In your question you mention that you are using GCP, but it's not clear whether you are using GKE or whether you created your cluster on top of Compute Engine instances.

If you are using GKE, have you already checked this document? Please let me know.
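If you are on GKE and only a single pod needs the data, one straightforward alternative is to drop the hostPath PV entirely and let GKE provision a persistent disk dynamically. A minimal sketch, assuming the cluster's default standard StorageClass (backed by a GCE persistent disk) and an example 1Gi size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pv-claim
spec:
  storageClassName: standard  # GKE's default StorageClass (GCE PD)
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # example size; adjust as needed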

If you have access to your nodes, the easiest setup is to create an NFS server on one of your nodes and use nfs-client-provisioner to give your pods access to the NFS server.

I've been using this approach for quite a while now and it works really well.

1 - Install and configure NFS Server on my Master Node (Debian Linux, this might change depending on your Linux distribution):

Before installing the NFS Kernel server, we need to update our system's repository index:

$ sudo apt-get update

Now, run the following command in order to install the NFS Kernel Server on your system:

$ sudo apt install nfs-kernel-server

Create the Export Directory:

$ sudo mkdir -p /mnt/nfs_server_files

As we want all clients to access the directory, we will remove restrictive permissions from the export folder with the following commands (this may vary depending on your set-up and security policy):

$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files

Assign server access to the client(s) through the NFS export file:

$ sudo nano /etc/exports

Inside this file, add a new line to allow access from other servers to your share:

/mnt/nfs_server_files        10.128.0.0/24(rw,sync,no_subtree_check)

You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.
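For instance, some provisioners need the root user on the clients to own the files they create, which requires disabling root squashing on the export; whether you want that depends on your provisioner and security policy:

/mnt/nfs_server_files        10.128.0.0/24(rw,sync,no_subtree_check,no_root_squash)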

Export the shared directory and restart the service to make sure all configuration files are correct:

$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server

Check all active shares:

$ sudo exportfs
/mnt/nfs_server_files
                10.128.0.0/24

2 - Install NFS Client on all my Worker Nodes:

$ sudo apt-get update
$ sudo apt-get install nfs-common

At this point you can run a test to check whether you have access to your share from your worker nodes:

$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client

Notice that at this point you can use the name of your master node; K8s is taking care of the DNS here. Check that the volume mounted as expected and create some folders and files to make sure everything is working fine.

$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file

Go back to your master node and check whether these files are in the /mnt/nfs_server_files folder:
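A recursive listing should show them:

$ ls -lR /mnt/nfs_server_files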

3 - Install NFS Client Provisioner.

Install the provisioner using helm:

$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
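This is Helm 2 syntax. On Helm 3 the --name flag was removed and the release name is positional, so the equivalent command would look something like this (--create-namespace requires Helm 3.2+):

$ helm install ext stable/nfs-client-provisioner --namespace nfs --create-namespace --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files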

Notice that I've specified a namespace for it. Check that the provisioner pod is running:

$ kubectl get pods -n nfs
NAME                                         READY   STATUS      RESTARTS   AGE
ext-nfs-client-provisioner-f8964b44c-2876n   1/1     Running     0          84s

At this point we have a storageclass called nfs-client:

$ kubectl get storageclass -n nfs
NAME         PROVISIONER                                AGE
nfs-client   cluster.local/ext-nfs-client-provisioner   5m30s

We need to create a PersistentVolumeClaim:

$ more nfs-client-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs 
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f nfs-client-pvc.yaml

Check the status (Bound is expected):

$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5   1Mi        RWX            nfs-client     24s

4 - Create a simple pod to test whether we can read from and write to our NFS share:

Create a pod using this yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod0
  labels:
    env: test
  namespace: nfs  
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
$ kubectl apply -f pod.yaml

Let's list all mounted volumes on our pod:

$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem                                                                               Size  Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1   99G   11G   84G  11% /mnt

As we can see, we have an NFS volume mounted on /mnt. (It's important to notice the path: kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1.)

Let's check it:

root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov  5 08:33 .
drwxr-xr-x 1 root   root    4096 Nov  5 08:38 ..

It's empty. Let's create some files:

$ for i in 1 2; do touch file$i; done;
$ ls -l 
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov  5 08:58 .
drwxr-xr-x 1 root   root    4096 Nov  5 08:38 ..
-rw-r--r-- 1 nobody nogroup    0 Nov  5 08:58 file1
-rw-r--r-- 1 nobody nogroup    0 Nov  5 08:58 file2

Now let's see where these files are on our NFS server (the master node):

$ cd /mnt/nfs_server_files
$ ls -l 
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov  5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l 
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov  5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov  5 09:11 file2

And here are the files we just created inside our pod!

Please let me know if this solution helped you.
