
Kubernetes - Generate files on all the pods

I have a Java API which exports data to an Excel file and generates that file on the pod where the request is served. The next request (to download the file) might go to a different pod, and the download fails.

How do I get around this? How do I generate the file on all the pods? Or how do I make sure subsequent requests go to the same pod where the file was generated? I can't give out a direct pod URL, as it would not be accessible to clients.

Thanks.

You need to use a persistent volume to share the same files between your containers. You could use the node's storage mounted into the containers (the easiest way), or another distributed file system such as NFS, EFS (AWS), GlusterFS, etc.

If you need the simplest way to share the file and your pods are on the same node, you can use a hostPath volume to store the file and share that volume with the other containers.

Assuming you have a Kubernetes cluster with only one node, and you want to share the path /mnt/data of that node with your pods:

Create a PersistentVolume:

A hostPath PersistentVolume uses a file or directory on the node to emulate network-attached storage.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Create a PersistentVolumeClaim:

Pods use PersistentVolumeClaims to request physical storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Look at the PersistentVolumeClaim:

kubectl get pvc task-pv-claim

The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.

NAME            STATUS    VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound     task-pv-volume   10Gi       RWO           manual         30s

Create a Deployment, for example with 2 replicas:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/mnt/data"
              name: task-pv-storage

Now you can check inside both containers that the path /mnt/data contains the same files.
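For example, a quick check (the pod names below are placeholders; use the names returned by kubectl get pods for your own Deployment):

kubectl get pods -l app=nginx

# create a file from the first pod
kubectl exec nginx-xxxxxxxxxx-aaaaa -- touch /mnt/data/report.xlsx

# the same file is visible from the second pod
kubectl exec nginx-xxxxxxxxxx-bbbbb -- ls /mnt/data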

If you have a cluster with more than one node, I recommend you look at the other types of persistent volumes.
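For instance, here is a minimal sketch of an NFS-backed PersistentVolume, assuming you already have an NFS server reachable from the cluster (the server address 10.0.0.10 and export path /exports/data below are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: nfs
  capacity:
    storage: 10Gi
  # ReadWriteMany allows pods on different nodes to mount the same volume
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10   # placeholder NFS server address
    path: /exports/data # placeholder export path

A matching PersistentVolumeClaim with accessModes: ReadWriteMany and storageClassName: nfs could then be referenced from the Deployment in the same way task-pv-claim is above.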

References: Configure persistent volumes, Persistent volumes, Volume types
