
Kubernetes PetSet - FailedCreate of persistent volume

I'm trying to set up a Kubernetes PetSet as described in the documentation. When I create the PetSet, I can't seem to get the Persistent Volume Claim to bind to the persistent volume. Here is my YAML file for defining the PetSet:

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: 'ml-nodes'
spec:
  serviceName: "ml-service"
  replicas: 1
  template:
    metadata:
      labels:
        app: marklogic
        tier: backend
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
        - name: 'ml'
          image: "192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1"
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
              name: ml8000
              protocol: TCP
            - containerPort: 8001
              name: ml8001
            - containerPort: 7997
              name: ml7997
            - containerPort: 8002
              name: ml8002
            - containerPort: 8040
              name: ml8040
            - containerPort: 8041
              name: ml8041
            - containerPort: 8042
              name: ml8042
          volumeMounts:
            - name: ml-data
              mountPath: /data/vol-data
          lifecycle:
            preStop:
              exec:
                # SIGTERM triggers a quick exit; gracefully terminate instead
                command: ["/etc/init.d/MarkLogic stop"]
      volumes:
        - name: ml-data
          persistentVolumeClaim:
            claimName: ml-data 
      terminationGracePeriodSeconds: 30
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi

If I do a 'describe' on my created PetSet, I see the following:

Name:           ml-nodes
Namespace:      default
Image(s):       192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1
Selector:       app=marklogic,tier=backend
Labels:         app=marklogic,tier=backend
Replicas:       1 current / 1 desired
Annotations:        <none>
CreationTimestamp:  Tue, 20 Sep 2016 13:23:14 -0400
Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen LastSeen    Count   From        SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----        -------------   --------    ------          -------
  33m       33m     1   {petset }           Warning     FailedCreate        pvc: ml-data-ml-nodes-0, error: persistentvolumeclaims "ml-data-ml-nodes-0" not found
  33m       33m     1   {petset }           Normal      SuccessfulCreate    pet: ml-nodes-0

I'm trying to run this in a minikube environment on my local machine. Not sure what I'm missing here?

There is an open issue on minikube for this. Persistent volume provisioning support appears to be unfinished in minikube at this time.

For it to work with local storage, it needs the following flag on the controller manager, and that isn't currently enabled on minikube.

--enable-hostpath-provisioner[=false]: Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.

Reference: http://kubernetes.io/docs/admin/kube-controller-manager/
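
Until provisioning works in minikube, a common workaround for local testing is to create a hostPath PersistentVolume by hand so that the claim generated from the volumeClaimTemplate has something to bind to. This is only a minimal sketch, assuming the generated claim is ml-data-ml-nodes-0 (as in the event above) and that a directory like /data/ml-0 exists inside the minikube VM; the PV name, the path, and the annotation-based matching are assumptions on my part, not something taken from the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-pv-0            # placeholder name
  annotations:
    # mirrors the claim's alpha storage-class annotation; matching behavior
    # can differ between Kubernetes versions
    volume.alpha.kubernetes.io/storage-class: anything
spec:
  capacity:
    storage: 2Gi                # at least the 2Gi the claim requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/ml-0            # assumed directory inside the minikube VM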

For local development/testing, it would work if you were to use hack/local_up_cluster.sh to start a local cluster, after setting an environment variable:

export ENABLE_HOSTPATH_PROVISIONER=true 
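
Roughly, after the export above, you would start the cluster from the root of a Kubernetes source checkout and check that it is up (script path as given in the answer; the kubectl check is just one way to verify):

# run from the root of a Kubernetes source checkout, after the export above
hack/local_up_cluster.sh

# in a second terminal, once the cluster is up:
kubectl get nodes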

You should be able to use PetSets with the latest release of minikube, since it uses Kubernetes v1.4.1 as its default version.
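
To check which Kubernetes version a running minikube cluster reports, or to pin it explicitly when starting minikube (treat the --kubernetes-version flag as an assumption about your minikube release):

# pin the cluster version when starting minikube (flag availability may vary)
minikube start --kubernetes-version=v1.4.1

# report client and server versions for the running cluster
kubectl version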
