
Unable to create Persistent Volume Claim for PetSet on CoreOS

Trying to set up PetSet using Kube-Solo

In my local dev environment, I have set up Kube-Solo with CoreOS. I'm trying to deploy a Kubernetes PetSet that includes a Persistent Volume Claim Template as part of the PetSet configuration. This configuration fails and none of the pods are ever started. Here is my PetSet definition:

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: 'marklogic'
          image: {ip address of repo}:5000/dcgs-sof/ml8-docker-final:v1
          imagePullPolicy: Always
          command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
          ports:
            - containerPort: 7997
              name: health-check
            - containerPort: 8000
              name: app-services
            - containerPort: 8001
              name: admin
            - containerPort: 8002
              name: manage
            - containerPort: 8040
              name: sof-sdl
            - containerPort: 8041
              name: sof-sdl-xcc
            - containerPort: 8042
              name: ml8042
            - containerPort: 8050
              name: sof-sdl-admin
            - containerPort: 8051
              name: sof-sdl-cache
            - containerPort: 8060
              name: sof-sdl-camel
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          lifecycle:
            preStop:
              exec:
                command: ["/etc/init.d/MarkLogic stop"]
          volumeMounts:
            - name: ml-data
              mountPath: /var/opt/MarkLogic 
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 1Gi

In the Kubernetes dashboard, I see the following error message:

SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "ml-data-marklogic-0", which is unexpected.

It seems that being unable to create the Persistent Volume Claim is also preventing the image from ever being pulled from my local repository. Additionally, the Kubernetes Dashboard shows the request for the Persistent Volume Claims, but their state stays "Pending" indefinitely. I have verified the issue is with the Persistent Volume Claim: if I remove it from the PetSet configuration, the deployment succeeds.

I should note that I was using Minikube prior to this and would see the same message, but once the image was pulled and the pod(s) started, the claim would take hold and the message would go away.

I am using:

  • Kubernetes version: 1.4.0
  • Docker version: 1.12.1 (on my Mac) & 1.10.3 (inside the CoreOS VM)
  • Corectl version: 0.2.8
  • Kube-Solo version: 0.9.6

I am not familiar with kube-solo.

However, the issue here might be that you are attempting to use dynamic volume provisioning, a beta feature that does not have specific support for the volume types available in your environment.

The best way around this would be to manually create the PersistentVolumes that the claims expect to find, so that each PersistentVolumeClaim can bind to one.
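
For example (a sketch only, not from the original question): a hostPath PersistentVolume such as the one below could satisfy a 1Gi ReadWriteMany claim. The volume name and hostPath path are placeholders, the beta storage-class annotation is an assumption about how the claim's class gets matched (alternatively, drop the storage-class annotation from the claim template if you are creating volumes by hand), and you would need one such volume per replica, since the PetSet creates one claim per pod (ml-data-marklogic-0, ml-data-marklogic-1, and so on).

kind: PersistentVolume
apiVersion: v1
metadata:
  # placeholder name; any unique name works
  name: ml-data-pv-0
  annotations:
    # assumed: should correspond to the storage class the claim template asks for
    volume.beta.kubernetes.io/storage-class: anything
spec:
  capacity:
    storage: 1Gi          # at least as large as the claim's request
  accessModes:
    - ReadWriteMany       # must cover the claim's requested access mode
  hostPath:
    path: /mnt/ml-data-0  # placeholder directory inside the CoreOS VM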

The same error happened to me, and I found clues about the following config (combining volumeClaimTemplates with a StorageClass) in the Kubernetes Slack group and in this pull request:

volumeClaimTemplates:
  - metadata:
      name: cassandra-data
      annotations:
        volume.beta.kubernetes.io/storage-class: standard
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path
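
If I read that correctly, the idea is that the claim's volume.beta.kubernetes.io/storage-class: standard annotation refers to the StorageClass named standard, and because that class is backed by the kubernetes.io/host-path provisioner (and marked as the default class), Kubernetes should create a hostPath PersistentVolume for each claim automatically. Note that host-path provisioning is only suitable for single-node test clusters such as Kube-Solo or Minikube, and the controller manager typically has to allow it (the --enable-hostpath-provisioner flag), so treat this as a development-only setup.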
