Error: pod has unbound immediate PersistentVolumeClaims
I am trying to run Kafka with kubeless, but I get the error "pod has unbound immediate PersistentVolumeClaims". I have created a persistent volume using Rook and Ceph and am trying to use that persistent volume with kubeless Kafka. However, when I run it I get "pod has unbound persistent volume claims".
What am I doing wrong here?
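(As a first diagnostic — these are generic kubectl checks, not commands from the original post — it helps to confirm whether the claims ever bind and whether the rook-block StorageClass is actually registered:)

# Are the PVCs Bound or stuck Pending?
kubectl get pvc -n kubeless

# Does the rook-block StorageClass exist, and is any class marked (default)?
kubectl get storageclass

# Inspect a stuck claim for provisioning errors
kubectl describe pvc datadir-kafka-0 -n kubeless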
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir
  labels:
    kubeless: kafka
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  labels:
    kubeless: zookeeper
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: kubeless/kafka-trigger-controller:v1.0.2
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
  volumeClaimTemplates:
  - metadata:
      name: zookeeper
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper
vagrant@ubuntu-xenial:~/infra/ansible/scripts/kubeless-kafka-trigger$ kubectl get pod -n kubeless
NAME                                           READY   STATUS    RESTARTS   AGE
kafka-0                                        0/1     Pending   0          8m44s
kafka-trigger-controller-7cbd54b458-pccpn      1/1     Running   0          8m47s
kubeless-controller-manager-5bcb6757d9-nlksd   3/3     Running   0          3h34m
zoo-0                                          0/1     Pending   0          8m42s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ---                 ----               -------
  Warning  FailedScheduling  45s (x10 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
kubectl describe pod kafka-0 -n kubeless
Name:               kafka-0
Namespace:          kubeless
Priority:           0
Node:               <none>
Labels:             controller-revision-hash=kafka-c498d7f6
                    kubeless=kafka
                    statefulset.kubernetes.io/pod-name=kafka-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/kafka
Init Containers:
  volume-permissions:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chmod -R g+rwX /bitnami
    Environment:  <none>
    Mounts:
      /bitnami/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Containers:
  broker:
    Image:      bitnami/kafka:1.1.0-r0
    Port:       9092/TCP
    Host Port:  0/TCP
    Liveness:   tcp-socket :9092 delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KAFKA_ADVERTISED_HOST_NAME:  broker.kubeless
      KAFKA_ADVERTISED_PORT:       9092
      KAFKA_PORT:                  9092
      KAFKA_DELETE_TOPIC_ENABLE:   true
      KAFKA_ZOOKEEPER_CONNECT:     zookeeper.kubeless:2181
      ALLOW_PLAINTEXT_LISTENER:    yes
    Mounts:
      /bitnami/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-kafka-0
    ReadOnly:   false
  default-token-wj8vx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wj8vx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ---                 ----               -------
  Warning  FailedScheduling  45s (x10 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
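(A note on the output above: the pod is waiting on the claim datadir-kafka-0, which was generated by the StatefulSet's volumeClaimTemplates, not the hand-created datadir PVC. A template with no storageClassName falls back to the cluster's default StorageClass, and if no class is marked as default the claim stays Pending forever. A minimal sketch of that alternative fix, assuming rook-block is the class Rook created, would be to name the class in the template:)

volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    storageClassName: rook-block  # without this, the default StorageClass (if any) is used
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi

(The accepted answer below takes the other route: pre-created PVCs referenced via volumes.)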
I got it working. For anyone who faces the same problem, this may be useful.
This uses rook-ceph storage with kubeless Kafka:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kafka
  namespace: kubeless
  labels:
    kubeless: kafka
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  namespace: kubeless
  labels:
    kubeless: zookeeper
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: kubeless/kafka-trigger-controller:v1.0.2
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: kafka
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: kafka
      volumes:
      - name: kafka
        persistentVolumeClaim:
          claimName: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      volumes:
      - name: zookeeper
        persistentVolumeClaim:
          claimName: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper
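(After applying the manifest above, binding can be verified with standard kubectl commands; these are generic checks, not part of the original answer:)

# Both PVCs should report STATUS Bound once rook-block provisions them
kubectl get pvc -n kubeless

# kafka-0 and zoo-0 should leave Pending and reach Running
kubectl get pods -n kubeless -w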
Got the same error in my minikube. I had forgotten to create volumes for my StatefulSets.
I created a PVC. You need to pay attention to storageClassName and check which classes are available (I did it in the dashboard).
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "XXXX",
    "namespace": "kube-public",
    "labels": {
      "kubeless": "XXXX"
    }
  },
  "spec": {
    "storageClassName": "hostpath",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}
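(The availability check can also be done from the CLI instead of the dashboard; pvc.json is a hypothetical filename standing in for the JSON above:)

# List available StorageClasses; the default one is marked "(default)"
kubectl get storageclass

# Create the claim and confirm it binds
kubectl apply -f pvc.json
kubectl get pvc -n kube-public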
That gave me persistent volumes. Then I edited the StatefulSet:
"volumes": [
{
"name": "XXX",
"persistentVolumeClaim": {
"claimName": "XXX"
}
}
Added "persistentVolumeClaim" attribute, dropped pod, waited until new pod created.添加了“persistentVolumeClaim”属性,删除了 pod,等待新的 pod 创建。