
Strimzi Kafka on a local bare-metal Kubernetes cluster

I have a Kubernetes cluster running on multiple local (bare metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my configuration.

I tried to follow the tutorial on the quickstart page: https://strimzi.io/docs/quickstart/master/
My ZooKeeper pods got stuck in Pending at step 2.4, "Creating a cluster":

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims

I usually use hostPath for my volumes, so I don't understand what's going on here...
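
For context, the kind of hostPath PersistentVolume I usually create looks roughly like this (name, path and size are just examples):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volumes/example   # directory that already exists on the node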

EDIT: I tried to create a StorageClass using Arghya Sadhu's commands, but the problem is still there.
The description of my PVC:

kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding

And my pod:

kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2

    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

You need to have a PersistentVolume fulfilling the constraints of the PersistentVolumeClaim.

Use local storage by creating a local StorageClass:

$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
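
A kubernetes.io/no-provisioner class does not provision anything by itself, so a local PersistentVolume matching the class still has to be created manually. A minimal sketch (PV name, size, path and node name are placeholders for your environment):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/zookeeper    # must already exist on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node-1       # replace with a real node name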

You need to configure a default StorageClass in your cluster so that the PersistentVolumeClaim can get its storage from it:

$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
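
Alternatively, instead of relying on a default class, the Strimzi Kafka resource can reference the class explicitly through persistent-claim storage. A sketch of the relevant fragment (size is a placeholder), applicable to both spec.kafka.storage and spec.zookeeper.storage:

    storage:
      type: persistent-claim
      size: 10Gi
      class: local-storage
      deleteClaim: false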

Yeah, it sounds to me like something is missing on the Kubernetes side at the infrastructure level. You should provide PersistentVolumes that can be statically bound to the PVCs or, as Arghya already mentioned, provide StorageClasses for dynamic provisioning.
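
Once matching PersistentVolumes exist (or a default class can provision them), the claim should bind and the pod should schedule; you can check with (namespace and claim name taken from the question):

$ kubectl get pv
$ kubectl get pvc -n my-kafka-project
$ kubectl describe pvc data-my-cluster-zookeeper-0 -n my-kafka-project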

In my case I was creating the Kafka cluster in another namespace, my-cluster-kafka, but the Strimzi operator was in the namespace kafka.

So I just created it in the same namespace. For test purposes I use ephemeral storage.

Here is the kafka.yaml:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
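
I apply it in the namespace where the operator runs (kafka in my case) and wait for the cluster to become ready, as in the quickstart:

$ kubectl apply -f kafka.yaml -n kafka
$ kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka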
