I am trying to convert my dockerised application, which tests some Kafka functionality, into a Kubernetes deployment file.
The docker command that runs the container as expected is:
docker run --name consumer-1 --network="host" -dt 56d57e1538d3 pizzaapp_multiconsumer1.py bash
However, when I convert it to the Kubernetes deployment file below and apply it, the pods go into CrashLoopBackOff.
spec:
  hostNetwork: true
  containers:
  - name: kafka-consumer
    image: bhuvidockerhub/kafkaproject:v1.0
    imagePullPolicy: IfNotPresent
    args: ["pizzaapp_multiconsumer1.py", "bash"]
  imagePullSecrets:
  - name: regcred
On checking the logs of the failed pods, I see this error:
Traceback (most recent call last):
File "//pizzaapp_multiconsumer1.py", line 12, in <module>
multiconsume_pizza_messages()
File "/testconsumer1.py", line 14, in multiconsume_pizza_messages
kafka_admin_client: KafkaAdminClient = KafkaAdminClient(
File "/usr/local/lib/python3.9/site-packages/kafka/admin/client.py", line 208, in __init__
self._client = KafkaClient(metrics=self._metrics,
File "/usr/local/lib/python3.9/site-packages/kafka/client_async.py", line 244, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/usr/local/lib/python3.9/site-packages/kafka/client_async.py", line 900, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
But the broker pods are already up and running:
my-cluster-with-metrics-entity-operator-7d8894b79f-99fwt 3/3 Running 181 27d
my-cluster-with-metrics-kafka-0 1/1 Running 57 19d
my-cluster-with-metrics-kafka-1 1/1 Running 5 19h
my-cluster-with-metrics-kafka-2 1/1 Running 0 27m
my-cluster-with-metrics-kafka-exporter-568968bd5c-mrg7f 1/1 Running 108 27d
and the corresponding services are also present:
my-cluster-with-metrics-kafka-bootstrap ClusterIP 10.98.78.168 <none> 9091/TCP,9100/TCP 27d
my-cluster-with-metrics-kafka-brokers ClusterIP None <none> 9090/TCP,9091/TCP,9100/TCP 27d
my-cluster-with-metrics-kafka-external-0 NodePort 10.110.196.75 <none> 9099:30461/TCP 27d
my-cluster-with-metrics-kafka-external-1 NodePort 10.107.225.187 <none> 9099:32310/TCP 27d
my-cluster-with-metrics-kafka-external-2 NodePort 10.103.99.151 <none> 9099:31950/TCP 27d
my-cluster-with-metrics-kafka-external-bootstrap NodePort 10.98.131.151 <none> 9099:31248/TCP 27d
And I have port-forwarded the service port so that the brokers can be reached:
kubectl port-forward svc/my-cluster-with-metrics-kafka-external-bootstrap 9099:9099 -n kafka
With the port-forward in place, the docker command runs as expected.
But in K8s, even after adding bash to the args, it still reports that no brokers are available.
Can anyone suggest what changes I should make to the deployment file so that it works exactly like the successful docker run above?
If an application is deployed in K8s, port forwarding is not needed, since there is nothing to expose outside the cluster. When we run things inside K8s, we normally do not access services via localhost: inside a pod, localhost refers to the pod's own container. To resolve the issue, I completely removed the localhost reference from the bootstrap server setting and replaced it with the external bootstrap service address [10.XXX:9099], then re-applied the K8s deployment file. After that, the producer and consumer pods came up successfully, and this resolved the issue.
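A minimal sketch of the change on the consumer side, assuming the script builds its client roughly like this (the env-var name, helper function, and default service address are illustrative, not from the original code):

```python
import os

def resolve_bootstrap_servers(
    default="my-cluster-with-metrics-kafka-external-bootstrap.kafka.svc:9099",
):
    # Prefer an env var so the same image works both inside and outside the
    # cluster; fall back to the in-cluster Service DNS name, never localhost,
    # because localhost inside a pod is the pod itself.
    return os.environ.get("KAFKA_BOOTSTRAP_SERVERS", default)

# The consumer would then create its client with something like:
# admin = KafkaAdminClient(bootstrap_servers=resolve_bootstrap_servers())
```

The env var can then be set per environment in the Deployment's `env:` section, which avoids hard-coding the broker address into the image.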