Spark in Docker container does not read Kafka input - Structured Streaming
When the Spark job is run locally without Docker via spark-submit, everything works fine. However, running it in a Docker container produces no output.

To check whether Kafka itself was working, I extracted Kafka onto the Spark worker container and made a console consumer listen on the same host, port, and topic (kafka:9092, crypto_topic); it worked correctly and showed output. (There is a producer in another container constantly pushing data to the topic.)
Expected -
20/09/11 17:35:27 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.29.10:42565 with 366.3 MB RAM, BlockManagerId(driver, 192.168.29.10, 42565, None)
20/09/11 17:35:27 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.29.10, 42565, None)
20/09/11 17:35:27 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.29.10, 42565, None)
-------------------------------------------
Batch: 0
-------------------------------------------
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
|name_coin|symbol_coin|number_of_markets|volume|market_cap|total_supply|price|percent_change_24hr|timestamp|
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
...
...
...
followed by more output
Actual -
20/09/11 14:49:44 INFO BlockManagerMasterEndpoint: Registering block manager d7443d94165c:46203 with 366.3 MB RAM, BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
no more output, stuck here
docker-compose.yml file:
version: "3"

services:
  zookeeper:
    image: zookeeper:3.6.1
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    networks:
      - crypto-network

  kafka:
    image: wurstmeister/kafka:2.13-2.6.0
    container_name: kafka
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_PORT=9092
      # topic-name:partitions:in-sync-replicas:cleanup-policy
      - KAFKA_CREATE_TOPICS="crypto_topic:1:1:compact"
    networks:
      - crypto-network

  kafka-producer:
    image: python:3-alpine
    container_name: kafka-producer
    command: >
      sh -c "pip install -r /usr/src/producer/requirements.txt
      && python3 /usr/src/producer/kafkaProducerService.py"
    volumes:
      - ./kafkaProducer:/usr/src/producer
    networks:
      - crypto-network

  cassandra:
    image: cassandra:3.11.8
    container_name: cassandra
    hostname: cassandra
    ports:
      - "9042:9042"
    #command:
    #  cqlsh -f /var/lib/cassandra/cql-queries.cql
    volumes:
      - ./cassandraData:/var/lib/cassandra
    networks:
      - crypto-network

  spark-master:
    image: bde2020/spark-master:2.4.5-hadoop2.7
    container_name: spark-master
    hostname: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
      - "6066:6066"
    networks:
      - crypto-network

  spark-consumer-worker:
    image: bde2020/spark-worker:2.4.5-hadoop2.7
    container_name: spark-consumer-worker
    environment:
      - SPARK_MASTER=spark://spark-master:7077
    ports:
      - "8081:8081"
    volumes:
      - ./sparkConsumer:/sparkConsumer
    networks:
      - crypto-network

networks:
  crypto-network:
    driver: bridge
spark-submit is run after attaching to the worker container with

docker exec -it spark-consumer-worker bash
/spark/bin/spark-submit --master $SPARK_MASTER --class processing.SparkRealTimePriceUpdates \
--packages com.datastax.spark:spark-cassandra-connector_2.11:2.4.3,org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 \
/sparkConsumer/sparkconsumer_2.11-1.0-RELEASE.jar
Relevant parts of the Spark code:
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "crypto_topic")
  .load()

...

val queryPrice: StreamingQuery = castedDF
  .writeStream
  .outputMode("update")
  .format("console")
  .option("truncate", "false")
  .start()

queryPrice.awaitTermination()
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "crypto_topic")
  .load()
This part of the code was actually:
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP_SERVERS)
  .option("subscribe", KAFKA_TOPIC)
  .load()
where KAFKA_BOOTSTRAP_SERVERS and KAFKA_TOPIC are read from a config file while packaging the jar locally.
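As a sketch of what that lookup could look like at runtime rather than at package time (the file name consumer.properties and the key names here are illustrative assumptions, not the actual project files):

```scala
import java.util.Properties

// Hypothetical runtime config loader: reads a properties file from the
// classpath and falls back to a default when a key (or the file) is missing.
object AppConfig {
  private val props = new Properties()
  private val in = getClass.getResourceAsStream("/consumer.properties")
  if (in != null) props.load(in)

  def get(key: String, default: String): String =
    Option(props.getProperty(key)).getOrElse(default)

  val KAFKA_BOOTSTRAP_SERVERS: String = get("kafka.bootstrap.servers", "kafka:9092")
  val KAFKA_TOPIC: String = get("kafka.topic", "crypto_topic")
}
```

Because the file is resolved from the classpath when the job starts, the same jar can ship different values by placing a different properties file next to it, instead of baking the values in during sbt packaging.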
The best way for me to debug this was to make the logs more verbose.
Locally, the value of KAFKA_BOOTSTRAP_SERVERS was localhost:9092, and in the Docker container the config file there had it changed to kafka:9092. That change was never picked up, however, because the JAR had already been packaged. Changing the value to kafka:9092 while packaging locally fixed it.
I would still appreciate any help on having a JAR pick up configuration dynamically, though. I don't want to package via SBT on the Docker container.
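One common way to get that behaviour (sketched here as a suggestion; the variable names are assumptions, not from the original project) is to resolve the values from environment variables at runtime, with the packaged value only as a fallback:

```scala
// Illustrative: read Kafka settings from the environment at runtime,
// falling back to defaults compiled into the jar. This lets each
// deployment override the values without repackaging.
val kafkaBootstrapServers: String =
  sys.env.getOrElse("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")
val kafkaTopic: String =
  sys.env.getOrElse("KAFKA_TOPIC", "crypto_topic")
```

With this, the spark-consumer-worker service could set KAFKA_BOOTSTRAP_SERVERS=kafka:9092 under its environment: section in docker-compose.yml, while local runs fall back to localhost:9092 with the same jar.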