Kafka “partition.assignment.strategy” in PySpark
I am trying to read data from Kafka and convert it into a DataFrame. My current software versions are Spark 2.4.7 (Hadoop 2.7 build) and Kafka 2.6.0, as the paths below show.
Kafka is up and running, and I have stored the following data, which I am trying to read:
~/development/kafka_home/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testtopic --from-beginning
{"transaction_id": "1", "transaction_card_type": "Visa", "transaction_amount": 181.76, "transaction_datetime": "2021-01-25 15:44:44"}
{"transaction_id": "2", "transaction_card_type": "MasterCard", "transaction_amount": 228.62, "transaction_datetime": "2021-01-25 15:44:45"}
{"transaction_id": "3", "transaction_card_type": "Visa", "transaction_amount": 483.48, "transaction_datetime": "2021-01-25 15:44:46"}
{"transaction_id": "4", "transaction_card_type": "MasterCard", "transaction_amount": 477.87, "transaction_datetime": "2021-01-25 15:44:47"}
{"transaction_id": "5", "transaction_card_type": "MasterCard", "transaction_amount": 304.52, "transaction_datetime": "2021-01-25 15:44:48"}
{"transaction_id": "1", "transaction_card_type": "MasterCard", "transaction_amount": 346.99, "transaction_datetime": "2021-01-25 16:38:44"}
{"transaction_id": "2", "transaction_card_type": "Maestro", "transaction_amount": 384.33, "transaction_datetime": "2021-01-25 16:38:45"}
{"transaction_id": "3", "transaction_card_type": "MasterCard", "transaction_amount": 394.95, "transaction_datetime": "2021-01-25 16:38:46"}
{"transaction_id": "4", "transaction_card_type": "Visa", "transaction_amount": 22.75, "transaction_datetime": "2021-01-25 16:38:47"}
{"transaction_id": "5", "transaction_card_type": "MasterCard", "transaction_amount": 492.01, "transaction_datetime": "2021-01-25 16:38:48"}
I execute the following code in PySpark:
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
KAFKA_TOPIC_NAME_CONS = "testtopic"
KAFKA_BOOTSTRAP_SERVERS_CONS = 'localhost:9092'
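# Note: repeated .config() calls with the same key do not accumulate; each call
# overwrites the previous value, so of the duplicated "spark.jars" and
# "...extraClassPath" settings below, only the last jar actually takes effect.
# ("spark.executor.extraLibrary" is not a Spark property; the closest real one,
# spark.executor.extraLibraryPath, is for native libraries, not jars.)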
spark = SparkSession \
    .builder \
    .appName("PySpark Structured Streaming with Kafka Demo") \
    .config("spark.jars", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/kafka-clients-1.1.0.jar") \
    .config("spark.jars", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-8-assembly_2.11-2.4.7.jar") \
    .config("spark.jars", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-sql-kafka-0-10_2.11-2.4.7.jar") \
    .config("spark.executor.extraClassPath", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/kafka-clients-1.1.0.jar") \
    .config("spark.executor.extraClassPath", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-8-assembly_2.11-2.4.7.jar") \
    .config("spark.executor.extraClassPath", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-sql-kafka-0-10_2.11-2.4.7.jar") \
    .config("spark.driver.extraClassPath", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/kafka-clients-1.1.0.jar") \
    .config("spark.driver.extraClassPath", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-8-assembly_2.11-2.4.7.jar") \
    .config("spark.driver.extraClassPath", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-sql-kafka-0-10_2.11-2.4.7.jar") \
    .config("spark.executor.extraLibrary", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/kafka-clients-1.1.0.jar") \
    .config("spark.executor.extraLibrary", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-8-assembly_2.11-2.4.7.jar") \
    .config("spark.executor.extraLibrary", "/home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-sql-kafka-0-10_2.11-2.4.7.jar") \
    .getOrCreate()
df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "testtopic").load()
ds = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
query = ds \
    .writeStream \
    .queryName("tableName") \
    .option("partition.assignment.strategy", "range") \
    .format("console") \
    .start()
The error I get is as follows:
21/01/25 18:53:41 WARN kafka010.KafkaOffsetReader: Error in attempt 1 getting Kafka offsets: org.apache.kafka.common.config.ConfigException: Missing required configuration "partition.assignment.strategy" which has no default value.
I did some research, and it seems the jar named "kafka-clients-1.1.0.jar" is the problem, but I have already tried both the 2.6.0 and 1.1.0 versions with the same result.
**EDIT:**
I added the following to "spark-defaults":
spark.jars /home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-10_2.12-2.4.7.jar
spark.executor.extraClassPath /home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-10_2.12-2.4.7.jar
spark.driver.extraClassPath /home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-10_2.12-2.4.7.jar
spark.executor.extraLibrary /home/bupry_dev/development/spark_home/spark-2.4.7-bin-hadoop2.7/jars/spark-streaming-kafka-0-10_2.12-2.4.7.jar
and created my session as follows:
spark = SparkSession \
    .builder \
    .appName("PySpark Structured Streaming with Kafka Demo") \
    .getOrCreate()
I still receive the following error:
java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.kafka010.KafkaSourceProvider could not be instantiated
for this line of code:
df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "testtopic").load()
Answer:

As described in the Spark docs, you only need to include the following dependency:
groupId = org.apache.spark
artifactId = spark-sql-kafka-0-10_2.11
version = 2.4.7 <-- replace this by your appropriate Spark version
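For example, here is a minimal sketch of declaring that dependency from PySpark itself (assuming the Scala 2.11 build of Spark 2.4.7 that the paths above point to), letting Spark resolve kafka-clients transitively instead of wiring up individual local jars:

from pyspark.sql import SparkSession

# "spark.jars.packages" takes Maven coordinates; Spark downloads the connector
# and its transitive dependencies (including a matching kafka-clients).
spark = SparkSession \
    .builder \
    .appName("PySpark Structured Streaming with Kafka Demo") \
    .config("spark.jars.packages",
            "org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.7") \
    .getOrCreate()

The same coordinates can also be passed on the command line via spark-submit --packages.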
Spark explicitly warns against adding kafka-clients*.jar yourself, because the connector already pulls it in as a transitive dependency, and adding multiple jars for the same library makes debugging harder:
Do not manually add dependencies on org.apache.kafka artifacts (e.g. kafka-clients). The spark-sql-kafka-0-10 artifact has the appropriate transitive dependencies already, and different versions may be incompatible in hard to diagnose ways.
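Putting it all together, here is a minimal end-to-end sketch. The schema is hand-written from the sample messages shown above, so the field names and types are an assumption; the datetime is kept as a string because the sample format ("yyyy-MM-dd HH:mm:ss") is not Spark's default JSON timestamp format:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession \
    .builder \
    .appName("PySpark Structured Streaming with Kafka Demo") \
    .config("spark.jars.packages",
            "org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.7") \
    .getOrCreate()

# Assumed schema, derived by hand from the sample records above.
schema = StructType([
    StructField("transaction_id", StringType()),
    StructField("transaction_card_type", StringType()),
    StructField("transaction_amount", DoubleType()),
    StructField("transaction_datetime", StringType()),
])

df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "testtopic") \
    .load()

# Kafka delivers the payload as bytes: cast to string, parse the JSON,
# and flatten the parsed struct into top-level columns.
transactions = df \
    .select(from_json(col("value").cast("string"), schema).alias("tx")) \
    .select("tx.*")

query = transactions \
    .writeStream \
    .queryName("tableName") \
    .format("console") \
    .start()

query.awaitTermination()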