
PySpark: Write Spark DataFrame to Kafka Topic

I am trying to write a DataFrame to a Kafka topic, but I get an error when selecting the key and value. Any suggestion would be helpful.

Below is my code:

data = spark.sql('select * from job')

kafka = data.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")\
    .writeStream.outputMode(outputMode='Append').format('kafka')\
    .option("kafka.bootstrap.servers", "localhost:9092")\
    .option("topic", "Jim_Topic")\
    .option("checkpointLocation", "C:/Hadoop/Data/CheckPointLocation/")\
    .start()

kafka.awaitTermination()

Below is the error:

pyspark.sql.utils.AnalysisException: cannot resolve '`key`' given input columns: [job.JOB_ID, 
job.JOB_TITLE, job.MAX_SALARY, job.MIN_SALARY]; line 1 pos 5;
'Project [unresolvedalias(cast('key as string), None), unresolvedalias(cast('value as string), None)]
+- Project [JOB_ID#0, JOB_TITLE#1, MIN_SALARY#2, MAX_SALARY#3]
   +- SubqueryAlias `job`
      +- StreamingRelation DataSource(org.apache.spark.sql.SparkSession@1f3fc47a,csv,List(),Some(StructType(StructField(JOB_ID,StringType,true), StructField(JOB_TITLE,StringType,true), StructField(MIN_SALARY,StringType,true), StructField(MAX_SALARY,StringType,true))),List(),None,Map(sep -> ,, header -> false, path -> C:/Hadoop/Data/Job*.csv),None), FileSource[C:/Hadoop/Data/Job*.csv], [JOB_ID#0, JOB_TITLE#1, MIN_SALARY#2, MAX_SALARY#3]

Update: the error occurs because the Kafka sink expects the DataFrame to contain a value column (and optionally a key), and the job view only has the four JOB_* columns. I tried aliasing JOB_ID as the key and converting the whole row to JSON for the value, and it worked perfectly. Now I am able to send messages from the Spark stream to Kafka:

kafka = data.selectExpr("CAST(JOB_ID AS STRING) AS key", "to_json(struct(*)) AS value")
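
For completeness, here is a minimal sketch of the full working pipeline. The schema, CSV path, topic, and checkpoint location come from the question and the error output; the readStream call stands in for however the job view was registered, the appName is arbitrary, and running it still requires the spark-sql-kafka-0-10 package on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("JobToKafka").getOrCreate()

# Schema of the CSV source, as shown in the error's StreamingRelation
schema = StructType([
    StructField("JOB_ID", StringType(), True),
    StructField("JOB_TITLE", StringType(), True),
    StructField("MIN_SALARY", StringType(), True),
    StructField("MAX_SALARY", StringType(), True),
])

# Streaming CSV source (replaces the 'job' temp view from the question)
data = spark.readStream.schema(schema)\
    .option("header", "false")\
    .csv("C:/Hadoop/Data/Job*.csv")

# The Kafka sink requires a 'value' column (and optionally a 'key'),
# so alias JOB_ID as the key and serialize the whole row as JSON
kafka = data.selectExpr("CAST(JOB_ID AS STRING) AS key", "to_json(struct(*)) AS value")\
    .writeStream.outputMode('append').format('kafka')\
    .option("kafka.bootstrap.servers", "localhost:9092")\
    .option("topic", "Jim_Topic")\
    .option("checkpointLocation", "C:/Hadoop/Data/CheckPointLocation/")\
    .start()

kafka.awaitTermination()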

