Sink Kafka Stream to MongoDB using PySpark Structured Streaming

My Spark session:

spark = SparkSession\
    .builder\
    .appName("Demo")\
    .master("local[3]")\
    .config("spark.streaming.stopGracefullyonShutdown", "true")\
    .config('spark.jars.packages','org.mongodb.spark:mongo-spark-connector_2.12:3.0.1')\
    .getOrCreate()

Mongo URI:

input_uri_weld = 'mongodb://127.0.0.1:27017/db.coll1'
output_uri_weld = 'mongodb://127.0.0.1:27017/db.coll1'

Function for writing stream batches to Mongo:

def save_to_mongodb_collection(current_df, epoc_id, mongodb_collection_name):
    # Writes one micro-batch to MongoDB; meant to be passed to foreachBatch().
    current_df.write\
      .format("com.mongodb.spark.sql.DefaultSource") \
      .mode("append") \
      .option("spark.mongodb.output.uri", output_uri_weld) \
      .save()

Kafka Stream:

kafka_df = spark.readStream\
    .format("kafka")\
    .option("kafka.bootstrap.servers", kafka_broker)\
    .option("subscribe", kafka_topic)\
    .option("startingOffsets", "earliest")\
    .load()
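
The later snippets write a DataFrame called df_parsed, which isn't shown in the post. A minimal sketch of how it might be derived from kafka_df, assuming the Kafka values are JSON and using a hypothetical two-field schema (adjust to the real topic):

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType

# Hypothetical schema for the Kafka message payload.
value_schema = StructType([
    StructField("machine_id", StringType()),
    StructField("proc_type", StringType()),
])

# Kafka delivers the payload as bytes; cast it to string and parse the JSON.
df_parsed = kafka_df\
    .select(from_json(col("value").cast("string"), value_schema).alias("value"))\
    .select("value.*")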

Write to Mongo:

mongo_writer = df_parsed.write\
        .format('com.mongodb.spark.sql.DefaultSource')\
        .mode('append')\
        .option("spark.mongodb.output.uri", output_uri_weld)\
        .save()
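
For reference, a streaming DataFrame is normally written with writeStream rather than the batch write API used above. A hedged sketch (not from the original post) of wiring the save_to_mongodb_collection helper into the stream via foreachBatch, with a placeholder checkpoint path and collection name:

# foreachBatch passes (batch_df, epoch_id), so the extra collection-name
# argument of the helper is supplied through a lambda.
query = df_parsed.writeStream\
    .foreachBatch(lambda batch_df, epoch_id:
                  save_to_mongodb_collection(batch_df, epoch_id, "coll1"))\
    .option("checkpointLocation", "/tmp/checkpoints/mongo-sink")\
    .start()

query.awaitTermination()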

& my spark.conf file: &我的 spark.conf 文件:

spark.jars.packages                org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,org.apache.spark:spark-avro_2.12:3.0.1,com.datastax.spark:spark-cassandra-connector_2.12:3.0.0

Error:

java.lang.ClassNotFoundException: Failed to find data source: com.mongodb.spark.sql.DefaultSource. Please find packages at http://spark.apache.org/third-party-projects.html  
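
The exception means the com.mongodb.spark.sql.DefaultSource class was not on the classpath when the query ran. One possible cause (not confirmed in the post) is that the job picks up spark.jars.packages from the spark.conf shown above, which does not list the Mongo connector; a hedged example of the amended line, reusing the coordinate already given in the SparkSession builder:

spark.jars.packages                org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,org.apache.spark:spark-avro_2.12:3.0.1,com.datastax.spark:spark-cassandra-connector_2.12:3.0.0,org.mongodb.spark:mongo-spark-connector_2.12:3.0.1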

I found a solution. Since I couldn't find the right Mongo driver for Structured Streaming, I worked on another approach. Now I use a direct connection to MongoDB with pymongo, and use foreach(...) instead of foreachBatch(...). My code looks like this in the testSpark.py file:

....
import pymongo
from pymongo import MongoClient

local_url = "mongodb://localhost:27017"


def write_machine_df_mongo(target_df):
    # Called by foreach() once per row of the streaming DataFrame.
    # Note: this opens a new MongoClient for every record.
    cluster = MongoClient(local_url)
    db = cluster["test_db"]
    collection = db.test1

    post = {
        "machine_id": target_df.machine_id,
        "proc_type": target_df.proc_type,
        "sensor1_id": target_df.sensor1_id,
        "sensor2_id": target_df.sensor2_id,
        "time": target_df.time,
        "sensor1_val": target_df.sensor1_val,
        "sensor2_val": target_df.sensor2_val,
    }

    collection.insert_one(post)

machine_df.writeStream\
    .outputMode("append")\
    .foreach(write_machine_df_mongo)\
    .start()
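
Opening a new MongoClient for every row can get expensive at higher throughput. As a possible refinement (not part of the original answer), foreach() also accepts an object with open/process/close methods, which lets one client be reused per partition. A minimal sketch, assuming the same machine_df and a local MongoDB:

from pymongo import MongoClient

class MongoRowWriter:
    """Per-partition writer for foreach(); reuses one MongoClient per partition."""

    def open(self, partition_id, epoch_id):
        # Called once per partition and epoch; create the connection here.
        self.client = MongoClient("mongodb://localhost:27017")
        self.collection = self.client["test_db"].test1
        return True  # proceed with processing this partition

    def process(self, row):
        # Called once per row; convert the Row to a dict and insert it.
        self.collection.insert_one(row.asDict())

    def close(self, error):
        # Called when the partition finishes, even on error.
        self.client.close()

query = machine_df.writeStream\
    .outputMode("append")\
    .foreach(MongoRowWriter())\
    .start()

query.awaitTermination()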
