Is it possible to use the library Spark-NLP with Spark Structured Streaming?

I want to perform sentiment analysis on a stream of tweets I get from a Kafka cluster that, in turn, gets them from the Twitter API v2.

When I try to apply the pre-trained sentiment analysis pipeline, I get an error message saying: Exception: target must be either a spark DataFrame, a list of strings or a string, and I'd like to know if there is a way to work around this.

I've checked the documentation and couldn't find anything about streaming data.

This is the code I'm using:

import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, col, from_json, from_unixtime, unix_timestamp
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DoubleType, TimestampType, MapType, ArrayType
from sparknlp.pretrained import PretrainedPipeline

spark = SparkSession.builder.appName('twitter_app')\
    .master("local[*]")\
    .config('spark.jars.packages', 
            'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,com.johnsnowlabs.nlp:spark-nlp-spark32_2.12:3.4.2')\
    .config('spark.streaming.stopGracefullyOnShutdown', 'true')\
    .config("spark.driver.memory","8G")\
    .config("spark.driver.maxResultSize", "0") \
    .config("spark.kryoserializer.buffer.max", "2000M")\
    .getOrCreate()

schema = StructType() \
  .add("data", StructType() \
    .add("created_at", TimestampType())
    .add("id", StringType()) \
    .add("text", StringType())) \
  .add("matching_rules", ArrayType(StructType() \
                                   .add('id', StringType()) \
                                   .add('tag', StringType())))

kafka_df = spark.readStream \
          .format("kafka") \
          .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094") \
          .option("subscribe", "Zelensky,Putin,Biden,NATO,NoFlyZone") \
          .option("startingOffsets", "latest") \
          .load() \
          .select((from_json(col("value").cast("string"), schema)).alias('text'), 
                   col('topic'), col('key').cast('string'))

nlp_pipeline = PretrainedPipeline("analyze_sentimentdl_use_twitter", lang='en')

df = kafka_df.select('key',
                     col('text.data.created_at').alias('created_at'),
                     col('text.data.text').alias('text'), 
                     'topic') \
             .withColumn('sentiment', nlp_pipeline.annotate(col('text.data.text')))

And then I get the error I mentioned before:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [11], in <cell line: 1>()
      1 df = kafka_df.select('key',
      2                      col('text.data.created_at').alias('created_at'),
      3                      col('text.data.text').alias('text'), 
      4                      'topic') \
----> 5              .withColumn('sentiment', nlp_pipeline.annotate(col('text.data.text')))

File ~/.local/share/virtualenvs/spark_home_lab-iuwyZNhT/lib/python3.9/site-packages/sparknlp/pretrained.py:183, in PretrainedPipeline.annotate(self, target, column)
    181     return pipeline.annotate(target)
    182 else:
--> 183     raise Exception("target must be either a spark DataFrame, a list of strings or a string")

Exception: target must be either a spark DataFrame, a list of strings or a string

Or is it simply not possible to use Spark-NLP on streaming data?

You could try nlp_pipeline.transform(text_df) in the following way:

text_df = kafka_df.select('key',
                          col('text.data.created_at').alias('created_at'),
                          col('text.data.text').alias('text'), 
                          'topic')
df = (nlp_pipeline
      .transform(text_df)
      .select('key', 'created_at', 'text', 'topic', 'sentiment.result')
      )

df will be the structured stream you are looking for.
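If you want to actually run the stream and inspect the predictions, start a streaming query on df. A minimal sketch, assuming a console sink for debugging (the checkpoint path is just a placeholder):

query = df.writeStream \
    .format("console") \
    .option("truncate", "false") \
    .option("checkpointLocation", "/tmp/twitter_sentiment_ckpt") \
    .outputMode("append") \
    .start()

query.awaitTermination()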

Because Spark-NLP is built on Spark ML, you can treat the structured stream kafka_df like any other DataFrame. nlp_pipeline wraps a fitted pyspark.ml.PipelineModel, and the supported way to run it for prediction is to call .transform(df). The annotate method, by contrast, only accepts a Spark DataFrame, a list of strings, or a single string, so passing it a Column inside withColumn raises the exception you saw.
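annotate is meant for ad-hoc use on in-memory data. A minimal sketch of what it does accept (the sample sentence is made up; the 'sentiment' key matches this pipeline's output column):

# annotate works on a plain string (or a list of strings), not on a Column
result = nlp_pipeline.annotate("What a wonderful day!")
print(result['sentiment'])  # e.g. ['positive']; the exact keys depend on the pipeline's stages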

Here is an example of how the Spark NLP creators built the pipeline you used: https://nlp.johnsnowlabs.com/2021/01/18/sentimentdl_use_twitter_en.html
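If you want to see what the downloaded pipeline contains, you can also list its fitted stages; a minimal sketch, using the model attribute on PretrainedPipeline that holds the underlying pyspark.ml.PipelineModel:

# Print the class name of each fitted stage in the pretrained pipeline
for stage in nlp_pipeline.model.stages:
    print(stage.__class__.__name__)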
