
Word Count using Spark Structured Streaming with Python

I'm very new to Spark. This example is taken from the Structured Streaming Programming Guide of Spark:

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode
from pyspark.sql.functions import split

spark = SparkSession \
            .builder \
            .appName("StructuredNetworkWordCount") \
            .getOrCreate()

# Create DataFrame representing the stream of input lines from connection to localhost:9999
lines = spark \
    .readStream \
    .format("socket") \
    .option("host", "localhost") \
    .option("port", 9999) \
    .load()

# Split the lines into words
words = lines.select(
    explode(
        split(lines.value, " ")
    ).alias("word")
)

# Generate running word count
wordCounts = words.groupBy("word").count()

# Start running the query that prints the running counts to the console
query = wordCounts \
    .writeStream \
    .outputMode("complete") \
    .format("console") \
    .start()

query.awaitTermination()

I need to modify this code so that it counts only the words that start with the letter "B" and have a count greater than 6. How can I do it?

The solution is to filter the aggregated DataFrame with Column expressions, combined with `&` (note: joining two condition strings with Python's `and` does not work, because `and` on strings simply returns one of the strings):

from pyspark.sql.functions import col

wordCounts = words.groupBy("word").count() \
    .where(col("word").startswith("B") & (col("count") > 6))

The Column method is `startswith` (lowercase "w") in PySpark, and each comparison must be parenthesized because `&` binds more tightly than `>`.
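A common mistake here is chaining two condition strings with Python's `and`, e.g. `'word.startsWith("B")' and 'count > 6'`. Since any non-empty string is truthy, `and` returns its second operand, so only `count > 6` would ever reach `where()` and the "starts with B" condition is silently dropped. A quick plain-Python demonstration of that pitfall:

```python
# `and` between two non-empty strings returns the second string,
# so the first "condition" is discarded before where() ever sees it.
cond = 'word.startsWith("B")' and 'count > 6'
print(cond)  # → count > 6
```

This is why the two conditions must be built as Column expressions and combined with `&`, which PySpark overloads to mean logical AND on columns.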
