
How to generate large word count file in Spark?

I want to generate a 10-million-line word count file for a performance test (each line has the same sentence), but I have no idea how to code it.

Can you give me some example code that saves the file directly to HDFS?

You can try something like this.

Generate one column with values from 1 to 100,000 and another with values from 1 to 100, then explode both of them with explode(column). You can't generate a single column with 10 million values because the Kryo buffer will throw an error.

I don't know whether this is the best-performing way to do it, but it's the fastest approach I can think of right now.

// Assumes spark-shell or an existing SparkSession with spark.implicits._ in scope
// (needed for .toDF and the $"..." column syntax)
import org.apache.spark.sql.functions.{explode, lit, udf}

// UDF that returns the sequence 1..s; used only to multiply rows via explode
val generateList = udf((s: Int) => (1 to s).toArray)

val someDF = Seq(
  "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
).toDF("sentence")

// Attach a 100,000-element array and a 100-element array to the single row
val someDfWithMilColumn = someDF
  .withColumn("genColumn1", generateList(lit(100000)))
  .withColumn("genColumn2", generateList(lit(100)))

// First explode: 1 row -> 100,000 rows, then drop the helper columns
val someDfWithMilColumn100k = someDfWithMilColumn
  .withColumn("expl_val", explode($"genColumn1")).drop("expl_val", "genColumn1")

// Second explode: 100,000 rows -> 10,000,000 rows
val someDfWithMilColumn10mil = someDfWithMilColumn100k
  .withColumn("expl_val2", explode($"genColumn2")).drop("genColumn2", "expl_val2")

someDfWithMilColumn10mil.write.parquet(path) // path: your HDFS output directory
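
As a quick sanity check before running the performance test, you can verify the row count of the generated DataFrame; a minimal sketch, assuming the corrected column names above:

// Expect 100,000 * 100 = 10,000,000 rows of the same sentence
assert(someDfWithMilColumn10mil.count() == 10000000L)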

You can do it by cross-joining the two DataFrames as shown below; the code explanation is inline.

import org.apache.spark.sql.SaveMode

object GenerateTenMils {

  def main(args: Array[String]): Unit = {
    val spark = Constant.getSparkSess // project-specific helper that builds the SparkSession
    spark.conf.set("spark.sql.crossJoin.enabled","true") // Enable cross joins (needed for the join below)
    import spark.implicits._

    //Create a DF with your sentence
    val df = List("each line has the same sentence").toDF

    //Create another Dataset with 10000000 records
    spark.range(10000000)
      .join(df)    // Cross Join the dataframes
      .coalesce(1)  // Output to a single file
      .drop("id")       // Drop the extra column
      .write
      .mode(SaveMode.Overwrite)
      .text("src/main/resources/tenMils") // Write as text file
  }

}
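
If you want the output to go straight to HDFS rather than the local src/main/resources folder, the same pipeline inside main can target an HDFS URI; a minimal sketch with a hypothetical path, and without coalesce(1) so the write is not funnelled through a single task (it produces multiple part files):

spark.range(10000000L)
  .join(df)                          // Cross join: 10,000,000 x 1 rows
  .drop("id")                        // Keep only the sentence column
  .write
  .mode(SaveMode.Overwrite)
  .text("hdfs:///user/perf/tenMils") // Hypothetical HDFS path; adjust to your cluster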

You could follow this approach.

Use tail recursion to generate the list of sentences and the intermediate DataFrames, and union them to build the big DataFrame.

import scala.annotation.tailrec
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession
  .builder()
  .appName("TenMillionsRows")
  .master("local[*]")
  .config("spark.sql.shuffle.partitions","4") // Change to a more reasonable default number of partitions for our data
  .config("spark.app.id","TenMillionsRows")   // To silence Metrics warning
  .getOrCreate()

val sc = spark.sparkContext

import spark.implicits._

val sentence = "hope for the best but prepare for the worst"

/**
  * Returns a list containing num copies of sentence
  * @param sentence
  * @param num
  * @return
  */
def getList(sentence: String, num: Int): List[String] = {
  @tailrec
  def loop(st: String, n: Int, acc: List[String]): List[String] = {
    n match {
      case 0 => acc
      case _ => loop(st, n - 1, st :: acc)
    }
  }
  loop(sentence, num, List())
}

/**
  * Returns a DataFrame that is the union of num DataFrames built from lst
  * @param lst
  * @param num
  * @return
  */
def getDataFrame(lst: List[String], num: Int): DataFrame = {
  @tailrec
  def loop(ls: List[String], n: Int, acc: DataFrame): DataFrame = {
    n match {
      case 0 => acc
      case _ => loop(ls, n - 1, acc.union(sc.parallelize(ls).toDF("sentence")))
    }
  }
  // The seed DataFrame already contains one sentence, hence the final count of 10,000,001
  loop(lst, num, sc.parallelize(List(sentence)).toDF("sentence"))
}

val lSentence = getList(sentence, 100000)
val dfs = getDataFrame(lSentence, 100)

println(dfs.count())
// output: 10000001
dfs.write.orc("path_to_hdfs") // write the DataFrame as an ORC file
// you can also save it as parquet, text, json, ... with dataframe.write
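
If you want a plain text file (the word count input the question describes) rather than ORC, the same DataFrame can be written as text because it has a single string column; a minimal sketch with a hypothetical HDFS path:

dfs.write.mode("overwrite").text("hdfs:///tmp/ten_million_lines") // hypothetical path; adjust to your cluster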

Hope this helps.
