
How to generate large word count file in Spark?

I want to generate a word count file with 10 million lines (every line containing the same sentence) for performance testing, but I don't know how to write the code.

Could you give me some sample code that saves the file directly in HDFS?

You could try something like this.

Generate one column with values from 1 to 100k and another column with values from 1 to 100, then explode both of them with explode(column). You can't generate a single column with 10 million values, because the kryo buffer would throw an error.

I don't know whether this is the best-performing way, but it's the fastest way I can think of right now.

// Imports needed when this is not run in the spark-shell (there they are already in scope)
import org.apache.spark.sql.functions.{explode, lit, udf}
import spark.implicits._

val generateList = udf((s: Int) => {
    val buf = scala.collection.mutable.ArrayBuffer.empty[Int]
    for(i <- 1 to s) {
        buf += i
    }
    buf
})

val someDF = Seq(
  ("Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
).toDF("sentence")

val someDfWithMilColumn = someDF.withColumn("genColumn1", generateList(lit(100000)))
   .withColumn("genColumn2", generateList(lit(100)))
val someDfWithMilColumn100k  = someDfWithMilColumn
   .withColumn("expl_val", explode($"genColumn1")).drop("expl_val", "genColumn1")
val someDfWithMilColumn10mil = someDfWithMilColumn100k
   .withColumn("expl_val2", explode($"genColumn2")).drop("genColumn2", "expl_val2")

someDfWithMilColumn10mil.write.parquet(path) // path: target HDFS directory for the output
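
If you want to sanity-check the result, a minimal read-back (assuming the same spark session and path are still in scope) could be:

// Read the output back and confirm the row count: 100000 * 100 = 10000000
val written = spark.read.parquet(path)
println(written.count()) // expected: 10000000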

You can do this by joining the 2 DFs below; the code explanation can be found inline.

import org.apache.spark.sql.SaveMode

object GenerateTenMils {

  def main(args: Array[String]): Unit = {
    val spark = Constant.getSparkSess // Helper from the author's project that returns the SparkSession
    spark.conf.set("spark.sql.crossJoin.enabled","true") // Enable cross join
    import spark.implicits._

    //Create a DF with your sentence
    val df = List("each line has the same sentence").toDF

    //Create another Dataset with 10000000 records
    spark.range(10000000)
      .join(df)    // Cross Join the dataframes
      .coalesce(1)  // Output to a single file
      .drop("id")       // Drop the extra column
      .write
      .mode(SaveMode.Overwrite)
      .text("src/main/resources/tenMils") // Write as text file
  }

}
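
The question asks to save the file directly in HDFS; with this approach that only means pointing the writer at an HDFS URI instead of the local path. A minimal sketch, assuming it runs in the same main method so spark, df and the SaveMode import are in scope (the namenode host/port and target directory are placeholders):

    // Same cross join as above, but writing straight to HDFS
    spark.range(10000000)
      .join(df)                    // Cross join with the single-sentence DF
      .coalesce(1)                 // Output to a single file
      .drop("id")                  // Drop the extra column
      .write
      .mode(SaveMode.Overwrite)
      .text("hdfs://namenode:8020/user/perf/tenMils") // hypothetical HDFS path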

You can follow this approach.

Use tail recursion to generate the list of sentences and the dataframes, then union them to build the big Dataframe.

  import scala.annotation.tailrec
  import org.apache.spark.sql.{DataFrame, SparkSession}

  val spark = SparkSession
    .builder()
    .appName("TenMillionsRows")
    .master("local[*]")
    .config("spark.sql.shuffle.partitions","4") //Change to a more reasonable default number of partitions for our data
    .config("spark.app.id","TenMillionsRows") // To silence Metrics warning
    .getOrCreate()

  val sc = spark.sparkContext

    import spark.implicits._

    // Defined up front so it is in scope for the helpers below
    val sentence = "hope for the best but prepare for the worst"

    /**
      * Returns a List containing num copies of sentence
      * @param sentence
      * @param num
      * @return
      */
    def getList(sentence: String, num: Int) : List[String] = {
      @tailrec
      def loop(st: String, n: Int, acc: List[String]): List[String] = {
        n match {
          case 0 => acc
          case _ => loop(st, n - 1, st :: acc)
        }
      }
      loop(sentence, num, List())
    }

    /**
      * Returns a Dataframe that is the union of num dataframes built from lst
      * @param lst
      * @param num
      * @return
      */
    def getDataFrame(lst: List[String], num: Int): DataFrame = {
      @tailrec
      def loop(ls: List[String], n: Int, acc: DataFrame): DataFrame = {
        n match {
          case 0 => acc
          case _ => loop(ls, n - 1, acc.union(sc.parallelize(ls).toDF("sentence")))
        }
      }
      loop(lst, num, sc.parallelize(List(sentence)).toDF("sentence"))
    }

      val lSentence = getList(sentence, 100000)
      val dfs = getDataFrame(lSentence, 100)

      println(dfs.count())
      // output: 10000001
      dfs.write.orc("path_to_hdfs") // write the dataframe to an ORC file
      // you can also save it as parquet, text, json, etc. with dataframe.write
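
Since the goal is a plain-text file of sentences, the same dfs Dataframe can also be written as text instead of ORC; a minimal variant (the HDFS path below is a placeholder) could be:

      // Write the same sentences as plain text instead of ORC
      dfs.coalesce(1)    // optional: a single output file
        .write
        .mode("overwrite")
        .text("hdfs:///tmp/ten_million_sentences") // placeholder HDFS path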

Hope this helps.
