Spark: Writing RDD Results to File System is Slow
I'm developing a Spark application with Scala. My application contains only one operation that requires a shuffle (namely cogroup), and it runs flawlessly in a reasonable time. The problem I'm facing is writing the results back to the file system; for some reason it takes longer than running the actual program. At first I tried writing the results without repartitioning or coalescing, and I realized the number of generated files was huge, so I thought that was the problem. I then tried repartitioning (and coalescing) before writing, but the application took a long time performing these tasks. I know repartitioning (and coalescing) is costly, but is what I'm doing right? If it's not, could you please hint at what the correct approach is?
Notes:
PS: I tried using DataFrames; same problem.
Here is a sample of my code, just in case:
val sc = spark.sparkContext

// loading the samples
val samplesRDD = sc
  .textFile(s3InputPath)
  .filter(_.split(",").length > 7)
  .map(parseLine)
  .filter(_._1.nonEmpty) // skips any un-parsable lines

// pick random samples
val samples1Ids = samplesRDD
  .map(_._2._1) // map to id
  .distinct
  .takeSample(withReplacement = false, 100, 0)

// broadcast it to the cluster's nodes
val samples1IdsBC = sc.broadcast(samples1Ids)

val samples1RDD = samplesRDD
  .filter(sample => samples1IdsBC.value.contains(sample._2._1))
val samples2RDD = samplesRDD
  .filter(sample => !samples1IdsBC.value.contains(sample._2._1))

// compute
samples1RDD
  .cogroup(samples2RDD)
  .flatMapValues { case (left, right) =>
    left.map(sample1 =>
      (sample1._1, right.filter(sample2 => isInRange(sample1._2, sample2._2)).map(_._1)))
  }
  .map { case (timestamp, (sample1Id, sample2Ids)) =>
    s"$timestamp,$sample1Id,${sample2Ids.mkString(";")}"
  }
  .repartition(10)
  .saveAsTextFile(s3OutputPath)
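For the repartition-before-write step specifically, coalesce is usually the cheaper choice when you are only reducing the partition (and thus file) count: it merges existing partitions instead of shuffling every record. A minimal sketch of the difference, using a hypothetical local session purely for illustration (in the code above, the cluster's `spark` session already exists):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical local session, just to make the sketch self-contained.
val spark = SparkSession.builder()
  .master("local[2]")
  .appName("coalesce-vs-repartition")
  .getOrCreate()
val sc = spark.sparkContext

// 100 small input partitions, standing in for the "huge number of files" case
val rdd = sc.parallelize(1 to 1000, numSlices = 100)

// repartition(10) performs a full shuffle: every record moves across the network
val shuffled = rdd.repartition(10)

// coalesce(10) only merges co-located partitions, with no full shuffle,
// so it is usually much cheaper when merely reducing the output file count
val merged = rdd.coalesce(10)
```

Both end up with 10 output partitions, but only `repartition` pays for a full shuffle; note, though, that if the write itself (the S3 commit) is the bottleneck, neither will fix that.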
UPDATE
Here is the same code using DataFrames:
// imports assumed by this snippet
import scala.collection.mutable
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{collect_list, struct}
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema
import org.apache.spark.sql.types.FloatType
import spark.implicits._

// loading the samples
val samplesDF = spark
  .read
  .csv(inputPath)
  .drop("_c1", "_c5", "_c6", "_c7", "_c8")
  .toDF("id", "timestamp", "x", "y")
  .withColumn("x", ($"x" / 100.0f).cast(FloatType))
  .withColumn("y", ($"y" / 100.0f).cast(FloatType))

// pick random ids as samples 1
val samples1Ids = samplesDF
  .select($"id") // map to the id
  .distinct
  .rdd
  .takeSample(withReplacement = false, 1000)
  .map(r => r.getAs[String]("id"))

// broadcast it to the executors
val samples1IdsBC = sc.broadcast(samples1Ids)

// get samples 1 and 2
val samples1DF = samplesDF
  .where($"id" isin (samples1IdsBC.value: _*))
val samples2DF = samplesDF
  .where(!($"id" isin (samples1IdsBC.value: _*)))

samples2DF
  .withColumn("combined", struct("id", "x", "y")) // columns were renamed to x/y above
  .groupBy("timestamp")
  .agg(collect_list("combined").as("combined_list"))
  .join(samples1DF, Seq("timestamp"), "rightouter")
  .map {
    case Row(timestamp: String, samples: mutable.WrappedArray[GenericRowWithSchema], sample1Id: String, sample1X: Float, sample1Y: Float) =>
      val sample2Info = samples.filter {
        case Row(_, sample2X: Float, sample2Y: Float) =>
          Misc.isInRange((sample2X, sample2Y), (sample1X, sample1Y), 20)
        case _ => false
      }.map {
        case Row(sample2Id: String, sample2X: Float, sample2Y: Float) =>
          s"$sample2Id:$sample2X:$sample2Y"
        case _ => ""
      }.mkString(";")
      (timestamp, sample1Id, sample1X, sample1Y, sample2Info)
    case Row(timestamp: String, _, sample1Id: String, sample1X: Float, sample1Y: Float) => // no overlapping samples
      (timestamp, sample1Id, sample1X, sample1Y, "")
    case _ =>
      ("error", "", 0.0f, 0.0f, "")
  }
  .where($"_1" notEqual "error")
  // .show(1000, truncate = false)
  .write
  .csv(outputPath)
The problem here is that Spark commits tasks and jobs by renaming files, and rename on S3 is really, really slow. The more data you write, the longer the end of the job takes. That is what you are seeing.
Fix: switch to the S3A committers, which don't do any rename.
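As a sketch, enabling the S3A "directory" committer might look like the following session setup. The option names come from the Hadoop S3A committer documentation; this assumes your Spark build includes the hadoop-cloud bindings (PathOutputCommitProtocol), so treat it as a starting point rather than a drop-in fix:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: wire up the S3A "directory" committer at session-build time.
// Requires the spark-hadoop-cloud / S3A committer classes on the classpath.
val spark = SparkSession.builder()
  .appName("s3a-committer-example")
  .config("spark.hadoop.fs.s3a.committer.name", "directory")
  .config("spark.sql.sources.commitProtocolClass",
    "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
  .config("spark.sql.parquet.output.committer.class",
    "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
  .getOrCreate()
```

With a committer that uploads directly (via multipart uploads completed at commit time), the slow copy-based rename at job end goes away.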
Some tuning options to massively increase the number of threads for IO and commits, and the connection pool size:

fs.s3a.threads.max — from 10 to something bigger
fs.s3a.committer.threads — number of files committed by a POST in parallel; default is 8
fs.s3a.connection.maximum — try (fs.s3a.committer.threads + fs.s3a.threads.max + 10)
These defaults are all fairly small, because many jobs work with multiple buckets and creating an s3a client gets really expensive if each one has a big thread pool... but if you have many thousands of files, it's probably worthwhile.
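Under the sizing rule above, the three options could be raised together at session-build time. The numbers below are purely illustrative (connection.maximum is sized as committer.threads + threads.max + 10); tune them for your own workload:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical tuning values, for illustration only.
// fs.s3a.connection.maximum = 64 (committer.threads) + 64 (threads.max) + 10
val spark = SparkSession.builder()
  .appName("s3a-tuning-example")
  .config("spark.hadoop.fs.s3a.threads.max", "64")
  .config("spark.hadoop.fs.s3a.committer.threads", "64")
  .config("spark.hadoop.fs.s3a.connection.maximum", "138")
  .getOrCreate()
```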