Spark 1.6: java.lang.IllegalArgumentException: spark.sql.execution.id is already set

I am using Spark 1.6 and run into the above issue when I run the following code:

// Imports
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SaveMode
import scala.concurrent.ExecutionContext.Implicits.global
import java.util.Properties
import scala.concurrent.Future

// Set up spark on local with 2 threads
val conf = new SparkConf().setMaster("local[2]").setAppName("app")
val sc = new SparkContext(conf)
val sqlCtx = new HiveContext(sc)

// Create fake dataframe
import sqlCtx.implicits._
var df = sc.parallelize(1 to 50000).map { i => (i, i, i, i, i, i, i) }.toDF("a", "b", "c", "d", "e", "f", "g").repartition(2)
// Write it as a parquet file
df.write.parquet("/tmp/parquet1")
df = sqlCtx.read.parquet("/tmp/parquet1")

// JDBC connection
val url = s"jdbc:postgresql://localhost:5432/tempdb"
val prop = new Properties()
prop.setProperty("user", "admin")
prop.setProperty("password", "")

// 4 futures - at least one of them has been failing consistently
val x1 = Future { df.write.jdbc(url, "temp1", prop) }
val x2 = Future { df.write.jdbc(url, "temp2", prop) }
val x3 = Future { df.write.jdbc(url, "temp3", prop) }
val x4 = Future { df.write.jdbc(url, "temp4", prop) }

Here is the github gist: https://gist.github.com/karanveerm/27d852bf311e39f05491

The error I get is:

java.lang.IllegalArgumentException: spark.sql.execution.id is already set
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87) ~[org.apache.spark.spark-sql_2.11-1.6.0.jar:1.6.0]
        at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2125) ~[org.apache.spark.spark-sql_2.11-1.6.0.jar:1.6.0]
        at org.apache.spark.sql.DataFrame.foreachPartition(DataFrame.scala:1482) ~[org.apache.spark.spark-sql_2.11-1.6.0.jar:1.6.0]
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:247) ~[org.apache.spark.spark-sql_2.11-1.6.0.jar:1.6.0]
        at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:306) ~[org.apache.spark.spark-sql_2.11-1.6.0.jar:1.6.0]
        at writer.SQLWriter$.writeDf(Writer.scala:75) ~[temple.temple-1.0-sans-externalized.jar:na]
        at writer.Writer$.writeDf(Writer.scala:33) ~[temple.temple-1.0-sans-externalized.jar:na]
        at controllers.Api$$anonfun$downloadTable$1$$anonfun$apply$25.apply(Api.scala:460) ~[temple.temple-1.0-sans-externalized.jar:2.4.6]
        at controllers.Api$$anonfun$downloadTable$1$$anonfun$apply$25.apply(Api.scala:452) ~[temple.temple-1.0-sans-externalized.jar:2.4.6]
        at scala.util.Success$$anonfun$map$1.apply(Try.scala:237) ~[org.scala-lang.scala-library-2.11.7.jar:na]

Is this a Spark bug, or am I doing something wrong? Are there any workarounds?

After trying several things, I found that one of the threads created by the global ForkJoinPool gets its spark.sql.execution.id property set to a random value. I could not identify the process that actually did this, but I was able to work around it by using my own ExecutionContext.

import java.util.concurrent.Executors
import concurrent.ExecutionContext
val executorService = Executors.newFixedThreadPool(4)
implicit val ec = ExecutionContext.fromExecutorService(executorService)

I used code from http://danielwestheide.com/blog/2013/01/16/the-neophytes-guide-to-scala-part-9-promises-and-futures-in-practice.html. Maybe the ForkJoinPool clones thread attributes when creating new threads, and if this happens within the context of a SQL execution the new thread inherits the non-null value, whereas a FixedThreadPool creates its threads at instantiation.
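For completeness, here is a minimal sketch of how the four writes from the question could be submitted on top of that fixed-pool ExecutionContext (it reuses the df, url and prop defined above; the Await at the end is just one way to block until all writes finish):

import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Fixed pool instead of the global ForkJoinPool-backed context
val executorService = Executors.newFixedThreadPool(4)
implicit val ec = ExecutionContext.fromExecutorService(executorService)

// The same four JDBC writes as in the question, now scheduled on the fixed pool
val writes = Seq("temp1", "temp2", "temp3", "temp4").map { table =>
  Future { df.write.jdbc(url, table, prop) }
}

// Wait for all writes to complete, then release the pool's threads
Await.result(Future.sequence(writes), Duration.Inf)
executorService.shutdown()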

Please check SPARK-13747.

Consider using Spark 2.2.0 or a later version if it is applicable in your environment.

Test 1: Does it help if you run each df.write operation serially instead of in parallel?
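A minimal sketch of what that serial variant could look like (reusing df, url and prop from the question):

// Run the four writes one after another on the driver thread,
// so only one SQL execution is in flight at a time
Seq("temp1", "temp2", "temp3", "temp4").foreach { table =>
  df.write.jdbc(url, table, prop)
}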

Test 2: Does it help if you persist the dataframe, then run all the df.write operations in parallel, and unpersist only after all of them have completed?
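And a sketch of that second test, again with the question's df, url and prop; any implicit ExecutionContext could back the futures here (the global one is used only for brevity):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// Cache the dataframe so the parquet scan is not repeated for every write
df.persist()

// Kick off all four JDBC writes in parallel
val writes = Seq("temp1", "temp2", "temp3", "temp4").map { table =>
  Future { df.write.jdbc(url, table, prop) }
}

// Unpersist only after every write has completed
Await.result(Future.sequence(writes), Duration.Inf)
df.unpersist()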
