python+pyspark: error on inner join with multiple column comparison in pyspark
Pyspark awaitResult error in dataframe inner join
Running standalone spark-2.3.0-bin-hadoop2.7 in a Docker container.
The datasets are small.
df1 schema: DataFrame[id: bigint, name: string]; df2 schema: DataFrame[id: decimal(12,0), age: int]
Inner join:
df3 = df1.join(df2, df1.id == df2.id, 'inner')
df3 schema: DataFrame[id: bigint, name: string, age: int]
When df3.show(5) is executed, the following error occurs:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/apache/spark-2.3.0-bin-hadoop2.7/python/pyspark/sql/dataframe.py", line 466, in collect
    port = self._jdf.collectToPython()
  File "/usr/local/lib/python3.6/dist-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/apache/spark-2.3.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/local/lib/python3.6/dist-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o43.collectToPython.
: org.apache.spark.SparkException: Exception thrown in awaitResult:
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
Tried setting the broadcast timeout to -1 per this suggestion, but got the same error:
conf = SparkConf().set("spark.sql.broadcastTimeout", "-1")
I was using a JRE version incompatible with Spark 2.3.
The error was resolved after updating the JRE in the Docker image to openjdk-8-jre.
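A minimal sketch of that fix in Dockerfile form (the base image and install commands are assumptions; the answer only states that openjdk-8-jre was installed):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre && \
    rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```

Spark 2.3 is built against Java 8, so pinning the JRE to 8 in the image avoids the runtime mismatch.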