
Apache Spark DataFrames join is failing using Scala

I have the two DataFrames below and a join operation between them, but the join fails without reporting any meaningful error.

//HospitalFacility: case class to fill in from the join result.
case class HospitalFacility(Name: String, Rating: Int, Cost: Int)

//I pass the pid as an input parameter.
//hc: HiveContext, created successfully.
//Provider_Facility and Facility_Master are my two Hive tables.
def fetchHospitalFacilityData(pid: String): String = {
   val filteredProviderSpecilaityDF = hc.sql("select FacilityId, Rating, Cost from Provider_Facility where ProviderId='" + pid + "'")
   println(filteredProviderSpecilaityDF)
   filteredProviderSpecilaityDF.foreach(println) //prints perfectly

   val allFacilityDF = hc.sql("select id, Name from Facility_Master")
   println(allFacilityDF)
   allFacilityDF.foreach(println) //prints perfectly

   //The line below throws the error.
   val resultDF = filteredProviderSpecilaityDF.join(allFacilityDF, filteredProviderSpecilaityDF("FacilityId") === allFacilityDF("id"), "right_outer")
   println(resultDF)

   //Joined columns are (FacilityId, Rating, Cost, id, Name), so
   //Name is at index 4, Rating at index 1 and Cost at index 2.
   val filteredFacilityList = resultDF.rdd.map { spec => HospitalFacility(spec.getString(4), spec.getInt(1), spec.getInt(2)) }.collect()
   filteredFacilityList.foreach(println) //execution never reaches this point

   //Serialize the facilities into the String this method returns.
   filteredFacilityList.mkString(";")
  }

The error thrown is listed below:

Exception in thread "broadcast-hash-join-0" java.lang.NoSuchMethodError: org.apache.spark.util.Utils$.tryOrIOException(Lscala/Function0;)V
    at org.apache.spark.sql.execution.joins.UnsafeHashedRelation.writeExternal(HashedRelation.scala:264)
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1458)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1429)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:203)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:102)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:85)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
    at org.apache.spark.sql.execution.joins.BroadcastHashOuterJoin$$anonfun$broadcastFuture$1$$anonfun$apply$1.apply(BroadcastHashOuterJoin.scala:94)
    at org.apache.spark.sql.execution.joins.BroadcastHashOuterJoin$$anonfun$broadcastFuture$1$$anonfun$apply$1.apply(BroadcastHashOuterJoin.scala:82)
    at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.joins.BroadcastHashOuterJoin$$anonfun$broadcastFuture$1.apply(BroadcastHashOuterJoin.scala:82)
    at org.apache.spark.sql.execution.joins.BroadcastHashOuterJoin$$anonfun$broadcastFuture$1.apply(BroadcastHashOuterJoin.scala:82)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Can anyone help me with this?

Perhaps allFacilityDF("id") === filteredProviderSpecilaityDF("FacilityId") returns a boolean expression rather than a Seq[String]. The usingColumns parameter is defined as follows: the names of the columns to join on; these columns must exist on both sides of the join.
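
A minimal sketch of that suggestion, assuming Spark 1.6+, where the join(right, usingColumns, joinType) overload exists; the withColumnRenamed step is illustrative, giving both sides a shared FacilityId column so a plain column name can be passed instead of a boolean expression:

//A rough sketch, not the poster's code: rename "id" so that both
//DataFrames share the column name "FacilityId".
val renamedFacilityDF = allFacilityDF.withColumnRenamed("id", "FacilityId")

//join(right, usingColumns: Seq[String], joinType: String) takes column
//names that must exist on both sides, not a boolean Column expression.
val resultDF = filteredProviderSpecilaityDF.join(renamedFacilityDF, Seq("FacilityId"), "right_outer")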
