Scala compilation error on nested pattern matching
I am using Scala 2.10.5 and Spark 1.6.0.
My code is:
def computeContribs(tuples: RDD[(Int, List[Int])], ranks: RDD[(Int, Double)]): RDD[(Int, Double)] = {
  val x = tuples.map{case (numpage, list) => (numpage, (list, 1.0/list.size))}
  val joined = x.join(ranks)
  val contribs = joined.flatMap{case (numPage, (pageList, size), rank) => pageList.map( y:Int => (y, size) )}
  contribs.reduceByKey(_+_)
}
The compilation error I get is:
[error] C:\Users\vital\Documents\spazi_lavoro\spark-examples\src\main\scala\pagerank\PageRankSpark.scala:11: constructor cannot be instantiated to expected type;
[error] found : (T1, T2, T3)
[error] required: (Int, ((List[Int], Double), Double))
[error] val contribs = joined.flatMap{case(numPage,(pageList,size), rank) => pageList.map( y:Int=>(y,size) )}
[error]
^
How can I make my code compile?
Your joined RDD has the type RDD[(Int, ((List[Int], Double), Double))], so when you apply flatMap to this RDD, the pattern in the case clause must have the same nested shape, i.e. (numPage, ((pageList, size), rank)), not (numPage, (pageList, size), rank). Also, you do not need to annotate the type (Int) inside the map function. You can change the contribs RDD to the following:
val contribs: RDD[(Int, Double)] =
  joined.flatMap{case (numPage, ((pageList, size), rank)) =>
    pageList.map(y => (y, size))}
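The same pattern-shape rule can be checked without Spark, since flatMap on a plain Scala collection destructures tuples the same way. Below is a minimal sketch with hypothetical sample data whose type mirrors the joined RDD's element type:

```scala
object PatternShapeDemo {
  // Hypothetical sample data mirroring the shape of the joined RDD:
  // (page number, ((outgoing links, contribution per link), rank))
  val joined: List[(Int, ((List[Int], Double), Double))] =
    List((1, ((List(2, 3), 0.5), 1.0)))

  // The nested parentheses in the pattern must mirror the tuple nesting exactly:
  // a flat (numPage, (pageList, size), rank) pattern would not compile here.
  val contribs: List[(Int, Double)] =
    joined.flatMap { case (numPage, ((pageList, size), rank)) =>
      pageList.map(y => (y, size))
    }

  def main(args: Array[String]): Unit =
    println(contribs) // List((2,0.5), (3,0.5))
}
```

If the flat pattern were used instead, the compiler would report the same "constructor cannot be instantiated to expected type" error as above, because a three-element tuple pattern cannot match a two-element tuple type.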