
Scala compilation error on nested pattern matching

I'm using Scala 2.10.5 and Spark 1.6.0.

My code is:

  def computeContribs(tuples : RDD[(Int,List[Int])], ranks : RDD[(Int, Double)] ) : RDD[(Int, Double)] = {
    val x = tuples.map{case(numpage,list)=>(numpage,(list,1.0/list.size))}
    val joined = x.join(ranks)
    val contribs = joined.flatMap{case(numPage,(pageList,size), rank) => pageList.map( y:Int=>(y,size) )}
    contribs.reduceByKey(_+_)
  }

The compilation error I get is:

[error] C:\Users\vital\Documents\spazi_lavoro\spark-examples\src\main\scala\pagerank\PageRankSpark.scala:11: constructor cannot be instantiated to expected type;
[error]  found   : (T1, T2, T3)
[error]  required: (Int, ((List[Int], Double), Double))
[error]     val contribs = joined.flatMap{case(numPage,(pageList,size), rank) => pageList.map( y:Int=>(y,size) )}
[error]                                       ^

How can I make my code compile?

The joined RDD has type RDD[(Int, ((List[Int], Double), Double))], so when applying flatMap to this RDD you need a pattern of the same shape, i.e. (numPage, ((pageList, size), rank)), not (numPage, (pageList, size), rank): the value is a pair whose first element is itself a pair, not a flat three-element tuple. Also, you don't need to annotate the type (Int) in the map function's lambda. You can change the contribs RDD to the following:

val contribs: RDD[(Int, Double)] = 
    joined.flatMap{case (numPage,((pageList,size), rank)) => 
    pageList.map(y => (y , size) )}
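
To see the fix in isolation without a Spark cluster, here is a minimal sketch of the same join-then-flatMap shape using plain Scala collections instead of RDDs (the names mirror the question; the groupBy at the end is only an assumed stand-in for Spark's reduceByKey):

```scala
// Minimal non-Spark sketch: the joined value is a pair-of-pairs,
// ((pageList, size), rank), so the flatMap pattern must nest accordingly.
object ContribsSketch {
  def computeContribs(tuples: List[(Int, List[Int])],
                      ranks: Map[Int, Double]): Map[Int, Double] = {
    // (numpage, (list, 1.0 / list.size)) -- same shape as x in the question
    val x = tuples.map { case (numpage, list) => (numpage, (list, 1.0 / list.size)) }
    // join on the key, producing (numPage, ((pageList, size), rank))
    val joined = x.collect { case (numPage, v) if ranks.contains(numPage) =>
      (numPage, (v, ranks(numPage)))
    }
    // nested pattern: ((pageList, size), rank), not a flat triple
    val contribs = joined.flatMap { case (_, ((pageList, size), _)) =>
      pageList.map(y => (y, size))
    }
    // stand-in for reduceByKey(_ + _)
    contribs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }
  }

  def main(args: Array[String]): Unit = {
    val tuples = List(1 -> List(2, 3), 2 -> List(1))
    val ranks  = Map(1 -> 1.0, 2 -> 1.0)
    // page 1 links to 2 and 3 (each gets 0.5); page 2 links to 1 (gets 1.0)
    println(computeContribs(tuples, ranks))
  }
}
```

Writing the flat triple (numPage, (pageList, size), rank) here would fail with the same "constructor cannot be instantiated to expected type" error, because Tuple3 can never match a Tuple2 value.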

