
No RowReaderFactory can be found for this type error when trying to map Cassandra row to case object using spark-cassandra-connector

I am trying to get a simple example working that maps rows from Cassandra to a Scala case class, using Apache Spark 1.1.1, Cassandra 2.0.11, and spark-cassandra-connector (v1.1.0). I have reviewed the documentation on the spark-cassandra-connector GitHub page, planetcassandra.org, and datastax, and searched around generally, but have not found anyone else running into this issue. So here goes...

I am building a tiny Spark application with sbt (0.13.5) and Scala 2.10.4, on Spark 1.1.1 against Cassandra 2.0.11. Modeling off the examples in the spark-cassandra-connector documentation, the following two lines raise an error in my IDE and fail to compile.

case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray

The simple error presented by Eclipse is:

No RowReaderFactory can be found for this type

The compile error is only slightly more verbose:

> compile
[info] Compiling 1 Scala source to /home/bkarels/dev/simple-case/target/scala-2.10/classes...
[error] /home/bkarels/dev/simple-case/src/main/scala/com/bradkarels/simple/SimpleApp.scala:82: No RowReaderFactory can be found for this type
[error]     val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
[error]                                          ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 1 s, completed Dec 10, 2014 9:01:30 AM
>

The Scala source:

package com.bradkarels.simple

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._
// Likely don't need this import - but throwing darts hits the bullseye once in a while...
import com.datastax.spark.connector.rdd.reader.RowReaderFactory

object CaseStudy {

  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")

    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
  }
}

With the troublesome lines removed, everything compiles fine, assembly works, and I can perform other Spark operations normally. For example, if I remove the problem lines and insert:

val rdd:CassandraRDD[CassandraRow] = sc.cassandraTable("nicecase", "human")

I get back the RDD and can work with it as expected. That said, I doubt that my sbt project, the assembly plugin, etc. are contributing to the problem. The working source (minus the new attempt to map to a case class, where the connector works as expected) can be found on github here.
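For illustration, a minimal sketch (assuming the same nicecase.human table as above) of mapping that generic form by hand, using CassandraRow's typed getters rather than the implicit RowReaderFactory derivation:

// Sketch: pull plain CassandraRow objects back and read columns with the
// typed getters (getString, getBoolean), sidestepping the implicit lookup.
val rows: CassandraRDD[CassandraRow] = sc.cassandraTable("nicecase", "human")
val humans = rows.map { row =>
  (row.getString("id"),
   row.getString("firstname"),
   row.getString("lastname"),
   row.getBoolean("isGoodPerson"))
}
humans.collect().foreach(println)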

However, in the interest of thoroughness, my build.sbt:

name := "Simple Case"

version := "0.0.1"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "1.1.1",
    "org.apache.spark" %% "spark-sql" % "1.1.1",
    "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0" withSources() withJavadoc()
  )

So the question is: what am I missing? Hopefully this is something silly, but if you have run into this and can help me get past this puzzling little issue, I would greatly appreciate it. Please let me know if there are any other details that would be helpful in troubleshooting.

Thanks.

This may be telling of my newness with Scala, but I resolved this issue by moving the case class declaration out of the main method, presumably because the implicit RowReaderFactory derivation cannot handle a case class declared locally inside a method. So the simplified source now looks like this:

package com.bradkarels.simple

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._

object CaseStudy {

  case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)

  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")

    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
  }
}

The complete source (updated & fixed) can be found on github: https://github.com/bradkarels/spark-cassandra-to-scala-case-class
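As a side note, another way to dodge the implicit lookup entirely (a sketch, relying on the connector's documented support for reading rows as Scala tuples) is to map to a tuple first and convert afterwards:

// Tuples come with a built-in RowReaderFactory, so this compiles even when
// the case class is declared locally; the conversion to SubHuman is explicit.
val tuples = sc.cassandraTable[(String, String, String, Boolean)]("nicecase", "human")
  .select("id", "firstname", "lastname", "isGoodPerson")
val subHumans = tuples.map { case (id, fn, ln, good) => SubHuman(id, fn, ln, good) }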
