
Mapping cassandra row to parametrized type in Spark RDD

I'm trying to use the spark-cassandra-connector to map a cassandra row to a parametrized type. I've been trying to define the mapping using an implicitly defined columnMapper, thusly:

import scala.reflect.ClassTag

import com.datastax.spark.connector.mapper.JavaBeanColumnMapper
import com.datastax.spark.connector.rdd.CassandraTableScanRDD
import com.datastax.spark.connector.rdd.reader.RowReaderFactory

class Foo[T <: Bar : ClassTag : RowReaderFactory] {
  implicit object Mapper extends JavaBeanColumnMapper[T](
    Map("id" -> "id",
        "timestamp" -> "ts"))

  def doSomeStuff(operations: CassandraTableScanRDD[T]): Unit = {
    println("do some stuff here")
  }
}

However, I'm running into the following error, which I believe is because I'm passing in a RowReaderFactory without correctly specifying the mapping for the RowReaderFactory. Any idea how to specify the mapping information for a RowReaderFactory?

Exception in thread "main" java.lang.IllegalArgumentException: Failed to map constructor parameter timestamp in Bar to a column of MyNamespace
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78)
    at scala.Option.getOrElse(Option.scala:120)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:78)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:76)
    at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper.columnMapForReading(DefaultColumnMapper.scala:76)
    at com.datastax.spark.connector.rdd.reader.GettableDataToMappedTypeConverter.<init>(GettableDataToMappedTypeConverter.scala:56)
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader.<init>(ClassBasedRowReader.scala:23)
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:48)
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:43)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.rowReader(CassandraTableRowReaderProvider.scala:48)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader$lzycompute(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:147)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)

It turns out that the columnMapper has to be created in the scope in which the instance of Foo is created, not in Foo itself.

You can define the implicit in Foo's companion object, as follows:

object Foo {
  // Note: a bare T is not in scope inside the companion object, so the
  // mapper is declared for the concrete bean class, Bar.
  implicit object Mapper extends JavaBeanColumnMapper[Bar](
    Map("id" -> "id",
        "timestamp" -> "ts"))
}

Scala will look in a class's companion object for implicit instances for that class. You can define the implicit in the scope where it is needed, if you want to, but you'll probably want to add the companion object so you don't have to repeat it everywhere it's necessary.
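To make the fix concrete, here is a minimal sketch of a call site. This is an illustration, not the original poster's code: the Bar bean, the keyspace my_keyspace, and the table bar_table are hypothetical stand-ins, and the implicit derivation is assumed to follow the 1.x-era spark-cassandra-connector API that the stack trace points to.

import com.datastax.spark.connector._  // provides sc.cassandraTable
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical JavaBean targeted by the "id" -> "id", "timestamp" -> "ts"
// mapping above; the properties correspond to Cassandra columns "id" and "ts".
class Bar extends Serializable {
  private var id: String = _
  private var timestamp: Long = _
  def getId: String = id
  def setId(id: String): Unit = { this.id = id }
  def getTimestamp: Long = timestamp
  def setTimestamp(timestamp: Long): Unit = { this.timestamp = timestamp }
}

object Example {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("foo-example")
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext(conf)

    // Importing the mapper (or defining it in this scope) makes the implicit
    // JavaBeanColumnMapper[Bar] visible, so the connector can derive the
    // RowReaderFactory[Bar] required by both cassandraTable and Foo's
    // context bound.
    import Foo.Mapper
    val rdd = sc.cassandraTable[Bar]("my_keyspace", "bar_table")
    new Foo[Bar].doSomeStuff(rdd)

    sc.stop()
  }
}

The key point is the one the answer makes: the mapper has to be visible where the RowReaderFactory is resolved, that is, where the RDD and the Foo[Bar] are created, not inside Foo itself.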

