Scala/Spark idiomatic way to handle nulls in the Dataset?

The following code reads data from a database table and returns a Dataset[Cols].

import java.sql.Date
import org.apache.spark.sql.Dataset

case class Cols(F1: String, F2: BigDecimal, F3: Int, F4: Date, ...)

def readTable(): Dataset[Cols] = {
  import sqlContext.sparkSession.implicits._

  // Read the table over JDBC and map the selected columns onto Cols
  sqlContext.read.format("jdbc").options(Map(
    "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "url" -> jdbcSqlConn,
    "dbtable" -> s"..."
  )).load()
    .select("F1", "F2", "F3", "F4")
    .as[Cols]
}

The values may be null. Later, a runtime exception was raised when these fields were used.

val r = readTable.filter(x => (if (x.F3 > ...

What's the Scala idiomatic way to handle nulls in the Dataset?

I got this error when running the code:

java.lang.NullPointerException
        at scala.math.BigDecimal.$minus(BigDecimal.scala:563)
        at MappingPoint$$anonfun$compare$1.apply(Mapping.scala:51)
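For context, a minimal sketch of how that exception arises (the values here are made up): a SQL NULL decoded into a non-Option BigDecimal field is just a null reference, and subtracting it blows up inside BigDecimal.$minus, matching the trace above.

val a: BigDecimal = BigDecimal(1)
val b: BigDecimal = null // what a NULL column effectively yields in a non-Option field

val diff = a - b // throws java.lang.NullPointerException inside BigDecimal.$minus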

Options are the idiomatic way

case class Cols (F1: Option[String], F2: Option[BigDecimal], F3: Option[Int], F4: Option[Date], ...)
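With the fields wrapped in Option, the subtraction from the stack trace above can be written without ever touching a null reference. A hedged sketch (this diffF2 helper is hypothetical, not from the original answer):

// Hypothetical helper: subtract F2 values only when both are present
def diffF2(a: Cols, b: Cols): Option[BigDecimal] =
  for {
    x <- a.F2 // None here short-circuits the whole computation to None
    y <- b.F2
  } yield x - y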

There is a performance hit, as discussed in the Databricks style guide.

Option(null) will return None.
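A quick illustration of that behavior (values made up):

val none = Option(null) // None: Option.apply maps null to None
val some = Option(42)   // Some(42): non-null values become Some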

Thus, for instance:

val r = readTable.filter(x => (if (Option(x.F3).getOrElse(0) >
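The line above is truncated in the original; a complete sketch of the same pattern (the threshold 10 is made up) keeps the non-Option case class and wraps the possibly-null value at the point of use:

val r = readTable().filter { x =>
  // Option(...) turns a null field into None; getOrElse supplies a default
  Option(x.F3).getOrElse(0) > 10
}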
