
Scala enum - java.lang.UnsupportedOperationException

When my Spark UDF returns a Scala `Enumeration` value, I am getting:

java.lang.UnsupportedOperationException: Schema for type Range.Value is not supported.

Appreciate any pointers on this.

object Range extends Enumeration {
  type Range = Value

  val RangeMedium = Value("Range Medium")
  val RangeHigh = Value("Range Higher")
  val RangeNotEnough = Value("Range Not enough")
  val NotApplicable = Value("Not Applicable")

}



val getRange = udf((p1: Double, p2: Double) => {
    if (p1 >= 5 && p1 < 10 && p2 >= 1) {
      Some(Range.RangeMedium)
    }
    else if (p1 >= 10 && p2 >= 1) {
      Some(Range.RangeHigh)
    }
    else {
      Some(Range.NotApplicable)
    }
  })

val ds = Seq((9, 10)).toDF("p1", "p2")

ds.withColumn("level",getRange($"p1",$"p2")).show()

If you're returning a string from the UDF, you can try converting the enum value to a string with `.toString`:

object Range extends Enumeration {
  type Range = Value

  val RangeMedium = Value("Range Medium")
  val RangeHigh = Value("Range Higher")
  val RangeNotEnough = Value("Range Not enough")
  val NotApplicable = Value("Not Applicable")
}

val getRange = udf((p1: Double, p2: Double) => {
    if (p1 >= 5 && p1 < 10 && p2 >= 1) {
      Range.RangeMedium.toString
    }
    else if (p1 >= 10 && p2 >= 1) {
      Range.RangeHigh.toString
    }
    else {
      Range.NotApplicable.toString
    }
})

val ds = Seq((9,10)).toDF("p1","p2")

ds.withColumn("level",getRange($"p1",$"p2")).show()
+---+---+------------+
| p1| p2|       level|
+---+---+------------+
|  9| 10|Range Medium|
+---+---+------------+

Having said that, this kind of operation is also possible using `when` expressions instead of a UDF, which should be more performant.
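The `when`-based alternative might look like the sketch below. It assumes the same `Range` enumeration and `ds` DataFrame as above, plus an active `SparkSession` with `spark.implicits._` in scope; the column logic mirrors the UDF's thresholds, but as Catalyst expressions that Spark can optimize without serializing a Scala closure:

```scala
import org.apache.spark.sql.functions.{col, lit, when}

// Same branching as the UDF, expressed as a chained when/otherwise column.
val level =
  when(col("p1") >= 5 && col("p1") < 10 && col("p2") >= 1,
       lit(Range.RangeMedium.toString))
    .when(col("p1") >= 10 && col("p2") >= 1,
          lit(Range.RangeHigh.toString))
    .otherwise(lit(Range.NotApplicable.toString))

ds.withColumn("level", level).show()
```

Because the result column is built from plain string literals, the enum-schema problem never arises, and Spark can push the whole expression into its query plan.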

