
java.lang.Long and scala.Long

I don't know what's happening in my code...

The compiler error log is here:

[error] blahblah\SampleApp.scala:22:53: overloaded method value reduce with alternatives:
[error]   (func: org.apache.spark.api.java.function.ReduceFunction[java.lang.Long])java.lang.Long <and>
[error]   (func: (java.lang.Long, java.lang.Long) => java.lang.Long)java.lang.Long
[error]  cannot be applied to ((java.lang.Long, java.lang.Long) => scala.Long)
[error]     val sumHundred = sparkSession.range(start, end).reduce(_ + _)

When I ran this code with Scala 2.11.12 and Spark 2.3.2, it worked without any error. The same code with Scala 2.12.7 and Spark 2.4.0 doesn't work. What?

Does anybody know about this?

  private val (start, end) = (1, 101)

  def main(args: Array[String]): Unit = {
    // sparkSession comes from a parent trait; this is the line that fails on 2.12
    val sumHundred = sparkSession.range(start, end).reduce(_ + _)
    logger.debug(f"Sum 1 to 100 = $sumHundred")
    close()
  }

There's a parent trait that builds sparkSession, etc.

What I've tried:

  1. Explicit declaration of the type:
    private val (start: Long, end: Long) = ...
  2. Similar things in the reduce code.

What I know: scala.Long and java.lang.Long are perfectly compatible.
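For example, scala.Predef provides implicit conversions in both directions, so plain assignments box and unbox fine (a minimal sketch, outside of Spark):

  val boxed: java.lang.Long = 42L                        // via Predef.long2Long
  val unboxed: scala.Long = java.lang.Long.valueOf(42L)  // via Predef.Long2long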

It has nothing to do with the Spark version. It is due to differences in the Scala implementation between 2.11 and 2.12. You can see what the code actually looks like for the line

val sumHundred = sparkSession.range(start, end).reduce(_ + _)

in Scala 2.11 (with the scala.this.Predef.long2Long conversion applied):

val sumHundred: Long = sparkSession.range(start.toLong, end.toLong).reduce(((x$2: Long, x$3: Long) => scala.this.Predef.long2Long(scala.this.Predef.Long2long(x$2).+(scala.this.Predef.Long2long(x$3)))));

and in Scala 2.12 (implicit conversions are not applied):

val <sumHundred: error>: <error> = sparkSession.range(start.toLong, end.toLong).<reduce: error>(((x$2: Long, x$3: Long) => x$2.$plus(x$3)));

Your code will compile if you add the flag scalacOptions += "-Xsource:2.11".
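For example, in build.sbt (assuming an sbt build, which the [error] output above suggests):

  // build.sbt
  scalacOptions += "-Xsource:2.11"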

This page has more info: SAM conversion precedes implicits.
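Here is a Spark-free sketch of the same overload shape (illustrative and untested; MyReduceFunction is a hypothetical stand-in for Spark's Java ReduceFunction interface):

  trait MyReduceFunction[T] { def call(a: T, b: T): T }

  object Repro {
    def reduce(f: MyReduceFunction[java.lang.Long]): java.lang.Long = f.call(1L, 2L)
    def reduce(f: (java.lang.Long, java.lang.Long) => java.lang.Long): java.lang.Long = f(1L, 2L)

    // `_ + _` is inferred as (java.lang.Long, java.lang.Long) => scala.Long.
    // 2.11 adapts the result with Predef.long2Long; 2.12, where SAM conversion
    // takes part in overload resolution, should reject it with the same kind
    // of error as above, so the call is left commented out here:
    // val sum = reduce(_ + _)
  }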

PS. I would say the main source of amusement here is the SparkSession.range() method, which takes Scala Long parameters and returns a Java Long value:

  def range(start: Long, end: Long): Dataset[java.lang.Long] = {

I would say it would be more consistent to choose one of them.
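If you prefer fixing the call site instead of adding the compiler flag, one sketch (untested; assumes sparkSession's implicits are in scope) is to move the Dataset to scala.Long before reducing:

  import sparkSession.implicits._   // provides the Encoder[scala.Long]

  val sumHundred = sparkSession.range(start, end)
    .as[Long]                       // Dataset[java.lang.Long] -> Dataset[scala.Long]
    .reduce(_ + _)                  // (Long, Long) => Long matches the Scala overload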
