Apache-Spark UDF defined inside object raises "No TypeTag available for String"

Copy-pasting the same function behaves differently in an interactive session than when compiling it with sbt.

Minimal, complete, and verifiable example:

$ sbt package 
[error] src/main/scala/xxyy.scala:6: No TypeTag available for String
[error]     val correctDiacritics = udf((s: scala.Predef.String) => {
[error]                                ^
[error] two errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 9 s, completed May 22, 2018 2:22:52 PM
$ cat src/main/scala/xxyy.scala 
package xxx.yyy
import org.apache.spark.sql.functions.udf
object DummyObject {
    val correctDiacritics = udf((s: scala.Predef.String) => {
            s.replaceAll("è","e")
            .replaceAll("é","e")
            .replaceAll("à","a")
            .replaceAll("ç","c")
            })
}

The preceding code fails to compile. However, in an interactive session:

// During the `spark-shell` session.
// Entering paste mode (ctrl-D to finish)
import org.apache.spark.sql.functions.udf
object DummyObject {
val correctDiacritics = udf((s: scala.Predef.String) => {
    s.replaceAll("è","e")
    .replaceAll("é","e")
    .replaceAll("à","a")
    .replaceAll("ç","c")
})
}
// Exiting paste mode, now interpreting.
// import org.apache.spark.sql.functions.udf
// defined object DummyObject
// Proceeds successfully.
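For illustration, a minimal sketch of how the UDF could then be applied in the same spark-shell session (the sample data and column name here are made up):

// `spark` is the SparkSession provided by spark-shell.
import spark.implicits._
val df = Seq("café", "garçon", "ès").toDF("word")
// Apply the UDF from DummyObject to the "word" column.
df.select(DummyObject.correctDiacritics($"word").alias("normalized")).show()
// "café" -> "cafe", "garçon" -> "garcon", "ès" -> "es"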

Versions:

  • I am using Scala 2.11

  • I am using Spark 2.1.0

  • build.sbt:

     name := "my_app" version := "0.0.1" scalaVersion := "2.11.12" resolvers ++= Seq( Resolver sonatypeRepo "public", Resolver typesafeRepo "releases" ) resolvers += "MavenRepository" at "https://mvnrepository.com/" libraryDependencies ++= Seq( // "org.apache.spark" %% "spark-core" % "2.1.0", // "org.apache.spark" %% "spark-sql" % "2.1.0", //"org.apache.spark" %% "spark-core_2.10" % "1.0.2", // "org.apache.spark" % "org.apache.spark" % "spark-sql_2.10" % "2.1.0", "org.apache.spark" % "spark-core_2.10" % "2.1.0", "org.apache.spark" % "spark-mllib_2.10" % "2.1.0" ) 


Your build definition is incorrect:

  • you build the project with Scala 2.11.12,
  • but the Spark dependencies are built for Scala 2.10.

Since Scala is not binary compatible between major versions, this results in errors.
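Concretely, the udf helper must materialize implicit TypeTag instances for the argument and return types at compile time; its signature in Spark 2.x is roughly the following (simplified from org.apache.spark.sql.functions):

def udf[RT: TypeTag, A1: TypeTag](f: A1 => RT): UserDefinedFunction

When spark-sql compiled against Scala 2.10 sits on a Scala 2.11 project's classpath, the compiler cannot resolve those implicits, which surfaces as the "No TypeTag available for String" error above.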

Instead of hard-coding the Scala version, use %%:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "2.1.0",
  "org.apache.spark" %% "spark-core" % "2.1.0",
  "org.apache.spark" %% "spark-mllib" % "2.1.0"
)
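As a sketch of what %% does: given scalaVersion := "2.11.12", %% appends the Scala binary version to the artifact name, so the following two declarations resolve to the same artifact:

// Equivalent under scalaVersion := "2.11.12":
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.0"

This way the Scala version is maintained in a single place.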

Otherwise, make sure you use the correct builds:

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.11" % "2.1.0",
  "org.apache.spark" % "spark-core_2.11" % "2.1.0",
  "org.apache.spark" % "spark-mllib_2.11" % "2.1.0"
)
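Putting it together, a corrected build.sbt for the project in the question might look like this (a sketch: it keeps the original name, version, and scalaVersion, drops the unnecessary resolvers, and adds spark-sql explicitly since the code imports org.apache.spark.sql.functions.udf):

name := "my_app"

version := "0.0.1"

scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // %% appends _2.11 to match scalaVersion above.
  "org.apache.spark" %% "spark-core" % "2.1.0",
  "org.apache.spark" %% "spark-sql" % "2.1.0",
  "org.apache.spark" %% "spark-mllib" % "2.1.0"
)

With the Scala versions consistent, sbt package should compile the UDF object from the question without the TypeTag error.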
