
I am getting an error saying "No implicit argument of type: Encoder[ ]" in Spark

I get an error saying "No implicit argument of type: Encoder[Movies]". Can you please tell me where I am going wrong, as I am new to Spark?

I am trying to read a movies file and convert it to a Dataset with a first 'ID' column and a second 'name of the movie' column.

import org.apache.spark.sql.SparkSession

object Practice {
    def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .appName("dataFrameExample")
          .master("local")
          .getOrCreate()

        case class Movies(ID: String, name: String)

        val ds1 = spark.read
         .format("text")
         .option("header", "true") //first line in file has headers
         .load("C:\\SparkScala\\SparkScalaStudy\\movies").as[Movies]

        ds1.printSchema()
    }
}

You need to move the case class Movies out of the main function and add import spark.implicits._ before ds1.
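
For reference, here is a minimal sketch of the corrected program. One assumption: the original read uses format("text"), which produces a single 'value' column, so format("csv") is used here instead to match a two-column file with a header whose column names match the case class fields.

import org.apache.spark.sql.SparkSession

// Defined at the top level (outside main) so Spark can derive an Encoder for it
case class Movies(ID: String, name: String)

object Practice {
    def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .appName("dataFrameExample")
          .master("local")
          .getOrCreate()

        // Brings the implicit Encoder[Movies] into scope for .as[Movies]
        import spark.implicits._

        val ds1 = spark.read
          .format("csv") // assumption: csv instead of text, since a text source yields only a 'value' column
          .option("header", "true") // first line in file has headers
          .load("C:\\SparkScala\\SparkScalaStudy\\movies")
          .as[Movies]

        ds1.printSchema()
    }
}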

You can import the SparkSession implicits to solve the problem, or you can write your own implicits in an object as follows:

import org.apache.spark.sql.{Encoder, Encoders}

object CustomImplicits {
  // Derives an Encoder for the Movies case class from its Product structure
  implicit val movieEncoder: Encoder[Movies] = Encoders.product[Movies]
}

Then simply import the implicit in your main method:

import package.containing.implicits.CustomImplicits._ // placeholder path for wherever CustomImplicits lives
import org.apache.spark.sql.SparkSession
object Practice {
    def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .appName("dataFrameExample")
          .master("local")
          .getOrCreate()

        val ds1 = spark.read
         .format("text")
         .option("header", "true") //first line in file has headers
         .load("C:\\SparkScala\\SparkScalaStudy\\movies").as[Movies]

        ds1.printSchema()
    }
}

Using Encoders, you can enforce the schema on your Dataset, since appropriate errors are raised if the schema is violated.
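
To illustrate this, here is a minimal, self-contained sketch (the object name and sample data are made up for the example) showing that .as[Movies] fails with an AnalysisException when the underlying columns do not match the case class:

import org.apache.spark.sql.{AnalysisException, SparkSession}

case class Movies(ID: String, name: String)

object SchemaEnforcementDemo {
    def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("schemaDemo").master("local").getOrCreate()
        import spark.implicits._

        // Columns match the case class fields, so .as[Movies] succeeds
        val ok = Seq(("1", "Toy Story")).toDF("ID", "name").as[Movies]
        ok.printSchema()

        // The 'name' column is missing, so .as[Movies] fails at analysis time
        try {
            Seq("1", "2").toDF("ID").as[Movies]
        } catch {
            case e: AnalysisException => println(s"Schema violation: ${e.getMessage}")
        }

        spark.stop()
    }
}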
