
Value type is binary after a Spark Dataset mapGroups operation, even though the function returns a String

Environment:

Spark version: 2.3.0
Run Mode: Local
Java version: Java 8

The Spark application tries to do the following:

1) Convert the input data into a Dataset[GenericRecord]

2) Group by the key property of the GenericRecord

3) Use mapGroups after the grouping to iterate over the value list and produce a result in String format

4) Output the result as a String to a text file.

The error happens when writing the text file. Spark deduced that the Dataset generated in step 3 has a binary column, not a String column, even though the mapGroups function actually returns a String.

Is there a way to convert the column data type, or to let Spark know that it is actually a string column and not binary?


    val dslSourcePath = args(0)
    val filePath = args(1)
    val targetPath = args(2)
    val df = spark.read.textFile(filePath)

    implicit def kryoEncoder[A](implicit ct: ClassTag[A]): Encoder[A] = Encoders.kryo[A](ct)

    val mapResult = df.flatMap(abc => {
      // somehow return a java.util.List of Avro GenericRecord using a Java library
      JavaConversions.asScalaBuffer(???).seq
    })

    val groupResult = mapResult.groupByKey(result => String.valueOf(result.get("key")))
      .mapGroups((key, valueList) => {
        val result = StringBuilder.newBuilder.append(key).append(",").append(valueList.count(_=>true))
        result.toString()
      })

    groupResult.printSchema()

    groupResult.write.text(targetPath + "-result-" + System.currentTimeMillis())


And the printed schema says the value column is binary:

root
 |-- value: binary (nullable = true)

Spark then raises an error that it cannot write binary data as text:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Text data source supports only a string column, but you have binary.;
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat.verifySchema(TextFileFormat.scala:55)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat.prepareWrite(TextFileFormat.scala:78)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:140)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
    at org.apache.spark.sql.DataFrameWriter.text(DataFrameWriter.scala:595)

As @user10938362 pointed out, the cause is that the following implicit encodes all data, including String, to bytes with Kryo:

implicit def kryoEncoder[A](implicit ct: ClassTag[A]): Encoder[A] = Encoders.kryo[A](ct)

Replacing it with the following restricts Kryo encoding to GenericRecord only:

implicit def kryoEncoder: Encoder[GenericRecord] = Encoders.kryo
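
For reference, here is a minimal, self-contained sketch of the fix. The object name EncoderScopeSketch and the sample data are invented for illustration; the point is that once the implicit Kryo encoder is scoped to GenericRecord, the String returned from mapGroups picks up Spark's built-in string encoder and the column comes out as string instead of binary:

    import org.apache.avro.generic.GenericRecord
    import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

    // Minimal sketch (not the original application): checks that scoping the
    // Kryo encoder to GenericRecord leaves String on the default encoder.
    object EncoderScopeSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[*]")
          .appName("encoder-scope")
          .getOrCreate()
        import spark.implicits._

        // Kryo is used only for GenericRecord; String keeps Spark's built-in encoder.
        implicit def genericRecordEncoder: Encoder[GenericRecord] = Encoders.kryo

        // Made-up sample data standing in for the real Avro records.
        val ds = spark.createDataset(Seq("a,1", "a,2", "b,3"))
        val counted = ds
          .groupByKey(_.split(",")(0))
          .mapGroups((key, values) => s"$key,${values.size}")

        counted.printSchema()   // root |-- value: string (nullable = true)
        spark.stop()
      }
    }

Running this locally should print a string-typed value column, which the text data source accepts, so the original job only needs the narrower implicit and can keep everything else unchanged.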
