
Apache Spark and Scala, error while executing queries

I am working with a dataset; a sample of it looks like this:

"age";"job";"marital";"education";"default";"balance";"housing";"loan";"contact";"day";"month";"duration";"campaign";"pdays";"previous";"poutcome";"y"
58;"management";"married";"tertiary";"no";2143;"yes";"no";"unknown";5;"may";261;1;-1;0;"unknown";"no"
44;"technician";"single";"secondary";"no";29;"yes";"no";"unknown";5;"may";151;1;-1;0;"unknown";"no"

I have successfully executed the following commands:

import org.apache.spark.sql._
import org.apache.spark.sql.types._
import spark.sqlContext.implicits._
val data = sc.textFile("file:///C:/Users/Desktop/bank-full-Copy.csv")
data.map(x => x.split(";(?=([^\"]*\"[^\"]*\")*[^\"]*$)",-1))
val header = data.first()
val filtered = data.filter(x => x(0)!= header(0))
val rdds = filtered.map(x => Row(x(0).toInt,
x(1),
x(2),
x(3),
x(4),
x(5).toInt,
x(6),
x(7),
x(8),
x(9).toInt,
x(10),
x(11).toInt,
x(12).toInt,
x(13).toInt,
x(14).toInt,
x(15),
x(16) ))
val schema = StructType( List(StructField("age", IntegerType, true),
StructField("job", StringType, true) ,
StructField("marital", StringType, true),
StructField("education", StringType, true) ,
StructField("default", StringType, true),
StructField("balance", IntegerType, true) ,
StructField("housing", StringType, true) ,
StructField("loan", StringType, true) ,
StructField("contact", StringType, true) ,
StructField("day", IntegerType, true) ,
StructField("month", StringType, true) ,
StructField("duration", IntegerType, true) ,
StructField("campaign", IntegerType, true) ,
StructField("pdays", IntegerType, true) ,
StructField("previous", IntegerType, true) ,
StructField("poutcome", StringType, true) ,
StructField("y", StringType, true)) )
val df = spark.sqlContext.createDataFrame(rdds, schema)

When I run the query below, I get the following error:

df.groupBy("age","y").count.show()

java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: java.lang.Character is not a valid external type for schema of string

I hit the same error when executing any query on this data. Could you take a look and suggest a solution?
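
For context, the error most likely comes from the line data.map(x => x.split(...)): its result is never assigned, so filtered is still an RDD[String]. Indexing a String with x(0) returns a Char, and the resulting Row objects then carry java.lang.Character values where the schema expects strings. A minimal sketch of the corrected parsing, reusing data, schema and the Spark shell session from the question (the quote-stripping helper strip is an illustrative addition, not part of the original code):

import org.apache.spark.sql.Row

// Keep the split result this time, so every record becomes an Array[String]
val parsed = data.map(_.split(";(?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1))

// Drop the header row by comparing the first field of each record
val header = parsed.first()
val filtered = parsed.filter(x => x(0) != header(0))

// Remove the surrounding double quotes from the quoted text fields before building Rows
def strip(s: String): String = s.replaceAll("\"", "")

val rdds = filtered.map(x => Row(
  x(0).toInt,  strip(x(1)), strip(x(2)), strip(x(3)), strip(x(4)),
  x(5).toInt,  strip(x(6)), strip(x(7)), strip(x(8)),
  x(9).toInt,  strip(x(10)),
  x(11).toInt, x(12).toInt, x(13).toInt, x(14).toInt,
  strip(x(15)), strip(x(16))))

val df = spark.sqlContext.createDataFrame(rdds, schema)
df.groupBy("age", "y").count().show()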

If you want to skip the extra RDD code, you can use the code below.

Input csv file (;-delimited, one record per line):

"age";"job";"marital";"education";"default";"balance";"housing";"loan";"contact";"day";"month";"duration";"campaign";"pdays";"previous";"poutcome";"y"
58;"management";"married";"tertiary";"no";2143;"yes";"no";"unknown";5;"may";261;1;-1;0;"unknown";"no"
44;"technician";"single";"secondary";"no";29;"yes";"no";"unknown";5;"may";151;1;-1;0;"unknown";"no"
  • Define the struct schema
  • Read the ;-delimited file
  • Read the csv directly into a DataFrame with header=true and the predefined schema
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object ProcessSemiColonCsv {

  def main(args: Array[String]): Unit = {

    // Build a local SparkSession (the original answer used a project helper, Constant.getSparkSess)
    val spark = SparkSession.builder()
      .appName("ProcessSemiColonCsv")
      .master("local[*]")
      .getOrCreate()

    val schema = StructType( List(StructField("age", IntegerType, true),
      StructField("job", StringType, true) ,
      StructField("marital", StringType, true),
      StructField("education", StringType, true) ,
      StructField("default", StringType, true),
      StructField("balance", IntegerType, true) ,
      StructField("housing", StringType, true) ,
      StructField("loan", StringType, true) ,
      StructField("contact", StringType, true) ,
      StructField("day", IntegerType, true) ,
      StructField("month", StringType, true) ,
      StructField("duration", IntegerType, true) ,
      StructField("campaign", IntegerType, true) ,
      StructField("pdays", IntegerType, true) ,
      StructField("previous", IntegerType, true) ,
      StructField("poutcome", StringType, true) ,
      StructField("y", StringType, true)) )

    val df = spark.read
      .option("delimiter", ";")
      .option("header", "true")
      .schema(schema)
      .csv("src/main/resources/SemiColon.csv")

    df.show()
    df.printSchema()
  }

}
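
With the DataFrame built this way, Spark's CSV reader strips the quote characters and applies the types from the schema, so the aggregation from the question runs directly, for example:

df.groupBy("age", "y").count().show()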
