
Reading multiline JSON using the Spark Dataset API

I want to join two Datasets using the join() method, but I don't understand how to specify the join condition or the join column names.

public static void main(String[] args) {
        SparkSession spark = SparkSession
                  .builder()
                  .appName("Java Spark SQL basic example")
                  .master("spark://10.127.153.198:7077")
                  .getOrCreate();

        List<String> list = Arrays.asList("partyId");

        Dataset<Row> df = spark.read().text("C:\\Users\\phyadavi\\LearningAndDevelopment\\Spark-Demo\\data1\\alert.json");
        Dataset<Row> df2 = spark.read().text("C:\\Users\\phyadavi\\LearningAndDevelopment\\Spark-Demo\\data1\\contract.json");

        df.join(df2,JavaConversions.asScalaBuffer(list)).show();


//      df.join(df2, "partyId").show();

    }

When I execute the above code, I get this error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: USING column `partyId` cannot be resolved on the left side of the join. The left-side columns: [value];
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$90$$anonfun$apply$56.apply(Analyzer.scala:1977)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$90$$anonfun$apply$56.apply(Analyzer.scala:1977)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$90.apply(Analyzer.scala:1976)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$90.apply(Analyzer.scala:1975)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$commonNaturalJoinProcessing(Analyzer.scala:1975)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveNaturalAndUsingJoin$$anonfun$apply$31.applyOrElse(Analyzer.scala:1961)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveNaturalAndUsingJoin$$anonfun$apply$31.applyOrElse(Analyzer.scala:1958)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveNaturalAndUsingJoin$.apply(Analyzer.scala:1958)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveNaturalAndUsingJoin$.apply(Analyzer.scala:1957)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:64)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:62)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:2822)
    at org.apache.spark.sql.Dataset.join(Dataset.scala:775)
    at org.apache.spark.sql.Dataset.join(Dataset.scala:748)
    at com.cisco.cdx.batch.JsonDataReader.main(JsonDataReader.java:27)

Both JSON files have a "partyId" column. Please help.

Data:

Both JSON files have a "partyId" column, but when I join the two datasets, Spark cannot resolve it. Am I missing something here?

Alerts.json

{
    "sourcePartyId": "SmartAccount_700001",
    "sourceSubPartyId": "",
    "partyId": "700001",
    "managedndjn": "BIZ_KEY_999001",
    "neAlert": {
        "data1": [{
            "sni": "c1f44bb6-e429-11e7-9afc-64609ee945d1"
        }],
        "daa2": [{
            "sni": "c1f44bb6-e429-11e7-9afc-64609ee945d1"
        }],
        "data3": [{
            "sni": "c1f44bb6-e429-11e7-9afc-64609ee945d1",
            "ndjn": "999001"
        }],
        "advisory": [{
            "sni": "c1f44bb6-e429-11e7-9afc-64609ee945d1",
            "ndjn": "999001"
        }]
    }
}

Contracts.json

{
  "sourceSubPartyId": "",
  "partyId": "700001",
  "neContract": {
    "serialNumber": "FCH2013V245",
    "productId": "FS4000-K9",
    "coverageInfo": [
      {
        "billToCity": "Delhi",
        "billToCountry": "India",
        "billToPostalCode": "260001",
        "billToProvince": "",
        "slaCode": "1234"
      }
    ]
  }
}

However, when I read the file with the approach below, I am able to print the data.

// Read the whole file as a single (path, content) pair instead of line by line.
JavaRDD<Tuple2<String, String>> javaRDD = spark.sparkContext()
        .wholeTextFiles("C:\\Users\\phyadavi\\LearningAndDevelopment\\Spark-Demo\\data1\\alert.json", 1)
        .toJavaRDD();
List<Tuple2<String, String>> collect = javaRDD.collect();
collect.forEach(x -> {
    System.out.println(x._1);   // file path
    System.out.println(x._2);   // full file contents
});

The problem is that you are trying to read the JSON files as plain text with spark.read().text().

If you want to read a JSON file directly into a DataFrame, you need to use

spark.read().json()

If the data is multiline, then you need to add the following option:

spark.read.option("multiline", "true").json()

That is why you cannot access the columns in the join.
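Here is a minimal Java sketch of the difference (assuming the SparkSession spark from the question and a placeholder path): reading the file as text yields a single string column named value, while reading it as multiline JSON exposes the real columns such as partyId.

Dataset<Row> asText = spark.read()
        .text("C:\\path\\to\\alert.json");       // placeholder path
asText.printSchema();                             // root |-- value: string

Dataset<Row> asJson = spark.read()
        .option("multiLine", true)
        .json("C:\\path\\to\\alert.json");        // placeholder path
asJson.printSchema();                             // root |-- partyId: string, |-- neAlert: struct, ...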

Another approach is to read the file as a text file and convert it to JSON:

val jsonRDD = sc.wholeTextFiles("path to json").map(x => x._2)

spark.sqlContext.read.json(jsonRDD)
    .show(false)
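A rough Java equivalent of that Scala snippet, assuming the same SparkSession spark as above and a placeholder path, could look like this (createDataset and Encoders.STRING() turn the whole-file strings into a Dataset<String> that the JSON reader accepts):

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;

// Read each file as a single (path, content) pair and keep only the content.
JavaRDD<String> jsonRDD = spark.sparkContext()
        .wholeTextFiles("C:\\path\\to\\alert.json", 1)   // placeholder path
        .toJavaRDD()
        .map(t -> t._2);

// Parse the whole-file strings as JSON records.
Dataset<String> jsonDS = spark.createDataset(jsonRDD.rdd(), Encoders.STRING());
Dataset<Row> df = spark.read().json(jsonDS);
df.show(false);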

After converting the JSON to a single line, the issue was resolved, so I wanted to post my answer.

public class JsonDataReader {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("Java Spark SQL basic example")
                .master("spark://192.168.0.2:7077").getOrCreate();

//      JavaRDD<Tuple2<String, String>> javaRDD = spark.sparkContext().wholeTextFiles("C:\\\\Users\\\\phyadavi\\\\LearningAndDevelopment\\\\Spark-Demo\\\\data1\\\\alert.json", 1).toJavaRDD();

        Seq<String> joinColumns = scala.collection.JavaConversions
                  .asScalaBuffer(Arrays.asList("partyId","sourcePartyId", "sourceSubPartyId", "wfid", "generatedAt", "collectorId"));

        Dataset<Row> df = spark.read().option("multiLine",true).option("mode", "PERMISSIVE")
                .json("C:\\Users\\phyadavi\\LearningAndDevelopment\\Spark-Demo\\data1\\alert.json");
        Dataset<Row> df2 = spark.read().option("multiLine", true).option("mode", "PERMISSIVE")
                .json("C:\\Users\\phyadavi\\LearningAndDevelopment\\Spark-Demo\\data1\\contract.json");

        Dataset<Row> finalDS = df.join(df2, joinColumns,"inner");
        finalDS.write().mode(SaveMode.Overwrite).json("C:\\Users\\phyadavi\\LearningAndDevelopment\\Spark-Demo\\data1\\final.json");

//      List<Tuple2<String, String>> collect = javaRDD.collect();
//      collect.forEach(x -> {
//          System.out.println(x._1);
//          System.out.println(x._2);
//      });

    }

}

However, @ShankarKoirala's answer is more precise and worked for me, so I accepted that answer.
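For completeness, once both files are loaded with spark.read().json(), the join from the original question can be expressed in either of the two usual Java forms below (a sketch using the df and df2 Datasets from the code above):

// USING-column join: both Datasets must contain "partyId"; it appears once in the result.
Dataset<Row> joinedByColumn = df.join(df2, "partyId");

// Explicit-condition join: reference each side's column; both copies remain in the result.
Dataset<Row> joinedByCondition = df.join(df2,
        df.col("partyId").equalTo(df2.col("partyId")), "inner");
joinedByColumn.show(false);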
