
Why does a select after a join raise an exception in a Java Spark dataframe?

I have two dataframes, left and right. Each is composed of the same three columns (src, predicate, dst) and they contain the same values.

1 - I tried to join these dataframes with the condition that dst of left equals src of right, but it did not work. Where is the error?

Dataset<Row> r = left
  .join(right, left.col("dst").equalTo(right.col("src")));

Result:

+---+---------+---+---+---------+---+
|src|predicate|dst|src|predicate|dst|
+---+---------+---+---+---------+---+
+---+---------+---+---+---------+---+

2 - If I rename dst in left to dst2, and the src column in right to dst2, then apply the join, it works. But if I then try to select some columns from the joined dataframe, it raises an exception. Where is my mistake?

Dataset<Row> left = input_df.withColumnRenamed("dst", "dst2");
Dataset<Row> right = input_df.withColumnRenamed("src", "dst2");
Dataset<Row> r = left.join(right, left.col("dst2").equalTo(right.col("dst2")));

Then:

left.show();

gives:

+---+---------+----+
|src|predicate|dst2|
+---+---------+----+
|  a|       r1| :b1|
|  a|       r2|   k|
|:b1|       r3| :b4|
|:b1|      r10|   d|
|:b4|       r4|   f|
|:b4|       r5| :b5|
|:b5|       r9|   t|
|:b5|      r10|   e|
+---+---------+----+

right.show();

gives:

+----+---------+---+
|dst2|predicate|dst|
+----+---------+---+
|   a|       r1|:b1|
|   a|       r2|  k|
| :b1|       r3|:b4|
| :b1|      r10|  d|
| :b4|       r4|  f|
| :b4|       r5|:b5|
| :b5|       r9|  t|
| :b5|      r10|  e|
+----+---------+---+

Result:

+---+---------+----+----+---------+---+
|src|predicate|dst2|dst2|predicate|dst|
+---+---------+----+----+---------+---+
|  a|       r1| :b1| :b1|      r10|  d|
|  a|       r1| :b1| :b1|       r3|:b4|
|:b1|       r3| :b4| :b4|       r5|:b5|
|:b1|       r3| :b4| :b4|       r4|  f|
+---+---------+----+----+---------+---+


Dataset<Row> r = left
  .join(right, left.col("dst2").equalTo(right.col("dst2")))
  .select(left.col("src"),right.col("dst"));

Result:

Exception in thread "main" org.apache.spark.sql.AnalysisException: resolved attribute(s) dst#45 missing from dst2#177,src#43,predicate#197,predicate#44,dst2#182,dst#198 in operator !Project [src#43, dst#45];

3 - Supposing the select worked, how can I add the obtained dataframe to the left dataframe?

I am working in Java.

You are using:

r = r.select(left.col("src"), right.col("dst"));

It does not seem that Spark finds the lineage back to the right dataframe. This is not shocking, as the dataframe goes through a lot of optimization.
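As an aside, a common way to sidestep this lineage ambiguity in a self-join is to alias each side and qualify every column reference through the alias. This is only a minimal sketch, assuming inputDf is the original dataframe from the question (col() here is the static function from org.apache.spark.sql.functions):

// Hypothetical alias-based variant: aliases keep the two sides of the
// self-join distinguishable, so no column has to be renamed.
Dataset<Row> l = inputDf.alias("l");
Dataset<Row> rt = inputDf.alias("rt");
Dataset<Row> joined = l.join(rt, col("l.dst").equalTo(col("rt.src")));
joined.select(col("l.src"), col("rt.dst")).show();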

Assuming your desired output is:

+---+---+
|src|dst|
+---+---+
| b1|:b5|
| b1|  f|
|:b4|  e|
|:b4|  t|
+---+---+

You can use one of the following three options:

Using the col() method:

Dataset<Row> resultOption1Df = r.select(left.col("src"), r.col("dst"));
resultOption1Df.show();

Using the static col() function:

Dataset<Row> resultOption2Df = r.select(col("src"), col("dst"));
resultOption2Df.show();

Using the column names:

Dataset<Row> resultOption3Df = r.select("src", "dst");
resultOption3Df.show();
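Regarding the third question, one possible way to append the selected rows back to left is sketched below. It is only a sketch: it assumes you want the (src, dst) pairs as new rows and uses a hypothetical placeholder predicate "derived" so the schema matches left (lit() is the static function from org.apache.spark.sql.functions):

// Sketch only: reshape the result to left's schema (src, predicate, dst2),
// then append it by column name.
Dataset<Row> newRows = resultOption1Df
    .withColumn("predicate", lit("derived"))
    .select(col("src"), col("predicate"), col("dst").as("dst2"));
Dataset<Row> extendedLeft = left.unionByName(newRows);
extendedLeft.show();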

Here is the full source code:

package net.jgp.books.spark.ch12.lab990_others;

import static org.apache.spark.sql.functions.col;

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

/**
 * Self join.
 * 
 * @author jgp
 */
public class SelfJoinAndSelectApp {

  /**
   * main() is your entry point to the application.
   * 
   * @param args
   */
  public static void main(String[] args) {
    SelfJoinAndSelectApp app = new SelfJoinAndSelectApp();
    app.start();
  }

  /**
   * The processing code.
   */
  private void start() {
    // Creates a session on a local master
    SparkSession spark = SparkSession.builder()
        .appName("Self join")
        .master("local[*]")
        .getOrCreate();

    Dataset<Row> inputDf = createDataframe(spark);
    inputDf.show(false);

    Dataset<Row> left = inputDf.withColumnRenamed("dst", "dst2");
    left.show();

    Dataset<Row> right = inputDf.withColumnRenamed("src", "dst2");
    right.show();

    Dataset<Row> r = left.join(
        right,
        left.col("dst2").equalTo(right.col("dst2")));
    r.show();

    Dataset<Row> resultOption1Df = r.select(left.col("src"), r.col("dst"));
    resultOption1Df.show();

    Dataset<Row> resultOption2Df = r.select(col("src"), col("dst"));
    resultOption2Df.show();

    Dataset<Row> resultOption3Df = r.select("src", "dst");
    resultOption3Df.show();
  }

  private static Dataset<Row> createDataframe(SparkSession spark) {
    StructType schema = DataTypes.createStructType(new StructField[] {
        DataTypes.createStructField(
            "src",
            DataTypes.StringType,
            false),
        DataTypes.createStructField(
            "predicate",
            DataTypes.StringType,
            false),
        DataTypes.createStructField(
            "dst",
            DataTypes.StringType,
            false) });

    List<Row> rows = new ArrayList<>();
    rows.add(RowFactory.create("a", "r1", ":b1"));
    rows.add(RowFactory.create("a", "r2", "k"));
    rows.add(RowFactory.create("b1", "r3", ":b4"));
    rows.add(RowFactory.create("b1", "r10", "d"));
    rows.add(RowFactory.create(":b4", "r4", "f"));
    rows.add(RowFactory.create(":b4", "r5", ":b5"));
    rows.add(RowFactory.create(":b5", "r9", "t"));
    rows.add(RowFactory.create(":b5", "r10", "e"));

    return spark.createDataFrame(rows, schema);
  }
}
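Running this, all three options should print the same four (src, dst) rows shown in the expected output above.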
