
SQL Query and DataFrame using Spark / Java

I am a beginner with Spark, and I am stuck on how to issue an SQL request against DataFrames.

I have the following two DataFrames.

df_zones
+-----------------+-----------------+----------------------+---------------------+
|id               |geomType         |geom                  |rayon                |
+-----------------+-----------------+----------------------+---------------------+
|30               |Polygon          |[00 00 00 00 01 0...] |200                  |
|32               |Point            |[00 00 00 00 01 0.. ] |320179               |
+-----------------+-----------------+----------------------+---------------------+
df_tracking
+-----------------+-----------------+----------------------+
|idZones          |Longitude        |Latitude              |
+-----------------+-----------------+----------------------+
|[30,50,100,]     | -7.6198783      |33.5942549            |
|[20,140,39,]     |-7.6198783       |33.5942549            |
+-----------------+-----------------+----------------------+

I want to execute the following request:

"SELECT zones.* FROM zones WHERE zones.id IN ("
                            + idZones
                            + ") AND ((zones.geomType='Polygon' AND (ST_WITHIN(ST_GeomFromText(CONCAT('POINT(',"
                            + longitude
                            + ",' ',"
                            + latitude
                            + ",')'),4326),zones.geom))) OR (   (zones.geomType='LineString' OR zones.geomType='Point') AND  ST_Intersects(ST_buffer(zones.geom,(zones.rayon/100000)),ST_GeomFromText(CONCAT('POINT(',"
                            + longitude
                            + ",' ',"
                            + latitude
                            + ",')'),4326)))) "

I am really stuck. Should I join the two DataFrames, or do something else? I tried joining them on id and idZone as follows:

     df_tracking.select(explode(col("idZones").as ("idZones"))).join(df_zones,col("idZones").equalTo(df_zones.col("id")));

But it seems to me that a join is not the right choice.

I need your help.

Thanks.

You can convert df_tracking.idZones (e.g. [20, 140, 39]) to an Array() type and use array_contains(), which makes things simpler when joining against a list of elements.

val joinDF = df_zones.join(df_tracking, array_contains($"id_Zones",$"id"))

Sample code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object JoinExample extends App {

  val spark = SparkSession.builder()
    .master("local[8]")
    .appName("Example")
    .getOrCreate()

  import spark.implicits._

  val df_zones = Seq(
      (30, "Polygon", "[00 00 00 00 01]", 200),
      (32, "Point", "[00 00 00 00 01]", 320179),
      (39, "Point", "[00 00 00 00 01]", 320179)
    ).toDF("id", "geomType", "geom", "rayon")

  val df_tracking = Seq(
      (Array(30, 50, 100), "-7.6198783", "33.5942549"),
      (Array(20, 140, 39), "-7.6198783", "33.5942549")
    ).toDF("id_Zones", "Longitude", "Latitude")

  df_zones.show()
  df_tracking.show()

  // Keep each zone row whose id appears in the tracking row's id_Zones array
  val joinDF = df_zones.join(df_tracking, array_contains($"id_Zones", $"id"))
  joinDF.show()
}

Output:

+---+--------+----------------+------+
| id|geomType|            geom| rayon|
+---+--------+----------------+------+
| 30| Polygon|[00 00 00 00 01]|   200|
| 32|   Point|[00 00 00 00 01]|320179|
| 39|   Point|[00 00 00 00 01]|320179|
+---+--------+----------------+------+

+-------------+----------+----------+
|     id_Zones| Longitude|  Latitude|
+-------------+----------+----------+
|[30, 50, 100]|-7.6198783|33.5942549|
|[20, 140, 39]|-7.6198783|33.5942549|
+-------------+----------+----------+

+---+--------+----------------+------+-------------+----------+----------+
| id|geomType|            geom| rayon|     id_Zones| Longitude|  Latitude|
+---+--------+----------------+------+-------------+----------+----------+
| 30| Polygon|[00 00 00 00 01]|   200|[30, 50, 100]|-7.6198783|33.5942549|
| 39|   Point|[00 00 00 00 01]|320179|[20, 140, 39]|-7.6198783|33.5942549|
+---+--------+----------------+------+-------------+----------+----------+

Edit 1: Continuing from the above, the query can best be translated by defining Spark UDF's. The code snippet below gives you a rough idea.

  // UDF creation: both conditions below are placeholder stubs that always
  // return 1; replace their bodies with the real geometry checks.

  // Logic of ST_WITHIN(ST_GeomFromText(CONCAT('POINT(', longitude, ' ', latitude, ')'),
  // 4326), zones.geom)
  val condition1 = (x: Int) => { 1 }

  // Logic of ST_Intersects(ST_buffer(zones.geom, (zones.rayon / 100000)),
  // ST_GeomFromText(CONCAT('POINT(', longitude, ' ', latitude, ')'), 4326))
  val condition2 = (y: Int) => { 1 }

  val condition1UDF = udf(condition1)
  val condition2UDF = udf(condition2)

  val joinDF = df_zones.join(df_tracking, array_contains($"id_Zones", $"id"))

  val finalDF = joinDF
    .withColumn("Condition1DerivedValue", condition1UDF(lit("000")))
    .withColumn("Condition2DerivedValue", condition2UDF(lit("000")))
    .filter(
      (col("geomType") === "Polygon" and col("Condition1DerivedValue") === 1)
        or ((col("geomType") === "LineString" or col("geomType") === "Point")
          and $"Condition2DerivedValue" === 1)
    )
    .select("id", "geomType", "geom", "rayon")

  finalDF.show()

Output:

+---+--------+----------------+------+
| id|geomType|            geom| rayon|
+---+--------+----------------+------+
| 30| Polygon|[00 00 00 00 01]|   200|
| 39|   Point|[00 00 00 00 01]|320179|
+---+--------+----------------+------+
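For completeness, the explode-based join the question attempted also works once the alias is applied to the result of explode rather than to its input column. A minimal sketch, reusing the df_zones and df_tracking DataFrames defined above (explodedJoinDF is a name introduced here for illustration):

```scala
  // Explode the id_Zones array into one row per zone id, then join on
  // equality. The alias must wrap the explode result, not its argument.
  val exploded = df_tracking.withColumn("idZone", explode($"id_Zones"))

  val explodedJoinDF = exploded
    .join(df_zones, $"idZone" === df_zones("id"))
    .drop("idZone")

  explodedJoinDF.show()
```

This yields the same matched rows as the array_contains join; array_contains avoids materializing the intermediate exploded rows, which is why it is preferred above.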
