
pyspark - read df by row to search in another df

I am new to pyspark and I need help with a lookup between two dataframes.
I have df1 with student data as follows:

+---------+----------+--------------------+
|studentid|   course |  registration_date |
+---------+----------+--------------------+
|      348|         2|     15-11-2021     |
|      567|         1|     05-11-2021     |
|      595|         3|     15-10-2021     |
|      580|         2|     06-11-2021     |
|      448|         4|     15-09-2021     |
+---------+----------+--------------------+

df2 has information about the registration periods as follows:

+--------+------------+------------+
| period | start_date |  end_date  |
+--------+------------+------------+
|       1| 01-09-2021 | 15-09-2021 |
|       2| 16-09-2021 | 30-09-2021 |
|       3| 01-10-2021 | 15-10-2021 |
|       4| 16-10-2021 | 31-10-2021 |
|       5| 01-11-2021 | 15-11-2021 |
|       6| 16-11-2021 | 30-11-2021 |
+--------+------------+------------+

I need to iterate over df1 row by row, take each student's registration date, and use that date to look up the period information in df2 that satisfies df2.start_date <= df1.registration_date <= df2.end_date.
The result would be a new df as follows:

+---------+----------+--------------------+--------+------------+------------+
|studentid|   course |  registration_date | period | start_date |  end_date  |
+---------+----------+--------------------+--------+------------+------------+
|      348|         2|     15-11-2021     |       5| 01-11-2021 | 15-11-2021 |
|      567|         1|     05-11-2021     |       5| 01-11-2021 | 15-11-2021 |
|      595|         3|     15-10-2021     |       3| 01-10-2021 | 15-10-2021 |
|      580|         2|     06-11-2021     |       5| 01-11-2021 | 15-11-2021 |
|      448|         4|     15-09-2021     |       1| 01-09-2021 | 15-09-2021 |
+---------+----------+--------------------+--------+------------+------------+

You do not need to iterate row by row: you can express the range check as a compound boolean expression and pass it as the join condition.

Working example

from pyspark.sql import functions as F

# Build the sample data with string dates and parse them directly with
# to_date(), which takes the source format "dd-MM-yyyy".
df1 = spark.createDataFrame([
    (348, 2, "15-11-2021"),
    (567, 1, "05-11-2021"),
    (595, 3, "15-10-2021"),
    (580, 2, "06-11-2021"),
    (448, 4, "15-09-2021"),
], ("studentid", "course", "registration_date")
).withColumn("registration_date", F.to_date("registration_date", "dd-MM-yyyy"))

df2 = spark.createDataFrame([
    (1, "01-09-2021", "15-09-2021"),
    (2, "16-09-2021", "30-09-2021"),
    (3, "01-10-2021", "15-10-2021"),
    (4, "16-10-2021", "31-10-2021"),
    (5, "01-11-2021", "15-11-2021"),
    (6, "16-11-2021", "30-11-2021"),
], ("period", "start_date", "end_date")
).withColumn("start_date", F.to_date("start_date", "dd-MM-yyyy")
).withColumn("end_date", F.to_date("end_date", "dd-MM-yyyy"))

# Join on the range condition: start_date <= registration_date <= end_date.
df1.join(
    df2,
    (df2["start_date"] <= df1["registration_date"])
    & (df1["registration_date"] <= df2["end_date"]),
).show()

Output

+---------+------+-----------------+------+----------+----------+
|studentid|course|registration_date|period|start_date|  end_date|
+---------+------+-----------------+------+----------+----------+
|      348|     2|       2021-11-15|     5|2021-11-01|2021-11-15|
|      567|     1|       2021-11-05|     5|2021-11-01|2021-11-15|
|      595|     3|       2021-10-15|     3|2021-10-01|2021-10-15|
|      448|     4|       2021-09-15|     1|2021-09-01|2021-09-15|
|      580|     2|       2021-11-06|     5|2021-11-01|2021-11-15|
+---------+------+-----------------+------+----------+----------+


Disclaimer: The technical posts on this site are licensed under CC BY-SA 4.0. If you repost them, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.

 