
Pyspark: how to add a column to a dataframe from another dataframe?

I have two dataframes of 10 rows each.

df1.show()
+-------------------+------------------+--------+-------+
|                lat|               lon|duration|stop_id|
+-------------------+------------------+--------+-------+
|  -6.23748779296875| 106.6937255859375|     247|      0|
|  -6.23748779296875| 106.6937255859375|    2206|      1|
|  -6.23748779296875| 106.6937255859375|     609|      2|
| 0.5733972787857056|101.45503234863281|   16879|      3|
| 0.5733972787857056|101.45503234863281|    4680|      4|
| -6.851855278015137|108.64261627197266|     164|      5|
| -6.851855278015137|108.64261627197266|     220|      6|
| -6.851855278015137|108.64261627197266|    1669|      7|
|-0.9033176600933075|100.41548919677734|   30811|      8|
|-0.9033176600933075|100.41548919677734|   23404|      9|
+-------------------+------------------+--------+-------+

I want to add the bank_and_post column from df2 to df1.

df2 is produced by a function:

import numpy as np
from pyspark.sql.functions import pandas_udf, col, lit
from pyspark.sql.types import DoubleType

def assignPtime(x, mu, std):
  # mu and std arrive as pandas Series (from lit(15)); take the scalar value
  mu = mu.values[0]
  std = std.values[0]
  # estimate a probability density from a normal sample and bin it
  x1 = np.random.normal(mu, std, 100000)
  a1, b1 = np.histogram(x1, density=True)
  val = x / 60  # durations in minutes
  for k, v in enumerate(val):
    prob = 0
    # find the histogram bin containing this value and take its density
    for i, j in enumerate(b1[:-1]):
      v1 = b1[i]
      v2 = b1[i+1]
      if (v >= v1) and (v < v2):
        prob = a1[i]
    x[k] = prob  # in-place write into the input Series
  return x

ff = pandas_udf(assignPtime, returnType=DoubleType())
df2 = df1.select(ff(col("duration"), lit(15), lit(15)).alias("bank_and_post"))
df2.show()
+--------------------+
|       bank_and_post|
+--------------------+
|0.021806558032484918|
|0.014366417828826784|
|0.021806558032484918|
|                 0.0|
|                 0.0|
|0.021806558032484918|
|0.021806558032484918|
|0.014366417828826784|
|                 0.0|
|                 0.0|
+--------------------+

If I try

df2 = df2.withColumn("stop_id", monotonically_increasing_id())

I get the error

ValueError: assignment destination is read-only
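
The error itself most likely comes from the in-place assignment x[k] = prob inside assignPtime: with Arrow-backed pandas_udf the input Series can sit on a read-only buffer, so writing into it fails. A minimal sketch of a workaround, keeping the same binning logic but writing the results into a fresh writable array instead of mutating the input, could look like this:

import numpy as np
import pandas as pd

def assignPtime(x, mu, std):
    mu = mu.values[0]
    std = std.values[0]
    x1 = np.random.normal(mu, std, 100000)
    a1, b1 = np.histogram(x1, density=True)
    val = x.values / 60
    # write into a fresh, writable array instead of mutating the
    # (possibly read-only, Arrow-backed) input Series in place
    out = np.zeros(len(val), dtype="float64")
    for k, v in enumerate(val):
        for i in range(len(b1) - 1):
            if b1[i] <= v < b1[i + 1]:
                out[k] = a1[i]
    return pd.Series(out)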

Add a new index column to both the df1 and df2 dataframes with the row_number() window function, then join the dataframes on that column.

Example:

1. Using row_number function:

df1=spark.createDataFrame([(0,),(1,),(2,),(3,),(4,),(5,),(6,),(7,),(8,),(9,)],["stop_id"])

df2=spark.createDataFrame([("0.021806558032484918",),("0.014366417828826784",),("0.021806558032484918",),("0.0",),("0.0",),("0.021806558032484918",),("0.021806558032484918",),("0.014366417828826784",),("0.0",),("0.0",)],["bank_and_post"])

from pyspark.sql import Window
from pyspark.sql.functions import lit, row_number, monotonically_increasing_id

# order by a constant literal: there is no meaningful sort key,
# this just numbers the rows in an arbitrary but stable order
w=Window.orderBy(lit(1))

df4=df2.withColumn("rn",row_number().over(w)-1)  # 0-based row number
df3=df1.withColumn("rn",row_number().over(w)-1)  # 0-based row number

df3.join(df4,["rn"]).drop("rn").show()

#+-------+--------------------+
#|stop_id|       bank_and_post|
#+-------+--------------------+
#|      0|0.021806558032484918|
#|      1|0.014366417828826784|
#|      2|0.021806558032484918|
#|      3|                 0.0|
#|      4|                 0.0|
#|      5|0.021806558032484918|
#|      6|0.021806558032484918|
#|      7|0.014366417828826784|
#|      8|                 0.0|
#|      9|                 0.0|
#+-------+--------------------+

2. Using monotonically_increasing_id() function:

df1.withColumn("mid",monotonically_increasing_id()).\
join(df2.withColumn("mid",monotonically_increasing_id()),["mid"]).\
drop("mid").\
orderBy("stop_id").\
show()
#+-------+--------------------+
#|stop_id|       bank_and_post|
#+-------+--------------------+
#|      0|0.021806558032484918|
#|      1|0.014366417828826784|
#|      2|0.021806558032484918|
#|      3|                 0.0|
#|      4|                 0.0|
#|      5|0.021806558032484918|
#|      6|0.021806558032484918|
#|      7|0.014366417828826784|
#|      8|                 0.0|
#|      9|                 0.0|
#+-------+--------------------+

3. Using row_number() on monotonically_increasing_id() function:

The ids generated by monotonically_increasing_id() are not guaranteed to be consecutive, nor to line up between two dataframes, so ranking them with row_number() turns them into a safe 0-based join key:

w=Window.orderBy("mid")
df3=df1.withColumn("mid",monotonically_increasing_id()).withColumn("rn",row_number().over(w) - 1)
df4=df2.withColumn("mid",monotonically_increasing_id()).withColumn("rn",row_number().over(w) - 1)
df3.join(df4,["rn"]).drop("rn","mid").show()

#+-------+--------------------+
#|stop_id|       bank_and_post|
#+-------+--------------------+
#|      0|0.021806558032484918|
#|      1|0.014366417828826784|
#|      2|0.021806558032484918|
#|      3|                 0.0|
#|      4|                 0.0|
#|      5|0.021806558032484918|
#|      6|0.021806558032484918|
#|      7|0.014366417828826784|
#|      8|                 0.0|
#|      9|                 0.0|
#+-------+--------------------+

4. Using zipWithIndex:

# zipWithIndex pairs every Row with a consecutive index (_2); "_1.*" expands the Row struct back into columns
df3=df1.rdd.zipWithIndex().toDF().select("_1.*","_2")
df4=df2.rdd.zipWithIndex().toDF().select("_1.*","_2")
df3.join(df4,["_2"]).drop("_2").orderBy("stop_id").show()
#+-------+--------------------+
#|stop_id|       bank_and_post|
#+-------+--------------------+
#|      0|0.021806558032484918|
#|      1|0.014366417828826784|
#|      2|0.021806558032484918|
#|      3|                 0.0|
#|      4|                 0.0|
#|      5|0.021806558032484918|
#|      6|0.021806558032484918|
#|      7|0.014366417828826784|
#|      8|                 0.0|
#|      9|                 0.0|
#+-------+--------------------+
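
Since df2 in the question is derived from df1 itself (via a select over the same rows), the join can also be skipped entirely once the read-only issue in the UDF is fixed: the column can be attached directly with withColumn. A sketch reusing ff, col and lit from the question (df_result is just an illustrative name):

# attach the UDF output straight onto df1, no index column or join needed
df_result = df1.withColumn("bank_and_post", ff(col("duration"), lit(15), lit(15)))
df_result.show()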
