
Filter PySpark dataframe into a list of dataframes

I have a PySpark dataframe that I want to split into a list of dataframes based on the unique values in some of its columns.

from pyspark.sql import SparkSession
import pandas as pd

spark_session = SparkSession.builder.enableHiveSupport().getOrCreate()

columns = ["language", "users_count", "apple"]
data = [("Java", 1, 0.0), ("Scala", 4, -4.0), ("Java", 1, 0.0)]

pyspark_df = spark_session.createDataFrame(data).toDF(*columns)

pandas_df = pd.DataFrame(data, columns=columns)

# Operation I want to replicate in PySpark:
column_list = ['language', 'users_count']  # these names and the number of columns can change at runtime
unique_dfs = [df for _, df in pandas_df.groupby(column_list, as_index=False)]

Another way this could be done is to create a column in the PySpark df that holds a unique key (a string of language + users_count), and then filter on those unique values to get the dfs.
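A minimal sketch of that key-column idea, assuming concat_ws with a separator is an acceptable way to build the composite key (the '_key' column name is just for illustration):

from pyspark.sql import functions as F

column_list = ['language', 'users_count']

# Build a composite string key from the grouping columns, e.g. "Java|1"
keyed_df = pyspark_df.withColumn(
    '_key', F.concat_ws('|', *[F.col(c).cast('string') for c in column_list])
)

# Collect the distinct keys (this still needs a shuffle plus a collect)
# and filter once per key
keys = [r['_key'] for r in keyed_df.select('_key').distinct().collect()]
unique_dfs = [keyed_df.filter(F.col('_key') == k).drop('_key') for k in keys]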

If you know exactly which data you need, you should just use filter, because it is efficient in Spark:

from pyspark.sql import functions as F

df = pyspark_df.filter(
    (F.col('language') == 'Java') &
    (F.col('users_count') == 1)
)

If you really need every possible combination of those columns as a separate dataframe, you will have to run distinct (i.e. a shuffle, which you normally want to avoid) plus an inefficient collect:

from pyspark.sql import functions as F

column_list = ['language', 'users_count']
df_dist = pyspark_df.select(column_list).distinct()
unique_dfs = []
for row in df_dist.collect():
    cond = F.lit(True)
    for c in column_list:
        cond &= (F.col(c) == row[c])
    unique_dfs.append(pyspark_df.filter(cond))

Result:

unique_dfs[0].show()
# +--------+-----------+-----+
# |language|users_count|apple|
# +--------+-----------+-----+
# |    Java|          1|  0.0|
# |    Java|          1|  0.0|
# +--------+-----------+-----+

unique_dfs[1].show()
# +--------+-----------+-----+
# |language|users_count|apple|
# +--------+-----------+-----+
# |   Scala|          4| -4.0|
# +--------+-----------+-----+

unique_dfs[0].explain()
# == Physical Plan ==
# *(1) Project [_1#158 AS language#164, _2#159L AS users_count#165L, _3#160 AS apple#166]
# +- *(1) Filter ((isnotnull(_1#158) AND isnotnull(_2#159L)) AND ((_1#158 = Java) AND (_2#159L = 1)))
#    +- *(1) Scan ExistingRDD[_1#158,_2#159L,_3#160]

Note: here Java ends up at index 0 and Scala at index 1, but in reality it could just as well be the other way around. There is no determinism, because you don't know which executor will send its data to the driver first when collect requests it. So what you are asking for may not be what you actually need.
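If a stable order of the resulting list matters, one option (just a sketch, assuming the key values compare sensibly once stringified) is to sort the collected distinct rows on the driver before building the filters:

# Sort the collected rows by the key columns so unique_dfs has a deterministic order
rows = sorted(
    df_dist.collect(),
    key=lambda r: tuple(str(r[c]) for c in column_list)
)

unique_dfs = []
for row in rows:
    cond = F.lit(True)
    for c in column_list:
        cond &= (F.col(c) == row[c])
    unique_dfs.append(pyspark_df.filter(cond))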

Create a rank with a window function ordered by the columns whose values you want to group on. Then iterate from 1 up to the maximum rank, filter the dataframe on that rank, and store each resulting dataframe in a list. I hope this helps!

from pyspark.sql import functions as F, Window as W

column_list = ['language', 'users_count']
unique_dfs = []
w = W.orderBy(*column_list)
df = pyspark_df.withColumn('_rank', F.dense_rank().over(w))
for i in range(1, df.agg(F.max('_rank')).head()[0] + 1):
    unique_dfs.append(df.filter(F.col('_rank') == i))

Result:

unique_dfs[0].show()
# +--------+-----------+-----+-----+
# |language|users_count|apple|_rank|
# +--------+-----------+-----+-----+
# |    Java|          1|  0.0|    1|
# |    Java|          1|  0.0|    1|
# +--------+-----------+-----+-----+

unique_dfs[1].show()
# +--------+-----------+-----+-----+
# |language|users_count|apple|_rank|
# +--------+-----------+-----+-----+
# |   Scala|          4| -4.0|    2|
# +--------+-----------+-----+-----+

unique_dfs[0].explain()
# == Physical Plan ==
# AdaptiveSparkPlan isFinalPlan=false
# +- Filter (_rank#579 = 1)
#    +- Window [dense_rank(language#571, users_count#572L) windowspecdefinition(language#571 ASC NULLS FIRST, users_count#572L ASC NULLS FIRST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS _rank#579], [language#571 ASC NULLS FIRST, users_count#572L ASC NULLS FIRST]
#       +- Sort [language#571 ASC NULLS FIRST, users_count#572L ASC NULLS FIRST], false, 0
#          +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#975]
#             +- Project [_1#565 AS language#571, _2#566L AS users_count#572L, _3#567 AS apple#573]
#                +- Scan ExistingRDD[_1#565,_2#566L,_3#567]

I ended up solving it like this:

from pyspark.sql import functions

groups = pyspark_df.select(['language', 'users_count']).distinct().collect()

unique_campaigns_dfs = [
    pyspark_df.where(
        (functions.col('language') == x[0]) & (functions.col('users_count') == x[1])
    )
    for x in groups
]
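Since the question notes that the column names and their number can change at runtime, a hedged generalisation of this solution (the same distinct + collect approach, just building the condition with functools.reduce over an arbitrary column_list) could look like this:

from functools import reduce
from pyspark.sql import functions

column_list = ['language', 'users_count']  # any set of columns known at runtime

groups = pyspark_df.select(column_list).distinct().collect()

unique_campaigns_dfs = [
    pyspark_df.where(
        reduce(
            lambda acc, c: acc & (functions.col(c) == row[c]),
            column_list,
            functions.lit(True),
        )
    )
    for row in groups
]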
