
Pyspark: how to select unique ID data from a pyspark.sql.dataframe.DataFrame?

I am new to Spark and PySpark. I have a pyspark.sql.dataframe.DataFrame:

df.show()
+--------------------+----+----+---------+----------+---------+----------+---------+
|                  ID|Code|bool|      lat|       lon|       v1|        v2|       v3|
+--------------------+----+----+---------+----------+---------+----------+---------+
|5ac52674ffff34c98...|IDFA|   1|42.377167| -71.06994|17.422535|1525319638|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37747|-71.069824|17.683573|1525319639|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37757| -71.06942|22.287935|1525319640|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37761| -71.06943|19.110023|1525319641|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.377243| -71.06952|18.904774|1525319642|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378254| -71.06948|20.772903|1525319643|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37801| -71.06983|18.084948|1525319644|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378693| -71.07033| 15.64326|1525319645|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378723|-71.070335|21.093477|1525319646|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37868| -71.07034|21.851894|1525319647|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378716| -71.07029|20.583202|1525319648|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37872| -71.07067|19.738768|1525319649|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.379112| -71.07097|20.480911|1525319650|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37952|  -71.0708|20.526752|1525319651| 44.93808|
|5ac52674ffff34c98...|IDFA|   1| 42.37902| -71.07056|20.534052|1525319652| 44.93808|
|5ac52674ffff34c98...|IDFA|   1|42.380203|  -71.0709|19.921381|1525319653| 44.93808|
|5ac52674ffff34c98...|IDFA|   1| 42.37968|-71.071144| 20.12599|1525319654| 44.93808|
|5ac52674ffff34c98...|IDFA|   1|42.379696| -71.07114|18.760069|1525319655| 36.77853|
|5ac52674ffff34c98...|IDFA|   1| 42.38011| -71.07123|19.155525|1525319656| 36.77853|
|5ac52674ffff34c98...|IDFA|   1| 42.38022|  -71.0712|16.978994|1525319657| 36.77853|
+--------------------+----+----+---------+----------+---------+----------+---------+
only showing top 20 rows

I want to extract the data for each unique user in a loop and convert it to a pandas DataFrame.

For the first user, this is what I do:

from pyspark.sql import functions as fs

id0 = df.first().ID
tmpDF = df.filter(fs.col('ID') == id0)

It works, but converting it to a pandas DataFrame takes a very long time:

tmpDF = tmpDF.toPandas()

You can use toPandas() to convert a Spark DataFrame to pandas:

unique_df = df.select('ID').distinct()

unique_pandas_df = unique_df.toPandas()

Here is what you are looking for: df.select("ID").distinct().rdd.flatMap(lambda x: x).collect() gives you a list of unique IDs, which you can use to filter the DataFrame, and toPandas() then converts each filtered Spark DataFrame to a pandas DataFrame.

for i in df.select("ID").distinct().rdd.flatMap(lambda x: x).collect():
  tmp_df = df.filter(df.ID == i)
  user_pd_df = tmp_df.toPandas()
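Note that this loop runs one Spark job (filter + toPandas) per unique ID, which can be slow when there are many users. An alternative sketch, assuming the full result fits in driver memory, is to call toPandas() once and split locally with pandas groupby. The ID and v1 columns mirror the question's schema; the sample values below are made-up placeholders standing in for df.toPandas():

```python
import pandas as pd

# Stand-in for full_pd_df = df.toPandas(), done ONCE instead of per user.
# The values here are hypothetical placeholders, not the question's real data.
full_pd_df = pd.DataFrame({
    "ID": ["u1", "u1", "u2", "u2", "u3"],
    "v1": [17.4, 17.7, 22.3, 19.1, 18.9],
})

# Split the single local DataFrame into one pandas DataFrame per unique ID.
per_user = {uid: grp.reset_index(drop=True)
            for uid, grp in full_pd_df.groupby("ID")}
```

Each value in per_user is a pandas DataFrame for one user, and no further Spark jobs are needed after the single collection.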

Update (since the question has been edited):

toPandas() collects all of the DataFrame's records onto the driver, so it should only be done on a small subset of the data. If you try to convert a huge DataFrame to pandas, it will take a long time.

