
Pyspark: how to select unique ID data from a pyspark.sql.dataframe.DataFrame?

I am quite new to Spark and PySpark. I have a pyspark.sql.dataframe.DataFrame that looks like the following:

df.show()
+--------------------+----+----+---------+----------+---------+----------+---------+
|                  ID|Code|bool|      lat|       lon|       v1|        v2|       v3|
+--------------------+----+----+---------+----------+---------+----------+---------+
|5ac52674ffff34c98...|IDFA|   1|42.377167| -71.06994|17.422535|1525319638|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37747|-71.069824|17.683573|1525319639|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37757| -71.06942|22.287935|1525319640|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37761| -71.06943|19.110023|1525319641|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.377243| -71.06952|18.904774|1525319642|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378254| -71.06948|20.772903|1525319643|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37801| -71.06983|18.084948|1525319644|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378693| -71.07033| 15.64326|1525319645|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378723|-71.070335|21.093477|1525319646|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37868| -71.07034|21.851894|1525319647|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378716| -71.07029|20.583202|1525319648|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37872| -71.07067|19.738768|1525319649|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.379112| -71.07097|20.480911|1525319650|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37952|  -71.0708|20.526752|1525319651| 44.93808|
|5ac52674ffff34c98...|IDFA|   1| 42.37902| -71.07056|20.534052|1525319652| 44.93808|
|5ac52674ffff34c98...|IDFA|   1|42.380203|  -71.0709|19.921381|1525319653| 44.93808|
|5ac52674ffff34c98...|IDFA|   1| 42.37968|-71.071144| 20.12599|1525319654| 44.93808|
|5ac52674ffff34c98...|IDFA|   1|42.379696| -71.07114|18.760069|1525319655| 36.77853|
|5ac52674ffff34c98...|IDFA|   1| 42.38011| -71.07123|19.155525|1525319656| 36.77853|
|5ac52674ffff34c98...|IDFA|   1| 42.38022|  -71.0712|16.978994|1525319657| 36.77853|
+--------------------+----+----+---------+----------+---------+----------+---------+
only showing top 20 rows

I would like to extract the information for each unique user in a loop and transform it into a pandas DataFrame.

For the first user, this is what I am trying:

import pyspark.sql.functions as fs

id0 = df.first().ID
tmpDF = df.filter((fs.col('ID') == id0))

That works, but it takes forever to transform it into a pandas DataFrame:

tmpDF = tmpDF.toPandas()

You can convert a Spark DataFrame to pandas by using toPandas():

unique_df = df.select('ID').distinct()

unique_pandas_df = unique_df.toPandas()
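
If you then need the IDs as a plain Python list (for example to drive a loop), a minimal sketch building on the unique_pandas_df above could look like the following; unique_ids and first_user_df are illustrative names:

# Pull the single 'ID' column out of the pandas DataFrame as a Python list.
unique_ids = unique_pandas_df['ID'].tolist()

# Any of these IDs can then be used to filter the original Spark DataFrame.
first_user_df = df.filter(df.ID == unique_ids[0])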

The following is what you are looking for: df.select("ID").distinct().rdd.flatMap(lambda x: x).collect() gives you a list of unique IDs, which you can use to filter your Spark DataFrame, and toPandas() can be used to convert a Spark DataFrame to a pandas DataFrame.

# Iterate over the unique IDs and convert each user's slice to pandas.
for i in df.select("ID").distinct().rdd.flatMap(lambda x: x).collect():
  tmp_df = df.filter(df.ID == i)
  user_pd_df = tmp_df.toPandas()
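
If the full DataFrame is small enough to fit on the driver, a hedged alternative is to call toPandas() once and split per user with pandas groupby, which avoids launching one Spark filter job per ID; the names below are illustrative:

# Collect the whole DataFrame once (only safe for data that fits on the driver).
full_pd_df = df.toPandas()

# Build one pandas DataFrame per user, keyed by ID.
per_user = {user_id: group for user_id, group in full_pd_df.groupby('ID')}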

Update: as the question has been edited

toPandas() results in the collection of all records in the DataFrame to the driver program, so it should only be done on a small subset of the data. If you are trying to convert a huge DataFrame to pandas, it will take a considerable amount of time.
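
If you do need to convert a large DataFrame, enabling Arrow-based conversion can speed up toPandas() considerably. A sketch, assuming Spark 3.x and an active SparkSession named spark (on Spark 2.3/2.4 the equivalent flag is spark.sql.execution.arrow.enabled):

# Enable Arrow-based columnar data transfer before converting to pandas.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

pandas_df = df.toPandas()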
