Merge two Spark dataframes using array values
I have two Spark dataframes that look as follows:
> cities_df
+----------+---------------------------+
| city_id| cities|
+----------+---------------------------+
| 22 |[Milan, Turin, Rome] |
+----------+---------------------------+
| 15 |[Naples, Florence, Genoa] |
+----------+---------------------------+
| 43 |[Houston, San Jose, Boston]|
+----------+---------------------------+
| 56 |[New York, Dallas, Chicago]|
+----------+---------------------------+
> countries_df
+----------+----------------------------------+
|country_id| countries|
+----------+----------------------------------+
| 680 |{'country': [56, 43], 'add': []} |
+----------+----------------------------------+
| 11 |{'country': [22, 15], 'add': [32]}|
+----------+----------------------------------+
The country values in countries_df are city ids from the cities_df dataframe.
I need to merge these dataframes so that the city ids under country are replaced with their city values from the cities_df dataframe.
Expected output:
+----------+----------------------------------+------------------------------------------------------+
|country_id|countries                         |grouped_cities                                        |
+----------+----------------------------------+------------------------------------------------------+
|680       |{'country': [56, 43], 'add': []} |[New York, Dallas, Chicago, Houston, San Jose, Boston]|
|11        |{'country': [22, 15], 'add': [32]}|[Milan, Turin, Rome, Naples, Florence, Genoa]         |
+----------+----------------------------------+------------------------------------------------------+
The resulting grouped_cities value doesn't have to be an array type; it can be just a string.
How can I get this result using PySpark?
Inputs:
from pyspark.sql import functions as F
cities_df = spark.createDataFrame(
[(22, ['Milan', 'Turin', 'Rome']),
(15, ['Naples', 'Florence', 'Genoa']),
(43, ['Houston', 'San Jose', 'Boston']),
(56, ['New York', 'Dallas', 'Chicago'])],
['city_id', 'cities']
)
countries_df = spark.createDataFrame(
[(680, {'country': [56, 43], 'add': []}),
(11, {'country': [22, 15], 'add': [32]})],
['country_id', 'countries']
)
Script:
# Explode the id array under the 'country' key so each city id gets its own row
df_expl = countries_df.withColumn('city_id', F.explode('countries.country'))
# Attach the city names for each id
df_joined = df_expl.join(cities_df, 'city_id', 'left')
# Re-group per country and flatten the collected arrays of city names
df = df_joined.groupBy('country_id').agg(
    F.first('countries').alias('countries'),
    F.flatten(F.collect_list('cities')).alias('grouped_cities')
)
df.show(truncate=0)
# +----------+----------------------------------+------------------------------------------------------+
# |country_id|countries |grouped_cities |
# +----------+----------------------------------+------------------------------------------------------+
# |11 |{add -> [32], country -> [22, 15]}|[Naples, Florence, Genoa, Milan, Turin, Rome] |
# |680 |{add -> [], country -> [56, 43]} |[Houston, San Jose, Boston, New York, Dallas, Chicago]|
# +----------+----------------------------------+------------------------------------------------------+
Another way of doing it: create a city_id column on countries_df by exploding countries.country inside a select, join to cities_df, then group by country_id and the countries column cast to a string. Code below.
from pyspark.sql.functions import col, collect_set, explode, flatten

new = (cities_df
       .join(countries_df.select('*', explode('countries.country').alias('city_id')),
             how='left', on='city_id')
       .groupby('country_id', col('countries').cast('string').alias('countries'))
       .agg(flatten(collect_set('cities')).alias('cities')))
new.show(truncate=False)
+----------+----------------------------------+------------------------------------------------------+
|country_id|countries |cities |
+----------+----------------------------------+------------------------------------------------------+
|11 |{add -> [32], country -> [22, 15]}|[Milan, Turin, Rome, Naples, Florence, Genoa] |
|680 |{add -> [], country -> [56, 43]} |[New York, Dallas, Chicago, Houston, San Jose, Boston]|
+----------+----------------------------------+------------------------------------------------------+