
Map values in a dataframe from a dictionary using pyspark

I want to know how to map values in a specific column of a dataframe.

I have a dataframe which looks like this:

df = sc.parallelize([('india','japan'),('usa','uruguay')]).toDF(['col1','col2'])

+-----+-------+
| col1|   col2|
+-----+-------+
|india|  japan|
|  usa|uruguay|
+-----+-------+

I have a dictionary from which I want to map the values:

dicts = sc.parallelize([('india','ind'), ('usa','us'),('japan','jpn'),('uruguay','urg')])

My desired output is:

+-----+-------+--------+--------+
| col1|   col2|col1_map|col2_map|
+-----+-------+--------+--------+
|india|  japan|     ind|     jpn|
|  usa|uruguay|      us|     urg|
+-----+-------+--------+--------+

I have tried using the lookup function but it doesn't work. It throws error SPARK-5063 (an RDD cannot be referenced from inside a UDF or another transformation). Following is my approach, which failed:

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def map_val(x):
    return dicts.lookup(x)[0]

myfun = udf(lambda x: map_val(x), StringType())

df = df.withColumn('col1_map', myfun('col1')) # doesn't work, throws SPARK-5063
df = df.withColumn('col2_map', myfun('col2')) # doesn't work, throws SPARK-5063

I think the easier way is just to use a simple dictionary together with df.withColumn:

from itertools import chain
from pyspark.sql.functions import create_map, lit

simple_dict = {'india':'ind', 'usa':'us', 'japan':'jpn', 'uruguay':'urg'}

mapping_expr = create_map([lit(x) for x in chain(*simple_dict.items())])

df = df.withColumn('col1_map', mapping_expr[df['col1']])\
       .withColumn('col2_map', mapping_expr[df['col2']])

df.show(truncate=False)
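
which should give you the desired output:

+-----+-------+--------+--------+
|col1 |col2   |col1_map|col2_map|
+-----+-------+--------+--------+
|india|japan  |ind     |jpn     |
|usa  |uruguay|us      |urg     |
+-----+-------+--------+--------+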

The udf way

I would suggest you change the list of tuples to a dict and broadcast it for use inside the udf:

dicts = sc.broadcast(dict([('india','ind'), ('usa','us'),('japan','jpn'),('uruguay','urg')]))

from pyspark.sql import functions as f
from pyspark.sql import types as t
def newCols(x):
    return dicts.value[x]

callnewColsUdf = f.udf(newCols, t.StringType())

df.withColumn('col1_map', callnewColsUdf(f.col('col1')))\
    .withColumn('col2_map', callnewColsUdf(f.col('col2')))\
    .show(truncate=False)

which should give you:

+-----+-------+--------+--------+
|col1 |col2   |col1_map|col2_map|
+-----+-------+--------+--------+
|india|japan  |ind     |jpn     |
|usa  |uruguay|us      |urg     |
+-----+-------+--------+--------+
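
One caveat: dicts.value[x] raises a KeyError if a column contains a value that is missing from the broadcast dict. If that can happen in your data, a small variant (a sketch, with the hypothetical name newColsSafe, reusing dicts, f and t from above) uses dict.get so unmapped values become null instead:

# dict.get returns None for unmapped values, which Spark turns into a
# null in the result column rather than failing the whole job
def newColsSafe(x):
    return dicts.value.get(x)

callnewColsSafeUdf = f.udf(newColsSafe, t.StringType())

df.withColumn('col1_map', callnewColsSafeUdf(f.col('col1')))\
    .withColumn('col2_map', callnewColsSafeUdf(f.col('col2')))\
    .show(truncate=False)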

The join way (slower than the udf way)

All you have to do is change the dicts rdd to a dataframe too, and use two joins with aliasing, as below:

df = sc.parallelize([('india','japan'),('usa','uruguay')]).toDF(['col1','col2'])

dicts = sc.parallelize([('india','ind'), ('usa','us'),('japan','jpn'),('uruguay','urg')]).toDF(['key', 'value'])

from pyspark.sql import functions as f
df.join(dicts, df['col1'] == dicts['key'], 'inner')\
    .select(f.col('col1'), f.col('col2'), f.col('value').alias('col1_map'))\
    .join(dicts, df['col2'] == dicts['key'], 'inner') \
    .select(f.col('col1'), f.col('col2'), f.col('col1_map'), f.col('value').alias('col2_map'))\
    .show(truncate=False)

which should give you the same result.
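
Since the dicts dataframe is tiny, hinting a broadcast hash join should make the two joins cheaper by avoiding a shuffle of df. A sketch using pyspark.sql.functions.broadcast, with the same df and dicts as above:

from pyspark.sql import functions as f

# broadcasting the small lookup table ships it to every executor,
# so df itself never needs to be shuffled for the joins
df.join(f.broadcast(dicts), df['col1'] == dicts['key'], 'inner')\
    .select(f.col('col1'), f.col('col2'), f.col('value').alias('col1_map'))\
    .join(f.broadcast(dicts), df['col2'] == dicts['key'], 'inner')\
    .select(f.col('col1'), f.col('col2'), f.col('col1_map'), f.col('value').alias('col2_map'))\
    .show(truncate=False)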

Similar to Ali AzG, but pulling it all out into a handy little method, if anyone finds it useful:

from itertools import chain
from pyspark.sql import DataFrame
from pyspark.sql import functions as F
from typing import Dict

def map_column_values(df: DataFrame, map_dict: Dict, column: str, new_column: str = "") -> DataFrame:
    """Handy method for mapping column values from one value to another

    Args:
        df (DataFrame): Dataframe to operate on 
        map_dict (Dict): Dictionary containing the values to map from and to
        column (str): The column containing the values to be mapped
        new_column (str, optional): The name of the column to store the mapped values in. 
                                    If not specified the values will be stored in the original column

    Returns:
        DataFrame
    """
    spark_map = F.create_map([F.lit(x) for x in chain(*map_dict.items())])
    return df.withColumn(new_column or column, spark_map[df[column]])

This can be used as follows:

from pyspark.sql import Row, SparkSession
spark = SparkSession.builder.master("local[3]").getOrCreate()
df = spark.createDataFrame([Row(A=0), Row(A=1)])
df = map_column_values(df, map_dict={0:"foo", 1:"bar"}, column="A", new_column="B")
df.show()
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
#+---+---+
#|  A|  B|
#+---+---+
#|  0|foo|
#|  1|bar|
#+---+---+
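
Note that create_map yields null for any value not present in map_dict. If you would rather keep the original value for unmapped keys, a variant can fall back with F.coalesce (a sketch; map_column_values_keep is a hypothetical name, not part of the original method):

from itertools import chain
from pyspark.sql import DataFrame
from pyspark.sql import functions as F
from typing import Dict

def map_column_values_keep(df: DataFrame, map_dict: Dict, column: str, new_column: str = "") -> DataFrame:
    """Like map_column_values, but keeps the original value for unmapped keys."""
    spark_map = F.create_map([F.lit(x) for x in chain(*map_dict.items())])
    # coalesce falls back to the original column wherever the map lookup is null
    return df.withColumn(new_column or column, F.coalesce(spark_map[df[column]], df[column]))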

