
How to display row as dictionary from pyspark dataframe?

New to pyspark.

I have 2 datasets, Events and Gadgets, which look like this:

Events

[image: Events dataframe]

Gadgets

[image: Gadgets dataframe]

I can read and join the 2 dataframes like this, displaying only the required columns in my last line:

import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.types import ArrayType, DoubleType, BooleanType
from pyspark.sql.functions import col, array_contains

spark = SparkSession.builder.appName('PySpark Read CSV').getOrCreate()

# Reading csv files
events = spark.read.option("header", True).csv("events.csv")
events.printSchema()


gadgets = spark.read.option("header", True).csv("gadgets.csv")
gadgets.printSchema()

# Join on device id and keep all event columns plus the gadget's User column
enrich = events.join(gadgets, events.deviceId == gadgets.ID).select(events["*"], gadgets["User"])

My assignment requires me to present the data as a dictionary object like this:

Enrichment task:

  • Enrich the event objects with the user data provided by the gadgets.
  • Make sure the enriched events look like this:
{
    sessionId: string
    deviceId: string
    timestamp: timestamp
    type: emun(ADDED_TO_CART | APP_OPENED)
    total_price: 50.00
    user: string
}

I can handle the dtype changes and column renames the assignment requires, but how do I present my result in the dictionary format above?
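(For completeness, a minimal sketch of what those casts and renames might look like; the column names here are assumed, since the screenshots aren't reproduced:)

from pyspark.sql.functions import col

# assumed column names: cast to the required types and lowercase the user column
enrich_typed = (enrich
    .withColumn("timestamp", col("timestamp").cast("timestamp"))
    .withColumn("total_price", col("total_price").cast("double"))
    .withColumnRenamed("User", "user"))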

I'm not sure how to display my result if I use this line:

enrich.rdd.map(lambda row: row.asDict())
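(As an aside, a minimal sketch of one way to inspect that RDD of dicts, assuming the joined data is small enough to collect to the driver:)

# collect the mapped rows to the driver and print each one as a Python dict
for row_dict in enrich.rdd.map(lambda row: row.asDict()).collect():
    print(row_dict)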

Use the create_map() function to build (key, value) pairs from each column name and its value.

create_map expects its input in the form (key1, value1, key2, value2, ...). To build that, use itertools.chain():

df = spark.createDataFrame(
    data=[
        ["sess1", "dev1", "2022-12-19", "emun(ADDED_TO_CART | APP_OPENED)", "50.00", "usr1"],
        ["sess2", "dev2", "2022-12-18", "emun(ADDED_TO_CART | APP_OPENED)", "100.00", "usr2"],
    ],
    schema=["sessionId", "deviceId", "timestamp", "type", "total_price", "user"],
)

import pyspark.sql.functions as F
import itertools

# interleave column-name literals with column values: (key1, value1, key2, value2, ...)
df = df.withColumn("map",
                   F.create_map(
                       list(itertools.chain(
                           *((F.lit(x), F.col(x)) for x in df.columns)
                       ))
                   ))

df.show(truncate=False)

Output:

+---------+--------+----------+--------------------------------+-----------+----+----------------------------------------------------------------------------------------------------------------------------------------------+
|sessionId|deviceId|timestamp |type                            |total_price|user|map                                                                                                                                           |
+---------+--------+----------+--------------------------------+-----------+----+----------------------------------------------------------------------------------------------------------------------------------------------+
|sess1    |dev1    |2022-12-19|emun(ADDED_TO_CART | APP_OPENED)|50.00      |usr1|{sessionId -> sess1, deviceId -> dev1, timestamp -> 2022-12-19, type -> emun(ADDED_TO_CART | APP_OPENED), total_price -> 50.00, user -> usr1} |
|sess2    |dev2    |2022-12-18|emun(ADDED_TO_CART | APP_OPENED)|100.00     |usr2|{sessionId -> sess2, deviceId -> dev2, timestamp -> 2022-12-18, type -> emun(ADDED_TO_CART | APP_OPENED), total_price -> 100.00, user -> usr2}|
+---------+--------+----------+--------------------------------+-----------+----+----------------------------------------------------------------------------------------------------------------------------------------------+

You can also convert the map to a JSON string using:

df = df.withColumn("json", F.to_json("map"))
