
Add a new column in dataframe with user defined values. (pyspark)

Three values of the array A1 come from some function -

A1 = [1,2,3,4]
A1 = [5,6,7,8]
A1 = [1,3,4,1]

The dataframe to which I want to add a new column with the array values -

+---+---+-----+
| x1| x2|   x3|
+---+---+-----+
|  1|  A|  3.0|
|  2|  B|-23.0|
|  3|  C| -4.0|
+---+---+-----+

I tried it like this (assuming 'df' is my dataframe) -

from pyspark.sql.functions import array, lit

for i in range(0, 2):
   df = df.withColumn("x4", array(lit(A1[0]), lit(A1[1]), lit(A1[2])))

But the problem with this code is that it updates the column with the last value of array 'A1', like this:

+---+---+-----+---------+
| x1| x2|   x3|       x4|
+---+---+-----+---------+
|  1|  A|  3.0|[1,3,4,1]|
|  2|  B|-23.0|[1,3,4,1]|
|  3|  C| -4.0|[1,3,4,1]|
+---+---+-----+---------+

But what I want is this -

+---+---+-----+---------+
| x1| x2|   x3|       x4|
+---+---+-----+---------+
|  1|  A|  3.0|[1,2,3,4]|
|  2|  B|-23.0|[5,6,7,8]|
|  3|  C| -4.0|[1,3,4,1]|
+---+---+-----+---------+

What do I need to add to my code?

How about this:

from pyspark.sql import SparkSession
import pandas as pd

spark = SparkSession.builder.appName('test').getOrCreate()
df=spark.createDataFrame(data=[(1,'A',3),(2,'B',-23),(3,'C',-4)],schema=['x1','x2','x3'])

+---+---+---+
| x1| x2| x3|
+---+---+---+
|  1|  A|  3|
|  2|  B|-23|
|  3|  C| -4|
+---+---+---+

mydict = {1:[1,2,3,4] , 2:[5,6,7,8], 3:[1,3,4,1]}

def addExtraColumn(df, mydict):
    # Collect every row to the driver, keep its existing values,
    # and append the list from mydict keyed by the row's position (1, 2, 3, ...).
    names = df.schema.names
    count = 1
    mylst = []
    for row in df.rdd.collect():
        RW = row.asDict()
        rowLst = []
        for name in names:
            rowLst.append(RW[name])
        rowLst.append(mydict[count])
        count = count + 1
        mylst.append(rowLst)
    return mylst

newlst = addExtraColumn(df,mydict)

df1 = spark.sparkContext.parallelize(newlst).toDF(['x1','x2','x3','x4'])

df1.show()

+---+---+---+------------+
| x1| x2| x3|          x4|
+---+---+---+------------+
|  1|  A|  3|[1, 2, 3, 4]|
|  2|  B|-23|[5, 6, 7, 8]|
|  3|  C| -4|[1, 3, 4, 1]|
+---+---+---+------------+
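
As a side note, a more compact sketch of the same collect-and-rebuild idea (assuming the collected rows come back in the x1 order shown above, and reusing mydict from before):

rows = df.collect()
# Pair each collected row with its list from mydict (keys 1, 2, 3, ...)
newlst = [list(r) + [mydict[i + 1]] for i, r in enumerate(rows)]

df1 = spark.createDataFrame(newlst, ['x1', 'x2', 'x3', 'x4'])
df1.show()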

Looking at your code, I think the A1 values depend on at least one of the columns x1, x2 or x3.

So you can't define the new column with A1 directly; instead, use a function that takes the columns needed to define A1 as arguments.

This is just a guess, but maybe you only need a dictionary, A = {1:[1,2,3,4] , 2:[5,6,7,8], 3:[1,3,4,1]}, and then use it in a UDF with withColumn.
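
For illustration, a minimal sketch of that dictionary-plus-UDF idea, assuming the arrays can be looked up by the value of the existing x1 column (the dictionary A and the name lookup_x4 are only placeholders):

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, IntegerType

spark = SparkSession.builder.appName('test').getOrCreate()
df = spark.createDataFrame([(1, 'A', 3.0), (2, 'B', -23.0), (3, 'C', -4.0)], ['x1', 'x2', 'x3'])

# Assumed mapping: each x1 value points to the array that should go into x4
A = {1: [1, 2, 3, 4], 2: [5, 6, 7, 8], 3: [1, 3, 4, 1]}

# The UDF looks up the array for the given key and returns it as array<int>
lookup_x4 = udf(lambda key: A.get(key), ArrayType(IntegerType()))

df.withColumn('x4', lookup_x4(col('x1'))).show()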

So, after a lot of fuss, I found that this can't be done with pyspark's withColumn function alone, because it creates a column where every row has the same value. And I can't use a udf, because my new column does not depend on any existing column of the dataframe.

So I did something like this - assuming you get the different values of the array A1 inside a for loop (which was the case for me) -

f_array = []
for i in range(0, 10):
   # A1 takes a new value on each iteration; pair it with the loop index
   f_array.append((i, A1))

# Creating a new df for my array.

df1 = spark.createDataFrame(data = f_array, schema = ["id", "x4"])
df1.show()

+---+---------+
| id|       x4|
+---+---------+
|  0|[1,2,3,4]|
|  1|[5,6,7,8]|
|  2|[1,3,4,1]|
+---+---------+
# Suppose no column matches between our df and df1; create one extra column named `id`, as present in `df1`, to use for joining the two dataframes.

from pyspark.sql.functions import monotonically_increasing_id

df = df.withColumn('id', monotonically_increasing_id())
df.show()

+---+---+---+-----+
| id| x1| x2|   x3|
+---+---+---+-----+
|  0|  1|  A|  3.0|
|  1|  2|  B|-23.0|
|  2|  3|  C| -4.0|
+---+---+---+-----+

# Now join both the dataframes using common column `id`.

df = df.join(df1, df.id == df1.id).drop(df.id).drop(df1.id)
df.show()

+---+---+---+------------+
| x1| x2| x3|          x4|
+---+---+---+------------+
|  1|  A|  3|[1, 2, 3, 4]|
|  2|  B|-23|[5, 6, 7, 8]|
|  3|  C| -4|[1, 3, 4, 1]|
+---+---+---+------------+
