PySpark; Split a column of lists into multiple columns

This question is similar to one already asked for Pandas here. I am using Google Cloud Dataproc clusters to execute a function and hence can't convert the data into pandas.

I would like to convert the following:

+----+----------------------------------+-----+---------+------+--------------------+-------------+
| key|                             value|topic|partition|offset|           timestamp|timestampType|
+----+----------------------------------+-----+---------+------+--------------------+-------------+
|null|["sepal_length","sepal_width",...]| iris|        0|   289|2021-04-11 22:32:...|            0|
|null|["5.0","3.5","1.3","0.3","setosa"]| iris|        0|   290|2021-04-11 22:32:...|            0|
|null|["4.5","2.3","1.3","0.3","setosa"]| iris|        0|   291|2021-04-11 22:32:...|            0|
|null|["4.4","3.2","1.3","0.2","setosa"]| iris|        0|   292|2021-04-11 22:32:...|            0|
|null|["5.0","3.5","1.6","0.6","setosa"]| iris|        0|   293|2021-04-11 22:32:...|            0|
|null|["5.1","3.8","1.9","0.4","setosa"]| iris|        0|   294|2021-04-11 22:32:...|            0|
|null|["4.8","3.0","1.4","0.3","setosa"]| iris|        0|   295|2021-04-11 22:32:...|            0|
+----+----------------------------------+-----+---------+------+--------------------+-------------+

Into something like this:

+--------------+-------------+--------------+-------------+-------+
| sepal_length | sepal_width | petal_length | petal_width | class |
+--------------+-------------+--------------+-------------+-------+
| 5.0          | 3.5         | 1.3          | 0.3         | setosa| 
| 4.5          | 2.3         | 1.3          | 0.3         | setosa| 
| 4.4          | 3.2         | 1.3          | 0.2         | setosa| 
| 5.0          | 3.5         | 1.6          | 0.6         | setosa| 
| 5.1          | 3.8         | 1.9          | 0.4         | setosa| 
| 4.8          | 3.0         | 1.4          | 0.3         | setosa| 
+--------------+-------------+--------------+-------------+-------+

How do I go about doing this? Any help would be greatly appreciated!

I went the long way because I'm relatively new to PySpark. Happy to learn if there is a shorter way.

  1. Recreated your dataframe in pandas

    import pandas as pd
    df = pd.DataFrame({"value":['["sepal_length","sepal_width","petal_length","petal_width","class"]','["5.0","3.5","1.3","0.3","setosa"]','["4.5","2.3","1.3","0.3","setosa"]','["4.4","3.2","1.3","0.2","setosa"]']})

  2. Converted the pandas dataframe to a Spark dataframe (sdf)

    sdf = spark.createDataFrame(df)

  3. Stripped the square brackets and double quotes

    from pyspark.sql.functions import regexp_replace, col
    sdf = sdf.withColumn('value', regexp_replace(col('value'), '[\\[\\"\\]]', ""))
    sdf.show(truncate=False)

  4. Split the dataframe on ","

    from pyspark.sql import functions as f
    df_split = sdf.select(f.split(sdf.value, ",")).rdd.flatMap(lambda x: x).toDF(schema=["sepal_length","sepal_width","petal_length","petal_width","class"])

  5. Filtered out the non-numeric header row

    df_split = df_split.filter(df_split.sepal_length != "sepal_length")
    df_split.show()


+------------+-----------+------------+-----------+------+
|sepal_length|sepal_width|petal_length|petal_width| class|
+------------+-----------+------------+-----------+------+
|         5.0|        3.5|         1.3|        0.3|setosa|
|         4.5|        2.3|         1.3|        0.3|setosa|
|         4.4|        3.2|         1.3|        0.2|setosa|
+------------+-----------+------------+-----------+------+
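A possibly shorter variant (an untested sketch, assuming the cleaned sdf from step 3 above and the column names used in step 4) is to pick each split element out by position with getItem instead of going through the RDD:

    from pyspark.sql import functions as f

    cols = ["sepal_length", "sepal_width", "petal_length", "petal_width", "class"]
    # split the cleaned string once, then alias each element to its column name
    split_col = f.split(sdf["value"], ",")
    df_split = sdf.select(*[split_col.getItem(i).alias(c) for i, c in enumerate(cols)])
    # drop the header row that carries column names instead of values
    df_split = df_split.filter(df_split.sepal_length != "sepal_length")
    df_split.show()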

After a lot of searching, I finally wrote code that solves it in a "Dataproc" manner. The code is as follows:

from pyspark.sql import SparkSession, Row
from pyspark.sql.functions import split, explode, col, regexp_replace, udf
from pyspark.sql import functions as f

spark = SparkSession \
        .builder \
        .appName("appName") \
        .getOrCreate()

spark.sparkContext.setLogLevel("WARN")

df = spark \
     .readStream \
     .format("kafka") \
     .option("kafka.bootstrap.servers", "ip:port") \
     .option("subscribe", "topic-name") \
     .load()

data = df.select([c for c in df.columns if c in ["value", "offset"]])

def convertType(val):
    # decode the raw Kafka bytes and split the list string on commas
    arr = val.decode("utf-8").split(",")
    print(arr[0], arr[1], arr[2], arr[3])  # debug output
    print("="*50)
    # strip the surrounding characters from the four measurements and cast them to float
    arr[0], arr[1], arr[2], arr[3] = float(arr[0][2:-1]), float(arr[1][2:-1]), float(arr[2][2:-1]), float(arr[3][2:-1])
    # drop the trailing bracket from the class label
    arr[4] = arr[4][:-1]
    return arr

def get_sepal_length(arr):
    val = arr[0]
    return val

def get_sepal_width(arr):
    val = arr[1]
    return val

def get_petal_length(arr):
    val = arr[2]
    return val

def get_petal_width(arr):
    val = arr[3]
    return val

def get_classes(arr):
    val = arr[4][2:-1]
    return val    

# register the helpers as UDFs (no returnType given, so each defaults to StringType)
convertUDF = udf(lambda z: convertType(z))
getSL = udf(lambda z: get_sepal_length(z))
getSW = udf(lambda z: get_sepal_width(z))
getPL = udf(lambda z: get_petal_length(z))
getPW = udf(lambda z: get_petal_width(z))
getC = udf(lambda z: get_classes(z))

df_new = data.select(col("offset"), \
    convertUDF(col("value")).alias("value"))

df_new = df_new.withColumn("sepal_length", getSL(col("value")).cast("float"))
df_new = df_new.withColumn("sepal_width", getSW(col("value")).cast("float"))
df_new = df_new.withColumn("petal_length", getPL(col("value")).cast("float"))
df_new = df_new.withColumn("petal_width", getPW(col("value")).cast("float"))
df_new = df_new.withColumn("classes", getC(col("value")))

query = df_new\
        .writeStream \
        .format("console") \
        .start()

query.awaitTermination()

Note that the arr[i][2:-1], ... is due to the format of the data in df.value. It was '"2.56" in my case. Dataproc is highly limiting, and the lengthy UDF approach was the best way I could find :).
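As a minimal illustration of that slicing (with a hypothetical element shaped like the one described above):

    # hypothetical element: the characters ' " 2 . 5 6 "
    s = '\'"2.56"'
    s[2:-1]         # '2.56' -> the first two characters and the trailing quote are dropped
    float(s[2:-1])  # 2.56, as done in convertType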
