PySpark: Split a column of lists into multiple columns

This question is similar to one already asked for Pandas here. I am using Google Cloud Dataproc clusters to execute a function, and hence can't convert the data into pandas.

I would like to convert the following:

+----+----------------------------------+-----+---------+------+--------------------+-------------+
| key|                             value|topic|partition|offset|           timestamp|timestampType|
+----+----------------------------------+-----+---------+------+--------------------+-------------+
|null|["sepal_length","sepal_width",...]| iris|        0|   289|2021-04-11 22:32:...|            0|
|null|["5.0","3.5","1.3","0.3","setosa"]| iris|        0|   290|2021-04-11 22:32:...|            0|
|null|["4.5","2.3","1.3","0.3","setosa"]| iris|        0|   291|2021-04-11 22:32:...|            0|
|null|["4.4","3.2","1.3","0.2","setosa"]| iris|        0|   292|2021-04-11 22:32:...|            0|
|null|["5.0","3.5","1.6","0.6","setosa"]| iris|        0|   293|2021-04-11 22:32:...|            0|
|null|["5.1","3.8","1.9","0.4","setosa"]| iris|        0|   294|2021-04-11 22:32:...|            0|
|null|["4.8","3.0","1.4","0.3","setosa"]| iris|        0|   295|2021-04-11 22:32:...|            0|
+----+----------------------------------+-----+---------+------+--------------------+-------------+

Into something like this:

+--------------+-------------+--------------+-------------+-------+
| sepal_length | sepal_width | petal_length | petal_width | class |
+--------------+-------------+--------------+-------------+-------+
| 5.0          | 3.5         | 1.3          | 0.3         | setosa| 
| 4.5          | 2.3         | 1.3          | 0.3         | setosa| 
| 4.4          | 3.2         | 1.3          | 0.2         | setosa| 
| 5.0          | 3.5         | 1.6          | 0.6         | setosa| 
| 5.1          | 3.8         | 1.9          | 0.4         | setosa| 
| 4.8          | 3.0         | 1.4          | 0.3         | setosa| 
+--------------+-------------+--------------+-------------+-------+

How do I go about doing this? Any help would be greatly appreciated!

I've gone the long way round because I'm relatively new to PySpark. Happy to learn if there is a shorter way (a possible shortcut is sketched after the output below).

  1. Recreated your dataframe in pandas

    df = pd.DataFrame({"value":['["sepal_length","sepal_width","petal_length","petal_width","class"]','["5.0","3.5","1.3","0.3","setosa"]','["4.5","2.3","1.3","0.3","setosa"]','["4.4","3.2","1.3","0.2","setosa"]']})

  2. Converted the pandas dataframe to a Spark dataframe (sdf)

    sdf = spark.createDataFrame(df)

  3. Stripped the square brackets and the " characters

    from pyspark.sql.functions import regexp_replace, col
    from pyspark.sql import functions as f

    sdf = sdf.withColumn('value', regexp_replace(col('value'), '[\\[\\"\\]]', ""))
    sdf.show(truncate=False)

  4. Split the value column on "," and converted the result to a dataframe with named columns

    df_split = sdf.select(f.split(sdf.value,",")).rdd.flatMap( lambda x: x).toDF(schema=["sepal_length","sepal_width","petal_length","petal_width","class"])

  5. Filtered out the header row (the non-numeric values)

    df_split = df_split.filter(df_split.sepal_length != "sepal_length")
    df_split.show()


+------------+-----------+------------+-----------+------+
|sepal_length|sepal_width|petal_length|petal_width| class|
+------------+-----------+------------+-----------+------+
|         5.0|        3.5|         1.3|        0.3|setosa|
|         4.5|        2.3|         1.3|        0.3|setosa|
|         4.4|        3.2|         1.3|        0.2|setosa|
+------------+-----------+------------+-----------+------+
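
For reference, steps 3-5 can also be collapsed into a single select. This is only a sketch, reusing the sdf built in step 2 and the same column names:

    from pyspark.sql import functions as f

    cols = ["sepal_length", "sepal_width", "petal_length", "petal_width", "class"]

    # strip [ ] " in one pass, split on "," and pick each element by position
    parts = f.split(f.regexp_replace("value", '[\\[\\"\\]]', ""), ",")
    df_short = sdf.select([parts.getItem(i).alias(c) for i, c in enumerate(cols)])

    # drop the header row
    df_short = df_short.filter(df_short.sepal_length != "sepal_length")
    df_short.show()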

After a lot of searching, I finally wrote code that solves it in a "Dataproc" manner. The code is as follows:

from pyspark.sql import SparkSession, Row
from pyspark.sql.functions import split, explode, col, regexp_replace, udf
from pyspark.sql import functions as f

spark = SparkSession \
        .builder \
        .appName("appName") \
        .getOrCreate()

spark.sparkContext.setLogLevel("WARN")

df = spark \
     .readStream \
     .format("kafka") \
     .option("kafka.bootstrap.servers", "ip:port") \
     .option("subscribe", "topic-name") \
     .load()

# keep only the columns needed downstream: the raw Kafka value and the offset
data = df.select([c for c in df.columns if c in ["value", "offset"]])

def convertType(val):
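    # decode the raw Kafka message bytes and split the list string on ","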
    arr = val.decode("utf-8").split(",")
    print(arr[0], arr[1], arr[2], arr[3])
    print("="*50)
    arr[0], arr[1], arr[2], arr[3] = float(arr[0][2:-1]), float(arr[1][2:-1]), float(arr[2][2:-1]), float(arr[3][2:-1])
    arr[4] = arr[4][:-1]
    return arr

def get_sepal_length(arr):
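    # each of the following getters picks one element out of the parsed value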
    val = arr[0]
    return val

def get_sepal_width(arr):
    val = arr[1]
    return val

def get_petal_length(arr):
    val = arr[2]
    return val

def get_petal_width(arr):
    val = arr[3]
    return val

def get_classes(arr):
    val = arr[4][2:-1]
    return val    

# wrap the helpers as UDFs so they can be applied to DataFrame columns
# (udf() defaults to a StringType return type)
convertUDF = udf(lambda z: convertType(z))
getSL = udf(lambda z: get_sepal_length(z))
getSW = udf(lambda z: get_sepal_width(z))
getPL = udf(lambda z: get_petal_length(z))
getPW = udf(lambda z: get_petal_width(z))
getC = udf(lambda z: get_classes(z))

df_new = data.select(col("offset"), \
    convertUDF(col("value")).alias("value"))

df_new = df_new.withColumn("sepal_length", getSL(col("value")).cast("float"))
df_new = df_new.withColumn("sepal_width", getSW(col("value")).cast("float"))
df_new = df_new.withColumn("petal_length", getPL(col("value")).cast("float"))
df_new = df_new.withColumn("petal_width", getPW(col("value")).cast("float"))
df_new = df_new.withColumn("classes", getC(col("value")))

query = df_new\
        .writeStream \
        .format("console") \
        .start()

query.awaitTermination()

Note that the arr[i][2:-1] slicing is due to the format of the data in df.value; each field looked like '"2.56" in my case. Dataproc is highly limiting, and the lengthy UDF approach was the best way I could find :).
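
For comparison, the same extraction can be done with built-in column functions instead of Python UDFs. This is only a sketch and assumes the Kafka value decodes to a string of the form ["5.0","3.5","1.3","0.3","setosa"]:

from pyspark.sql import functions as f

fields = ["sepal_length", "sepal_width", "petal_length", "petal_width", "classes"]

# cast the raw bytes to a string, strip [ ] " in one pass and split on ","
parts = f.split(f.regexp_replace(f.col("value").cast("string"), '[\\[\\"\\]]', ""), ",")

df_alt = data.select(
    f.col("offset"),
    *[parts.getItem(i).alias(c) for i, c in enumerate(fields)]
)

# the four measurements become floats, the class label stays a string
for c in fields[:-1]:
    df_alt = df_alt.withColumn(c, f.col(c).cast("float"))

Because everything stays in built-in column functions, this also runs on the streaming DataFrame without per-row Python UDF overhead.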
