I have two dataframes:
df1 =
+------------+
|questions   |
+------------+
|[Q1, Q2]    |
|[Q4, Q6, Q7]|
|...         |
+------------+
df2 =
+---+---+---+---+---+---+---+---+
| Q1| Q2| Q3| Q4| Q6| Q7|...|Q25|
+---+---+---+---+---+---+---+---+
|  1|  0|  1|  0|  0|  1|...|  1|
+---+---+---+---+---+---+---+---+
I'd like to add to the first dataframe a new column containing the values of all the columns listed in df1.questions.
Expected result:
df1 =
+------------+---------+
|questions   |values   |
+------------+---------+
|[Q1, Q2]    |[1, 0]   |
|[Q4, Q6, Q7]|[0, 0, 1]|
|...         |         |
+------------+---------+
When I do
cols_to_link = ['Q1', 'Q2']
df2 = df2.select(cols_to_link)
df2 = df2.withColumn('value', F.concat_ws(", ", *df2.columns))
the additional column is what I want, but I can't make it work across the two dataframes.
It also works when I stay within df2:
df2 = df2.select([col for col in df1.select('questions').collect()[0][0]])
df2 = df2.withColumn('value', F.concat_ws(", ", *df2.columns))
But it does not work when I try to start from df1:
df1= df1\
.withColumn('value', F.concat_ws(", ", *df2.select([col for col in df1.select('questions').collect()])))
Where am I wrong?
From my example dataframes,
# df1
+------------+
| questions|
+------------+
| [Q1, Q2]|
|[Q4, Q6, Q7]|
+------------+
# df2
+---+---+---+---+---+---+
| Q1| Q2| Q3| Q4| Q6| Q7|
+---+---+---+---+---+---+
| 1| 0| 1| 0| 0| 1|
+---+---+---+---+---+---+
I have created a vertical (melted) dataframe and joined it, because in general you cannot reference columns from another dataframe inside a column expression.
cols = df2.columns

# Melt df2 into (id, values): one row per column name/value pair.
df = df2.rdd.flatMap(lambda row: [[cols[i], row[i]] for i in range(len(row))]).toDF(['id', 'values'])
df.show()
+---+------+
| id|values|
+---+------+
| Q1| 1|
| Q2| 0|
| Q3| 1|
| Q4| 0|
| Q6| 0|
| Q7| 1|
+---+------+
df1.join(df, f.expr('array_contains(questions, id)'), 'left') \
.groupBy('questions').agg(f.collect_list('values').alias('values')) \
.show()
+------------+---------+
| questions| values|
+------------+---------+
| [Q1, Q2]| [1, 0]|
|[Q4, Q6, Q7]|[0, 0, 1]|
+------------+---------+
Creating the dataframe:
a = spark.createDataFrame([
("1", "0", "0","A"),
("1", "0", "2","B"),
("1", "1", "2","C"),
("1", "1", "3","H"),
("1", "2", "2","D"),
("1", "2", "2","E")
], ["val1", "val2", "val3","val4"])
Create an array column, explode it, and get the counts:
from pyspark.sql.functions import array, col, explode

df_a = a.withColumn('arr_val', array(col('val1'), col('val2'), col('val3')))
df_b = df_a.withColumn('repeats', explode(col('arr_val'))).\
    groupby(['val1', 'val2', 'val3', 'repeats']).count().\
    filter(col('count') > 1)
df_a.show(truncate=False)
+----+----+----+----+---------+
|val1|val2|val3|val4|arr_val |
+----+----+----+----+---------+
|1 |0 |0 |A |[1, 0, 0]|
|1 |0 |2 |B |[1, 0, 2]|
|1 |1 |2 |C |[1, 1, 2]|
|1 |1 |3 |H |[1, 1, 3]|
|1 |2 |2 |D |[1, 2, 2]|
|1 |2 |2 |E |[1, 2, 2]|
+----+----+----+----+---------+
df_b.show()
+----+----+----+-------+-----+
|val1|val2|val3|repeats|count|
+----+----+----+-------+-----+
| 1| 0| 0| 0| 2|
| 1| 2| 2| 2| 2|
| 1| 1| 3| 1| 2|
| 1| 1| 2| 1| 2|
+----+----+----+-------+-----+
I do feel this is unoptimized.
It would be great if someone could optimize it with something like expr('filter(arr_val, x-> Count(x))').