How to apply a filter on a column (with datatype array (of strings)) on a PySpark dataframe?
I have a PySpark dataframe:
df = spark.createDataFrame([
    ("u1", ['a', 'b']),
    ("u2", ['c', 'b']),
    ("u3", ['a', 'b']),
], ['user_id', 'features'])
print(df.printSchema())
df.show(truncate=False)
Output:
root
|-- user_id: string (nullable = true)
|-- features: array (nullable = true)
| |-- element: string (containsNull = true)
None
+-------+--------+
|user_id|features|
+-------+--------+
|u1 |[a, b] |
|u2 |[c, b] |
|u3 |[a, b] |
+-------+--------+
I only want to keep the rows whose features column is [a, b]. Since the column is an array of strings, I can't use a simple filter.
How can I do this?
Expected output:
+-------+--------+
|user_id|features|
+-------+--------+
|u1 |[a, b] |
|u3 |[a, b] |
+-------+--------+
You can build the literal array with array(lit(...)) and compare it to the column, using array_sort so the comparison ignores element order:
import pyspark.sql.functions as F

# Sort both arrays so the comparison is order-insensitive
# (a row stored as ['b', 'a'] would also match ['a', 'b']).
df2 = df.filter(F.array_sort(F.col('features')) == F.array_sort(F.array(F.lit('a'), F.lit('b'))))
df2.show()
+-------+--------+
|user_id|features|
+-------+--------+
| u1| [a, b]|
| u3| [a, b]|
+-------+--------+
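If you know the elements in features always appear in the same order as the literal (as they do in the sample data), a plain equality check against the literal array is enough and the array_sort calls can be dropped. A minimal sketch, assuming the df defined above (df3 is just an illustrative name):

import pyspark.sql.functions as F

# Assumes the element order in `features` always matches ['a', 'b'];
# compares the column directly against the literal array.
df3 = df.filter(F.col('features') == F.array(F.lit('a'), F.lit('b')))
df3.show()

This returns the same two rows (u1 and u3) for the sample data, but a row stored as ['b', 'a'] would not match, which is why the answer above normalises both sides with array_sort.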