pyspark: get the distinct elements of list values
I have an rdd in this form,
rdd = sc.parallelize([('A', [1, 2, 4, 1, 2, 5]), ('B', [2, 3, 2, 1, 5, 10]), ('C', [3, 2, 5, 10, 5, 2])])
but I want to transform the rdd like below,
newrdd = [('A', [1, 2, 4, 5]), ('B', [2, 3, 1, 5, 10]), ('C', [3, 2, 5, 10])]
meaning, I have to get the distinct elements of the values. reduceByKey() doesn't help here.
How can I achieve this?
Since Spark 2.4 you can use the PySpark SQL function array_distinct:
from pyspark.sql.functions import array_distinct, col

df = rdd.toDF(["category", "values"])
df.withColumn("foo", array_distinct(col("values"))).show()
+--------+-------------------+----------------+
|category| values| foo|
+--------+-------------------+----------------+
| A| [1, 2, 4, 1, 2, 5]| [1, 2, 4, 5]|
| B|[2, 3, 2, 1, 5, 10]|[2, 3, 1, 5, 10]|
| C|[3, 2, 5, 10, 5, 2]| [3, 2, 5, 10]|
+--------+-------------------+----------------+
It has the advantage of not converting the JVM objects to Python objects and is therefore more efficient than any Python UDF. However, it's a DataFrame function, so you must convert the RDD to a DataFrame. That's also recommended for most cases.
Here is a direct way to get the result in Python. Note that RDDs are immutable.
Setup Spark Session/Context
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local") \
    .appName("SO Solution") \
    .getOrCreate()
sc = spark.sparkContext
Solution Code
rdd = sc.parallelize([('A', [1, 2, 4, 1, 2, 5]), ('B', [2, 3, 2, 1, 5, 10]), ('C', [3, 2, 5, 10, 5, 2])])
newrdd = rdd.map(lambda x: (x[0], list(set(x[1]))))
newrdd.collect()
Output
[('A', [1, 2, 4, 5]), ('B', [1, 2, 3, 5, 10]), ('C', [10, 2, 3, 5])]
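Equivalently, rdd.mapValues(lambda v: list(set(v))) applies the same function while leaving the keys untouched. The per-record logic can also be sanity-checked without a cluster; a small sketch (sorted() is added here only to make the output deterministic, since set() does not fix an order):

```python
# Plain-Python check of the per-record logic used in rdd.map above.
data = [('A', [1, 2, 4, 1, 2, 5]), ('B', [2, 3, 2, 1, 5, 10]), ('C', [3, 2, 5, 10, 5, 2])]

def dedupe(pair):
    key, values = pair
    # sorted() makes the result deterministic; set() alone guarantees no order
    return (key, sorted(set(values)))

result = [dedupe(p) for p in data]
print(result)
# [('A', [1, 2, 4, 5]), ('B', [1, 2, 3, 5, 10]), ('C', [2, 3, 5, 10])]
```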
You can convert the array to a set to get the distinct values. Here is how; I have changed the syntax a little bit to use Scala.
val spark : SparkSession = SparkSession.builder
.appName("Test")
.master("local[2]")
.getOrCreate()
import spark.implicits._
val df = spark.createDataset(List(("A", Array(1, 2, 4, 1, 2, 5)), ("B", Array(2, 3, 2, 1, 5, 10)), ("C", Array(3, 2, 5, 10, 5, 2))))
df.show()
val dfDistinct = df.map(r => (r._1, r._2.toSet))
dfDistinct.show()
old_rdd = [('A', [1, 2, 4, 1, 2, 5]), ('B', [2, 3, 2, 1, 5, 10]), ('C', [3, 2, 5, 10, 5, 2])]
new_rdd = [(letter, set(numbers)) for letter, numbers in old_rdd]
Like this? Or list(set(numbers)) if you really need them to be a list?
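One caveat with set(): it does not preserve the original element order. If order matters, dict.fromkeys (insertion-ordered since Python 3.7) removes duplicates while keeping first-occurrence order; a small sketch of that variant:

```python
# Order-preserving de-duplication: dict.fromkeys keeps first-occurrence order,
# which a plain set() does not guarantee.
old_rdd = [('A', [1, 2, 4, 1, 2, 5]), ('B', [2, 3, 2, 1, 5, 10]), ('C', [3, 2, 5, 10, 5, 2])]
new_rdd = [(letter, list(dict.fromkeys(numbers))) for letter, numbers in old_rdd]
print(new_rdd)
# [('A', [1, 2, 4, 5]), ('B', [2, 3, 1, 5, 10]), ('C', [3, 2, 5, 10])]
```

This matches the exact ordering shown in the question's expected newrdd.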