
How to concat the same column value to a new column with comma delimiters in Spark


The input data has the following format:

+--------------------+-------------+--------------------+
|           date     |       user  |           product  |
+--------------------+-------------+--------------------+
|        2016-10-01  |        Tom  |           computer |
+--------------------+-------------+--------------------+
|        2016-10-01  |        Tom  |           iphone   |
+--------------------+-------------+--------------------+
|        2016-10-01  |       Jhon  |             book   |
+--------------------+-------------+--------------------+
|        2016-10-02  |        Tom  |             pen    |
+--------------------+-------------+--------------------+
|        2016-10-02  |       Jhon  |             milk   |
+--------------------+-------------+--------------------+

The output format is as follows:

+-----------+-----------------------+
|     user  |        products       |
+-----------+-----------------------+
|     Tom   |   computer,iphone,pen |
+-----------+-----------------------+
|     Jhon  |          book,milk    |
+-----------+-----------------------+

The output shows all the products bought by each user, ordered by date.

I want to process this data with Spark. Could anyone help me? Thank you.

It is better to use a map/reduceByKey() combination rather than groupBy, since reduceByKey combines values within each partition before shuffling. Also assuming the data does not have …

// Read the data with val ordersRDD = sc.textFile("/file/path");
// here the same records are built in memory for the example:
val ordersRDD = sc.parallelize(List(("2016-10-01", "Tom", "computer"),
    ("2016-10-01", "Tom", "iphone"),
    ("2016-10-01", "Jhon", "book"),
    ("2016-10-02", "Tom", "pen"),
    ("2016-10-02", "Jhon", "milk")))

// key by (user, date), sort by key, then keep only the user as key and
// reduceByKey per user, concatenating the products with commas
val dtusrGrpRDD = ordersRDD.map(rec => ((rec._2, rec._1), rec._3))
   .sortByKey().map(x => (x._1._1, x._2))
   .reduceByKey((acc, v) => acc + "," + v)

// if needed, convert it to a DataFrame
scala> dtusrGrpRDD.toDF("user", "product").show()
+----+-------------------+
|user|            product|
+----+-------------------+
| Tom|computer,iphone,pen|
|Jhon|          book,milk|
+----+-------------------+

If you are using HiveContext (which you should be):

An example using Python:

from pyspark.sql.functions import collect_set

df = ... load your df ...
new_df = df.groupBy("user").agg(collect_set("product").alias("products"))

You can use collect_list instead if you do not want the resulting list of products to be deduplicated.

For DataFrames it is two lines:

import org.apache.spark.sql.functions.collect_list
// collect_set instead of collect_list if you don't want duplicates
// (here join refers to the input DataFrame with the user and product columns)
val output = join.groupBy("user").agg(collect_list($"product"))

groupBy gives you the data grouped by user; on that grouped dataset you can then apply collect_list or collect_set.
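
Note that collect_list / collect_set produce an array column, while the question asks for a comma-delimited string. A minimal sketch of that last step (assuming the input DataFrame is named df and has the user and product columns shown above) could wrap the aggregation in concat_ws:

import org.apache.spark.sql.functions.{collect_list, concat_ws}

// df is assumed to be the input DataFrame with the user and product columns
val products = df.groupBy("user")
  .agg(concat_ws(",", collect_list("product")).alias("products"))

products.show()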

