Using rlike with list to create new df scala
I just started with Scala 2 days ago.
Here's the thing: I have a df and a list. The df contains two columns, paragraphs and authors, and the list contains words (strings). I need to get, by author, the count of all paragraphs in which each word on the list appears.
So far my idea was to loop over the list, query the df using rlike, and build a new df from the results, but even if that approach works, I don't know how to write it. Any help is appreciated!
Edit: adding example data and expected output
// Example df and list
val df = Seq(("auth1", "some text word1"), ("auth2", "some text word2"), ("auth1", "more text word1")).toDF("a", "t")
df.show
+-------+---------------+
| a| t|
+-------+---------------+
|auth1 |some text word1|
|auth2 |some text word2|
|auth1 |more text word1|
+-------+---------------+
val list = List("word1", "word2")
// Expected output
newDF.show
+-------+-----+----------+
| word| a|text count|
+-------+-----+----------+
|word1 |auth1| 2|
|word2 |auth2| 1|
+-------+-----+----------+
You can do a filter and aggregation for each word in the list, and combine all the resulting DataFrames using unionAll (on Spark 2.0+, union is the preferred equivalent):
import org.apache.spark.sql.functions.{count, lit}

val result = list.map(word =>
  df.filter(df("t").rlike(s"\\b${word}\\b"))   // whole-word match via \b boundaries
    .groupBy("a")
    .agg(lit(word).as("word"), count(lit(1)).as("text count"))
).reduce(_ unionAll _)
result.show
+-----+-----+----------+
|    a| word|text count|
+-----+-----+----------+
|auth1|word1|         2|
|auth2|word2|         1|
+-----+-----+----------+
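If the list is long, running one filter-and-aggregate job per word can get slow. As a rough alternative sketch (an assumption on my part, not from the answer above, and weaker than the \b regex since it treats words as whitespace-separated tokens), you could tokenize each paragraph once, drop duplicate tokens so a paragraph counts at most once per word, keep only tokens from the search list, and count per (word, author) pair in a single pass:

```scala
import org.apache.spark.sql.functions._

// Single-pass alternative: explode distinct tokens, filter against the list.
// array_distinct requires Spark 2.4+.
val result2 = df
  .withColumn("word", explode(array_distinct(split(col("t"), "\\s+"))))
  .filter(col("word").isin(list: _*))
  .groupBy("word", "a")
  .agg(count(lit(1)).as("text count"))
```

Unlike the rlike version, this only matches exact tokens (punctuation attached to a word would break the match), so which approach fits depends on how clean the text is.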