Spark-SQL using withColumnRenamed()

I am trying to load a Parquet file with columns storyId1 and publisher1. I want to find all pairs of publishers that publish articles about the same stories. For each publisher pair I need to report the number of co-published stories, where a co-published story is a story published by both publishers, and report the pairs in decreasing order of frequency. The solution must conform to the following rules:

1. There should not be any replicated entries like: NASDAQ, NASDAQ, 1000
2. The same pair should not occur twice in opposite order. Only one of the following should occur: NASDAQ, Reuters, 1000 or Reuters, NASDAQ, 1000 (i.e. it is incorrect to have both of those two lines in your result).

So far I have tried the following code:

    import org.apache.spark.sql._
    import org.apache.spark.sql.types._
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.expressions._
    import spark.implicits._

    val worddocDF = spark.read.parquet("file:///home/user204943816622/t4_story_publishers.parquet")
    val worddocDF1 = spark.read.parquet("file:///home/user204943816622/t4_story_publishers.parquet")
    worddocDF.cache()
    val joinDF = worddocDF.join(worddocDF1, "storyId1").withColumnRenamed("worddocDF.publisher1", "publisher2")
    joinDF.filter($"publisher1" !== $"publisher2")
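As written, the withColumnRenamed call is a silent no-op: it matches plain column names, and no column is literally named "worddocDF.publisher1", so after the join both publisher columns are still called publisher1 and the filter cannot resolve publisher2 (note also that !== is deprecated in favor of =!= since Spark 2.0). One minimal workaround under the same setup is to rename the column before joining, while its name is still unambiguous:

    val worddocDF1 = spark.read
      .parquet("file:///home/user204943816622/t4_story_publishers.parquet")
      .withColumnRenamed("publisher1", "publisher2")  // rename before the join, while the name is unique

    val joinDF = worddocDF.join(worddocDF1, "storyId1")
      .filter($"publisher1" =!= $"publisher2")        // drop same-publisher rows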

Input format:

[ddUyU0VZz0BRneMioxUPQVP6sIxvM, Livemint]

[ddUyU0VZz0BRneMioxUPQVP6sIxvM, IFA Magazine]

[ddUyU0VZz0BRneMioxUPQVP6sIxvM, Moneynews]

[ddUyU0VZz0BRneMioxUPQVP6sIxvM, NASDAQ]

[dPhGU51DcrolUIMxbRm0InaHGA2XM, IFA Magazine]

[ddUyU0VZz0BRneMioxUPQVP6sIxvM, Los Angeles Times]

[dPhGU51DcrolUIMxbRm0InaHGA2XM, NASDAQ]

Required output:

[NASDAQ,IFA Magazine,2]

[Moneynews,Livemint,1]

[Moneynews,IFA Magazine,1]

[NASDAQ,Livemint,1]

[NASDAQ,Los Angeles Times,1]

[Moneynews,Los Angeles Times,1]

[Los Angeles Times,IFA Magazine,1]

[Livemint,IFA Magazine,1]

[NASDAQ,Moneynews,1]

[Los Angeles Times,Livemint,1]
A self-join with DataFrame aliases avoids the ambiguous column names:

    import spark.implicits._

    wordDocDf.as("a")
    .join(
      wordDocDf.as("b"),
      $"a.storyId1" === $"b.storyId1" && $"a.publisher1" =!= $"b.publisher1",
      "inner"
    )
    .select(
      $"a.storyId1".as("storyId"),
      $"a.publisher1".as("publisher1"),
      $"b.publisher1".as("publisher2")
    )
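The select above still emits each pair twice, once per ordering. A minimal sketch of the remaining aggregation, assuming the joined result above is assigned to a val named pairsDf (a name introduced here for illustration): keeping only the lexicographically smaller ordering satisfies rule 2, and the =!= join condition already rules out self-pairs (rule 1).

    import org.apache.spark.sql.functions._

    val result = pairsDf
      .filter($"publisher1" < $"publisher2")       // keep one canonical order per pair (rule 2)
      .groupBy($"publisher1", $"publisher2")
      .agg(count($"storyId").as("coPublished"))    // number of co-published stories
      .orderBy($"coPublished".desc)                // decreasing frequency

    result.show(false)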
