How to correctly join two dataframes in Spark
Given these datasets:
Product metadata DF
{'asin': '0006428320', 'title': 'Six Sonatas For Two Flutes Or Violins, Volume 2 (#4-6)', 'price': 17.95, 'imUrl': 'http://ecx.images-amazon.com/images/I/41EpRmh8MEL._SY300_.jpg', 'salesRank': {'Musical Instruments': 207315}, 'categories': [['Musical Instruments', 'Instrument Accessories', 'General Accessories', 'Sheet Music Folders']]}
Product ratings DF
{"reviewerID": "AORCXT2CLTQFR", "asin": "0006428320", "reviewerName": "Justo Roteta", "helpful": [0, 0], "overall": 4.0, "summary": "Not a classic but still a good album from Yellowman.", "unixReviewTime": 1383436800, "reviewTime": "11 3, 2013"}
and this function:
def findProductFeatures(productsRatingsDF: DataFrame, productsMetadataDF: DataFrame): DataFrame = {
  productsRatingsDF
    .withColumn("averageRating", avg("overall"))
    .join(productsMetadataDF, "asin")
    .select($"asin", $"categories", $"price", $"averageRating")
}
Is this the correct way to join these two datasets on asin?
This is the error I get:
Exception in thread "main" org.apache.spark.sql.AnalysisException: grouping expressions sequence is empty, and '`asin`' is not an aggregate function. Wrap '(avg(`overall`) AS `averageRating`)' in windowing function(s) or wrap '`asin`' in first() (or first_value) if you don't care which value you get.;;
Aggregate [asin#6, helpful#7, overall#8, reviewText#9, reviewTime#10, reviewerID#11, reviewerName#12, summary#13, unixReviewTime#14L, avg(overall#8) AS averageRating#99]
+- Relation[asin#6,helpful#7,overall#8,reviewText#9,reviewTime#10,reviewerID#11,reviewerName#12,summary#13,unixReviewTime#14L] json
Am I understanding the error correctly — is something wrong with the way I'm joining? I tried swapping the order of .withColumn and .join, but that didn't help. I also seem to get an error when I try to compute avg("overall") into a column per asin.
The end result should be a dataframe with four columns: "asin", "categories", "price", and "averageRating".
The problem seems to be this line:
.withColumn("averageRating", avg("overall"))
avg is an aggregate function, so Spark needs a grouping (or a window) to evaluate it — it can't be computed in a plain .withColumn. Do a proper aggregation before joining:
df
  .groupBy("asin") // your columns
  .agg(avg("overall").as("averageRating"))
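Putting that together, a sketch of the corrected function (assuming the same column names as in the question; the Spark imports shown are what the snippet needs):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.avg

// Aggregate the ratings to one averageRating per asin first,
// then join the per-product averages onto the metadata.
def findProductFeatures(productsRatingsDF: DataFrame,
                        productsMetadataDF: DataFrame): DataFrame = {
  productsRatingsDF
    .groupBy("asin")                          // one group per product
    .agg(avg("overall").as("averageRating"))  // mean of all its ratings
    .join(productsMetadataDF, "asin")         // inner join on asin
    .select("asin", "categories", "price", "averageRating")
}
```

Grouping before the join also keeps the aggregation cheap: only the small per-asin averages are shuffled into the join, not every review row.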
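The error message also mentions window functions: if you instead want to keep every review row and just attach the per-product average as an extra column, a sketch using a window aggregate (the function name `withAverageRating` is illustrative, not from the question):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.avg

// Adds the per-asin average rating as a new column without
// collapsing the rows, using avg over a window instead of groupBy.
def withAverageRating(productsRatingsDF: DataFrame): DataFrame =
  productsRatingsDF.withColumn(
    "averageRating",
    avg("overall").over(Window.partitionBy("asin")))
```

For the four-column result asked for here, the groupBy version is the better fit; the window version is useful when the individual reviews must survive alongside the aggregate.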