Match DataFrame column value against another DataFrame column and count hits
I have two Spark DataFrames, where df1 contains addresses and df2 contains street names, cities, regions, etc.
df1 = spark.createDataFrame([
["001", "Luc Krier","2363 Ryan Road, Long Lake South Dakota","2363RyanRoad,LongLakeSouthDakota"],
["002", "Jeanny Thorn","2263 Patton Lane Raleigh North Carolina","2263PattonLaneRaleighNorthCarolina"],
["003", "Teddy E Beecher","2839 Hartland Avenue Fond Du Lac Wisconsin","2839HartlandAvenueFondDuLacWisconsin"],
["004", "Philippe Schauss","1 Im Oberdorf Allemagne","1ImOberdorfAllemagne"],
["005", "Meindert I Tholen","Hagedoornweg 138 Amsterdam","Hagedoornweg138Amsterdam"]
]).toDF("id","name","address1", "address2")
df2 = spark.createDataFrame([
["US","Amsterdam"],
["US","SouthDakota"],
["LU","Allemagne"],
["FR","Allemagne"],
["NL","Amsterdam"],
["NL","Rotterdam"],
["US","Wisconsin"],
["AU","Wisconsin"],
["AU","Hartland"]
]).toDF("cc","point")
I want to check if df1['address2'] contains any of the values from df2['point']. The expected result (fictitious, and not in accordance with the example DataFrames) is a new column cc with values like:
('US':1)
('US':2)('NL':1)
('US':3)('FR':1)('LU':1)
('NL':1)
Here cc is taken from df2['cc'], together with the number of matches. An address can hit on multiple values from df2. The results should be sorted by number of matches (highest first).
You can perform a "conditional" join. But be aware that, as @Steven mentioned in his comment, this will create a cross-join. Performance-wise this will not be your best option, but it shows that what you are trying to achieve is possible when you leave performance out of consideration.
import pyspark.sql.functions as f

df_join = df1.join(df2, df1.address2.contains(df2.point), how='left')
result = (
    df_join
    .groupBy('id', 'name', 'address1', 'cc').count()
    .select('id', 'name', 'address1',
            f.concat(f.lit("'"), f.col("cc"), f.lit("':"), f.col("count")).alias('cc'))
    .groupBy('id', 'name', 'address1')
    .agg(f.concat_ws("", f.collect_list(f.col("cc"))).alias('cc'))
)
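For intuition (and to sanity-check the counts and the "highest first" ordering), the same contains-and-count logic can be sketched in plain Python using the sample data from the question. This is only an illustration of what the cross-join does, not part of the Spark answer; the helper name match_counts is made up here:

```python
# Plain-Python sketch of the conditional-join logic: for every address,
# count how many times each country code's points occur as substrings.
from collections import Counter

points = [
    ("US", "Amsterdam"), ("US", "SouthDakota"), ("LU", "Allemagne"),
    ("FR", "Allemagne"), ("NL", "Amsterdam"), ("NL", "Rotterdam"),
    ("US", "Wisconsin"), ("AU", "Wisconsin"), ("AU", "Hartland"),
]

def match_counts(address2):
    # "Cross-join": test every (cc, point) pair against the address string.
    hits = Counter(cc for cc, point in points if point in address2)
    # Sort by number of matches, highest first.
    return sorted(hits.items(), key=lambda kv: -kv[1])

print(match_counts("2839HartlandAvenueFondDuLacWisconsin"))
```

An address like "2839HartlandAvenueFondDuLacWisconsin" hits Wisconsin twice (US and AU) plus Hartland (AU), so AU comes out first with 2 matches; an address with no matching point yields an empty list, which corresponds to the null cc rows the left join produces in Spark.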
What may help is broadcasting df2 (the smaller one), e.g. joining against f.broadcast(df2), so Spark ships the small table to every executor instead of shuffling both sides.