Pyspark subselect / subquery join using dataframes
I am looking to join to a value based on the closest match below that value. In SQL I can do this quite easily. Consider the following data:
tblActuals:
| Date       | Temperature |
| 09/02/2020 | 14.1        |
| 10/02/2020 | 15.3        |
| 11/02/2020 | 12.2        |
| 12/02/2020 | 12.4        |
| 13/02/2020 | 12.5        |
| 14/02/2020 | 11          |
| 15/02/2020 | 14.6        |
tblCoefficients:
| Metric | Coefficient |
| 10.5   | 0.997825593 |
| 11     | 0.997825593 |
| 11.5   | 0.997663198 |
| 12     | 0.997307614 |
| 12.5   | 0.996848773 |
| 13     | 0.996468537 |
| 13.5   | 0.99638519  |
| 14     | 0.996726301 |
| 14.5   | 0.997435894 |
| 15     | 0.998311153 |
| 15.5   | 0.999135509 |
In SQL I can achieve this join using the below:
Select
    a.Date,
    a.Temperature,
    (select top 1 b.Coefficient from tblCoefficients b where b.Metric <= a.Temperature order by b.Metric desc) as coefficient
from tblActuals a
Is there any way to achieve the same as the above with the data in two PySpark DataFrames? I can achieve a similar result in Spark SQL, but I need the flexibility of DataFrames for the process I am creating in Databricks.
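For reference, the two tables can be reproduced as DataFrames with something like the following (a minimal sketch, assuming an active SparkSession named spark; the variable names match the answer below):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data copied from the tables above
tblActuals = spark.createDataFrame(
    [('09/02/2020', 14.1), ('10/02/2020', 15.3), ('11/02/2020', 12.2),
     ('12/02/2020', 12.4), ('13/02/2020', 12.5), ('14/02/2020', 11.0),
     ('15/02/2020', 14.6)],
    ['Date', 'Temperature'],
)

tblCoefficients = spark.createDataFrame(
    [(10.5, 0.997825593), (11.0, 0.997825593), (11.5, 0.997663198),
     (12.0, 0.997307614), (12.5, 0.996848773), (13.0, 0.996468537),
     (13.5, 0.99638519), (14.0, 0.996726301), (14.5, 0.997435894),
     (15.0, 0.998311153), (15.5, 0.999135509)],
    ['Metric', 'Coefficient'],
)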
You can do a non-equi join and then, for each row of actuals, keep the coefficient of the maximum (i.e. closest) matching metric. Taking the max of a struct compares by its first field, so F.max(F.struct('Metric', 'Coefficient')) yields the struct with the largest Metric, from which the Coefficient is extracted:
import pyspark.sql.functions as F

result = tblActuals.join(
    tblCoefficients,
    # join each actual row to every metric at or below its temperature
    tblActuals['Temperature'] >= tblCoefficients['Metric']
).groupBy(tblActuals.columns).agg(
    # max of a struct orders by its first field (Metric), so this picks
    # the Coefficient belonging to the closest metric below the temperature
    F.max(F.struct('Metric', 'Coefficient'))['Coefficient'].alias('coefficient')
)
result.show()
+----------+-----------+-----------+
| Date|Temperature|coefficient|
+----------+-----------+-----------+
|15/02/2020| 14.6|0.997435894|
|12/02/2020| 12.4|0.997307614|
|14/02/2020| 11.0|0.997825593|
|13/02/2020| 12.5|0.996848773|
|11/02/2020| 12.2|0.997307614|
|10/02/2020| 15.3|0.998311153|
|09/02/2020| 14.1|0.996726301|
+----------+-----------+-----------+
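If you are on Spark 3.3 or later, the struct trick can be written more directly with max_by, which returns the value of one column at the row where another column is largest. A sketch of the equivalent aggregation:

import pyspark.sql.functions as F

result = tblActuals.join(
    tblCoefficients,
    tblActuals['Temperature'] >= tblCoefficients['Metric']
).groupBy(tblActuals.columns).agg(
    # F.max_by('Coefficient', 'Metric') (PySpark 3.3+) returns the Coefficient
    # from the row with the largest Metric in each group
    F.max_by('Coefficient', 'Metric').alias('coefficient')
)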