I am looking to join to a value based on the closest match at or below that value. In SQL I can do this quite easily. Consider the following data:
tblActuals:
|Date |Temperature|
|09/02/2020 |14.1|
|10/02/2020 |15.3|
|11/02/2020 |12.2|
|12/02/2020 |12.4|
|13/02/2020 |12.5|
|14/02/2020 |11|
|15/02/2020 |14.6|
tblCoefficients:
|Metric |Coefficient|
|10.5 |0.997825593|
|11 |0.997825593|
|11.5 |0.997663198|
|12 |0.997307614|
|12.5 |0.996848773|
|13 |0.996468537|
|13.5 |0.99638519|
|14 |0.996726301|
|14.5 |0.997435894|
|15 |0.998311153|
|15.5 |0.999135509|
In SQL I can achieve this join by using the below (note the alias a on the outer table, which the correlated subquery refers to):

Select
    a.Date,
    a.Temperature,
    (select top 1 b.Coefficient from tblCoefficients b where b.Metric <= a.Temperature order by b.Metric desc) as Coefficient
from tblActuals a
Is there any way to achieve the same as the above with the data in two PySpark DataFrames? I can achieve a similar result in Spark SQL, but I need the flexibility of DataFrames for the process I am creating in Databricks.
You can do a non-equi join, which pairs each row with every Metric at or below its Temperature, and then keep the Coefficient belonging to the maximum (i.e. closest) Metric:
import pyspark.sql.functions as F

result = tblActuals.join(
    tblCoefficients,
    # non-equi join: keep every Metric at or below the row's Temperature
    tblActuals['Temperature'] >= tblCoefficients['Metric']
).groupBy(tblActuals.columns).agg(
    # structs compare field by field, so max picks the row with the largest Metric;
    # extracting 'Coefficient' then returns the coefficient for that Metric
    F.max(F.struct('Metric', 'Coefficient'))['Coefficient'].alias('coefficient')
)

result.show()
+----------+-----------+-----------+
| Date|Temperature|coefficient|
+----------+-----------+-----------+
|15/02/2020| 14.6|0.997435894|
|12/02/2020| 12.4|0.997307614|
|14/02/2020| 11.0|0.997825593|
|13/02/2020| 12.5|0.996848773|
|11/02/2020| 12.2|0.997307614|
|10/02/2020| 15.3|0.998311153|
|09/02/2020| 14.1|0.996726301|
+----------+-----------+-----------+
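As a sanity check on the semantics, the same "greatest Metric at or below the Temperature" lookup can be sketched in plain Python with the standard bisect module. This is only an illustration of the logic against the sample tables above, not the Spark code:

```python
from bisect import bisect_right

# tblCoefficients from above, sorted ascending by Metric
metrics = [10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 15.5]
coeffs = [0.997825593, 0.997825593, 0.997663198, 0.997307614,
          0.996848773, 0.996468537, 0.99638519, 0.996726301,
          0.997435894, 0.998311153, 0.999135509]

def coefficient_for(temperature):
    # Index of the greatest Metric <= temperature ("closest match below")
    i = bisect_right(metrics, temperature) - 1
    return coeffs[i] if i >= 0 else None

coefficient_for(14.6)  # -> 0.997435894 (Metric 14.5)
coefficient_for(12.4)  # -> 0.997307614 (Metric 12)
```

These values match the coefficient column in the result above.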