Spark Scala: Join two DataFrames by near position and time range
I have two DataFrames:

A DataFrame DF1 with the following structure: (ID, StartDate, EndDate, Position)

A DataFrame DF2 that looks like this: (DateTime, Position)

From these DataFrames I want to create a new DataFrame containing, for each DF1(ID), the number of rows in DF2 whose DF2(DateTime) falls between DF1(StartDate) and DF1(EndDate) and whose DF2(Position) is near DF1(Position).

We can assume I have a function isNearUDF(pos1, pos2) that does the position comparison.

I am currently trying to do this with a join between the two DataFrames, but that does not seem to be the right solution.
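For reference, the intended logic (a time-range filter plus a proximity check, then a count per ID) can be sketched with plain Scala collections before expressing it as a Spark join. This is only an illustration of the logic, not the Spark solution; `near` is a hypothetical stand-in for the real position comparison, and the case class and field names are made up for the sketch:

```scala
// Sketch of the counting logic on plain Scala collections (not Spark).
// `near` is a hypothetical placeholder for the real position comparison.
case class Ref(id: String, start: String, end: String, pos: Double)
case class Obs(dateTime: String, pos: Double)

def near(p1: Double, p2: Double): Boolean = math.abs(p1 - p2) <= 1.0

def countMatches(refs: Seq[Ref], obs: Seq[Obs]): Map[String, Int] =
  refs.map { r =>
    // ISO-8601 timestamps with the same fixed offset compare correctly as strings
    r.id -> obs.count(o => o.dateTime >= r.start && o.dateTime <= r.end && near(r.pos, o.pos))
  }.toMap
```

In Spark the same shape becomes a join on the range-and-proximity condition followed by a groupBy("ID").count().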
Edit 2:

Here is the MVCE:
def isInRadius(lat1: Double, lon1: Double, lat2: Double, lon2: Double, dist: Double): Boolean = {
  val distance = 0.0 // calculate distance between lon/lat positions
  distance <= dist
}
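For completeness, the distance stub above could be filled in with the haversine formula. This is a minimal sketch, assuming `dist` is meant in kilometres and using a mean Earth radius of 6371 km; the function names here are my own, not from the original post:

```scala
// Haversine great-circle distance in kilometres (assumes a spherical Earth, R = 6371 km).
def haversineKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double = {
  val R = 6371.0
  val dLat = math.toRadians(lat2 - lat1)
  val dLon = math.toRadians(lon2 - lon1)
  val a = math.pow(math.sin(dLat / 2), 2) +
    math.cos(math.toRadians(lat1)) * math.cos(math.toRadians(lat2)) * math.pow(math.sin(dLon / 2), 2)
  2 * R * math.asin(math.sqrt(a))
}

def isInRadiusKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double, dist: Double): Boolean =
  haversineKm(lat1, lon1, lat2, lon2) <= dist
```

One degree of longitude at the equator is roughly 111.2 km, which makes a convenient sanity check.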
val DF1 = sc.parallelize(Array(
  ("ID1", "2018-02-27T13:47:59.416+01:00", "2018-03-01T16:02:00.632+01:00", "25.13297154663", "55.13297154663"),
  ("ID2", "2018-02-25T13:47:59.416+01:00", "2018-02-07T16:02:00.632+01:00", "26.13297154663", "55.13297154663"),
  ("ID3", "2018-02-24T13:47:59.416+01:00", "2018-02-02T16:02:00.632+01:00", "25.13297154663", "55.13297154663")
  // ...
)).toDF("ID", "CreationDate", "EndDate", "Lat1", "Lon1")
val DF2 = sc.parallelize(Array(
  ("2018-02-27T13:47:59.416+01:00", "25.13297154663", "55.13297154663"),
  ("2018-02-27T13:47:59.416+01:00", "25.1304663", "54.10663"),
  ("2018-02-27T13:47:59.416+01:00", "25.1354663", "55.132904663")
  // ...
)).toDF("DateTime", "Lat2", "Lon2")
val isInRadiusUdf = udf(isInRadius _)
val DF3 = DF1.join(DF2, $"DateTime" >= $"CreationDate" && $"DateTime" <= $"EndDate" /*&& isInRadiusUdf($"Lat1", $"Lon1", $"Lat2", $"Lon2", lit(10))*/)
display(DF3)
This works for the date condition, but it takes a very long time. When I add the isInRadius condition, I get this error:
SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.spark.sql.DataFrameReader
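A NotSerializableException like this usually means the UDF's closure accidentally captured something non-serializable from the enclosing scope (here apparently a DataFrameReader, e.g. a stray reference to spark.read elsewhere in the notebook). A common workaround, sketched here under that assumption (the object name is hypothetical), is to define the function in a standalone serializable object so the closure carries no notebook state:

```scala
// Hypothetical fix sketch: keep the UDF's function in a top-level object
// so its closure captures no non-serializable notebook state.
object GeoFunctions extends Serializable {
  def isInRadius(lat1: Double, lon1: Double, lat2: Double, lon2: Double, dist: Double): Boolean = {
    val distance = 0.0 // calculate distance between lon/lat positions
    distance <= dist
  }
}

// In the notebook:
// val isInRadiusUdf = udf(GeoFunctions.isInRadius _)
```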
I tried changing the function definition to a curried form:

def isInRadius: Double => Double => Double => Double => Double => Boolean =
  lat1 => long1 => lat2 => long2 => dist => {
    val distance = 0.0 // calculate distance between lon/lat positions
    distance <= dist
  }
After trying every possible solution and getting weird results, I finally solved my problem by simply restarting the Spark cluster (Databricks notebook). I have no idea exactly what went wrong, but the MVCE code now works fine.