Left Semi Join on Geo-Spatial tables in Spark-SQL & GeoMesa
Problem:

I have 2 tables (d1 & d2) containing geo-spatial points. I want to carry out the following query:
select * from table1 where table1.point is within 50km of any point in table2.point
I am using Spark-SQL with GeoMesa and Accumulo to achieve this (Spark as the processing engine, Accumulo as the data store, and GeoMesa for the geo-spatial libraries).
The above query is a kind of left semi join, but I am not sure how to achieve it using Spark-SQL because, as far as I have read, subqueries can't be used in the where clause.
I was able to achieve this using:
select * from d1 left semi join d2 on st_contains(st_bufferPoint(d1.point, 10000.0), d2.point)
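For reference, the semantics of that query can be sketched in plain Python (no Spark or GeoMesa), with a haversine distance check standing in for the `st_bufferPoint`/`st_contains` pair; the function names and the 50 km threshold below are illustrative assumptions, not GeoMesa API:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance between two (lon, lat) points in kilometres.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def left_semi_join_within(d1, d2, radius_km=50.0):
    # Keep each d1 row at most once if ANY d2 point lies within radius_km.
    # Emitting only d1 columns, at most once per row, is exactly what makes
    # this a left SEMI join rather than an inner join.
    return [p for p in d1 if any(haversine_km(*p, *q) <= radius_km for q in d2)]

d1 = [(0.0, 0.0), (10.0, 10.0)]   # (lon, lat) pairs
d2 = [(0.1, 0.1), (0.2, 0.2)]
print(left_semi_join_within(d1, d2))  # only the point near d2 survives
```

This is O(|d1| x |d2|), which is essentially what the broadcast join does per partition, and why it struggles at 5 billion x 10 million rows.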
Spark broadcasts d2 and carries out the join, but it is still taking a long time because d1 has 5 billion rows and d2 has 10 million.

I am not sure, though, whether there is a more efficient way to achieve the same.
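One common way to speed up this kind of radius join (a suggestion on my part, not something the query above does automatically) is to bucket both tables into grid cells at least as large as the search radius, join on the cell id, and only then apply the exact distance check, so each d1 point is compared against a handful of nearby d2 points instead of all 10 million. A minimal single-machine sketch of the idea, with an assumed 0.5 degree cell size (note that longitude cells shrink toward the poles, so a real implementation would widen the neighbourhood search at high latitudes):

```python
import math
from collections import defaultdict

EARTH_RADIUS_KM = 6371.0
CELL_DEG = 0.5  # ~55 km of latitude per cell; must cover the search radius

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance between two (lon, lat) points in kilometres.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def cell(lon, lat):
    # Integer grid-cell id; in Spark this would be a derived column
    # used as an equi-join key.
    return (int(math.floor(lon / CELL_DEG)), int(math.floor(lat / CELL_DEG)))

def grid_semi_join(d1, d2, radius_km=50.0):
    # Index the smaller table (d2) by grid cell.
    index = defaultdict(list)
    for q in d2:
        index[cell(*q)].append(q)
    out = []
    for p in d1:
        cx, cy = cell(*p)
        # Check the point's own cell plus its 8 neighbours, so matches
        # that fall just across a cell boundary are not missed.
        candidates = (q for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      for q in index[(cx + dx, cy + dy)])
        if any(haversine_km(*p, *q) <= radius_km for q in candidates):
            out.append(p)
    return out
```

In Spark-SQL terms this corresponds to adding a cell-id column to both tables, doing an equi-join (or semi join) on that column across the 3x3 neighbourhood, and keeping the `st_contains(st_bufferPoint(...))` predicate only as the final exact filter, which lets Spark shuffle on a cheap integer key instead of evaluating the spatial predicate for every d1/d2 pair.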