I have two DataFrames in Spark (Scala), like this:
Customers:
+--------+-----------+----------------+-----------+---------------+---------------+
|id |postal_code|city_name |valeurPrise|latitudeOK |longitudeOK |
+--------+-----------+----------------+-----------+---------------+---------------+
|22318764|94200 |Ivry-sur-Seine |Number |48.815679000000|2.393150000000 |
|983026 |39330 |Mouchard |Street |46.978240000000|5.807290000000 |
|810029 |33260 |La Teste-de-Buch|Street |44.539033000000|-1.152371000000|
|1880521 |77360 |Vaires-sur-Marne|Street |48.877451000000|2.649342000000 |
|19502247|80090 |Amiens |Number |49.871260000000|2.300264000000 |
|17550309|72100 |Le Mans |Number |47.973960000000|0.206240000000 |
|22311804|94250 |Gentilly |Number |48.816344000000|2.340399000000 |
|284138 |14000 |Caen |Street |49.186034000000|-0.353779000000|
|2011904 |83000 |Toulon |Street |43.125340000000|5.930290000000 |
|21922785|92110 |Clichy |Number |48.910761000000|2.307201000000 |
+--------+-----------+----------------+-----------+---------------+---------------+
Shops:
+------+-----------+----------------+---------------+------+
|erd_cd|ville |gps_wgs84_lat |gps_wgs84_lon |active|
+------+-----------+----------------+---------------+------+
|31312 |MAMOUDZOU |-12.780550000000|45.227770000000|VRAI |
|31901 |ST JOSEPH |-21.376620000000|55.616100000000|VRAI |
|31307 |STE MARIE |-20.899934381104|55.517562110882|VRAI |
|31303 |ST BENOIT |-21.043730000000|55.717850000000|VRAI |
|31302 |ST PIERRE |-21.340676722653|55.477203422331|VRAI |
|35023 |STE SUZANNE|-20.929250000000|55.633290000000|VRAI |
|31305 |ST DENIS |-20.880840000000|55.450700000000|VRAI |
|31304 |LE PORT |-20.956710000000|55.308050000000|VRAI |
|32530 |ST PAUL |-21.008640000000|55.271290000000|VRAI |
|19585 |BEAUNE |47.023000000000 |4.837550000000 |VRAI |
+------+-----------+----------------+---------------+------+
The first contains 19,000,000 rows and the second contains 650 rows.
I want to calculate the distance from each customer to each shop and store the result in a new column of the customers DataFrame.
For instance, [23, 47, 125, 8, ...] for the first customer, and so on.
Ideally, I would also like to keep the "erd_cd" of each shop.
So a tuple is perhaps a good solution; for instance, [31312:23, 27654:47, ...] would be great.
I already know the formula for computing the distance, so don't worry about that part.
My question is: "How can I simulate a cross join and apply a function?"
I thought about a cross join, but it would generate about 12,350,000,000 rows (19,000,000 × 650), which is perhaps a little too much.
Do you have any ideas?
Thank you very much.
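One way to avoid the cost of a naive cross join is to broadcast the small shops table, so the join is evaluated locally on each executor and the customer rows are never shuffled for the join itself. A sketch, assuming DataFrames named `customers` and `shops`, and a hypothetical `distanceUDF` standing in for your own distance formula:

```scala
import org.apache.spark.sql.functions._

// broadcast() hints Spark to replicate the 650-row shops table to every
// executor, so the cross join runs map-side with no shuffle of customers
val joined = customers.crossJoin(broadcast(shops))
  .withColumn("dist", distanceUDF(
    col("latitudeOK"), col("longitudeOK"),
    col("gps_wgs84_lat"), col("gps_wgs84_lon")))

// collapse back to one row per customer, keeping erd_cd -> distance pairs
// (map_from_entries requires Spark 2.4+)
val result = joined.groupBy("id")
  .agg(map_from_entries(collect_list(struct(col("erd_cd"), col("dist"))))
    .as("distances"))
```

The billions of intermediate rows are produced and aggregated partition by partition rather than shuffled for the join, although the final `groupBy` still involves a shuffle keyed on `id`.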
The shop data can be broadcast as a Map/Set/Seq and used while processing the customer data. This becomes a plain map operation, which parallelizes extremely well.
val shops = ??? // shop data in Map() or Seq() format, whatever suits your need
val shopsB = spark.sparkContext.broadcast(shops) // keep the Broadcast handle; read .value inside the closure

val customers = ??? // build the DataFrame or Dataset
customers.map { c =>
  val distance = aFunction(c, shopsB.value) // your distance formula
  (c.id, c.postal_code, /* ... other columns ..., */ distance)
}.toDF(/* column names */)
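A more complete sketch of this pattern. All names here (`Customer`, `haversineKm`, the sample shop tuple) are assumptions for illustration, and haversine is only a stand-in for whatever distance formula you already have:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

case class Customer(id: Long, postal_code: String, lat: Double, lon: Double)

// great-circle distance in km (haversine); replace with your own formula
def haversineKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double = {
  val r = 6371.0 // mean Earth radius in km
  val dLat = math.toRadians(lat2 - lat1)
  val dLon = math.toRadians(lon2 - lon1)
  val a = math.pow(math.sin(dLat / 2), 2) +
    math.cos(math.toRadians(lat1)) * math.cos(math.toRadians(lat2)) *
      math.pow(math.sin(dLon / 2), 2)
  2 * r * math.asin(math.sqrt(a))
}

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// (erd_cd, lat, lon) for all ~650 shops; small enough to broadcast
val shops: Seq[(Int, Double, Double)] =
  Seq((31312, -12.78055, 45.22777) /* ... remaining shops ... */)
val shopsB = spark.sparkContext.broadcast(shops)

val customers: Dataset[Customer] = ??? // your 19M-row dataset

// one pass over customers; each row computes its 650 distances locally,
// no shuffle at all
val withDistances = customers.map { c =>
  val dists = shopsB.value.map { case (erdCd, lat, lon) =>
    erdCd -> haversineKm(c.lat, c.lon, lat, lon)
  }.toMap
  (c.id, c.postal_code, dists)
}.toDF("id", "postal_code", "distances")
```

Because 650 shops × 3 numbers is tiny, the broadcast costs almost nothing, and the result keeps exactly the `erd_cd -> distance` map you asked for.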