
Spark how to use a UDF with a Join

I'd like to use a specific UDF with a join in Spark.

Here's the plan:

I have a table A (10 million rows) and a table B (15 million rows).

I'd like to use a UDF that compares one element of table A with one of table B. Is this possible?

Here's a sample of my code. At some point I also need to require that the result of my UDF compare is greater than 0.9:

DataFrame dfr = df
        .select("name", "firstname", "adress1", "city1", "compare(adress1, adress2)")
        .join(dfa, df.col("adress1").equalTo(dfa.col("adress2"))
                .and(df.col("city1").equalTo(dfa.col("city2")))
                ...);

Is it possible?

Yes, you can. However, it will be much slower than a join on ordinary column expressions, because Spark cannot push a UDF-based predicate down: with no equality condition to hash or sort on, the join degenerates into a Cartesian product that evaluates the UDF for every pair of rows.

Example:

val similarity = udf((x: String, y: String) => { /* compute similarity here */ 0.0 })
val df3 = df1.join(df2, similarity(df1("field1"), df2("field1")) > 0.9)

For example:

import spark.implicits._
import org.apache.spark.sql.functions.udf

val df1 = Seq(1, 2, 3, 4).toDF("x")
val df2 = Seq(1, 3, 7, 11).toDF("q")
val distance = udf((x: Int, q: Int) => Math.abs(x - q))
val df3 = df1.join(df2, distance(df1("x"), df2("q")) > 1)

You can also return Boolean directly from the user-defined function, so the join condition needs no extra comparison.
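The original `compare` UDF isn't shown, so as an illustration here is one plausible similarity metric (a normalized Levenshtein ratio) that could back a Boolean-returning UDF. The names `similarity`, `isSimilar`, `dfA`, and `dfB` are assumptions, not from the original post:

```scala
// Classic dynamic-programming edit distance between two strings.
def levenshtein(a: String, b: String): Int = {
  // dist(i)(j) = edit distance between a.take(i) and b.take(j);
  // first row and column are the distances to the empty string.
  val dist = Array.tabulate(a.length + 1, b.length + 1) { (i, j) =>
    if (i == 0) j else if (j == 0) i else 0
  }
  for (i <- 1 to a.length; j <- 1 to b.length) {
    val cost = if (a(i - 1) == b(j - 1)) 0 else 1
    dist(i)(j) = math.min(
      math.min(dist(i - 1)(j) + 1, dist(i)(j - 1) + 1), // deletion / insertion
      dist(i - 1)(j - 1) + cost)                        // substitution
  }
  dist(a.length)(b.length)
}

// Hypothetical similarity score in [0, 1]: 1.0 means identical strings.
def similarity(a: String, b: String): Double = {
  val maxLen = math.max(a.length, b.length)
  if (maxLen == 0) 1.0 else 1.0 - levenshtein(a, b).toDouble / maxLen
}

// Wrapped as a Boolean UDF, the 0.9 threshold moves inside the function
// and the join condition needs no extra comparison (sketch, assuming
// DataFrames dfA and dfB with the question's column names):
// val isSimilar = udf((a: String, b: String) => similarity(a, b) > 0.9)
// val joined   = dfA.join(dfB, isSimilar(dfA("adress1"), dfB("adress2")))
```

In practice a metric tuned for address matching (Jaro-Winkler, for example) is often preferred over plain Levenshtein; whichever function is used, only its body changes, not the join pattern.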
