
How to use Scala UDF in PySpark?

I want to be able to use a Scala function as a UDF in PySpark:

package com.test

import org.apache.spark.sql.functions.udf

object ScalaPySparkUDFs extends Serializable {
    def testFunction1(x: Int): Int = { x * 2 }
    def testUDFFunction1 = udf { x: Int => testFunction1(x) }
}

I can access testFunction1 in PySpark and have it return values:

functions = sc._jvm.com.test.ScalaPySparkUDFs 
functions.testFunction1(10)

What I want to be able to do is use this function as a UDF, ideally in a withColumn call:

from pyspark.sql import Row

row = Row("Value")
numbers = sc.parallelize([1,2,3,4]).map(row).toDF()
numbers.withColumn("Result", testUDFFunction1(numbers['Value']))

I think a promising approach is the one found here: Spark: How to map Python with Scala or Java User Defined Functions?

However, when I change the code found there to use testUDFFunction1 instead:

from pyspark import SparkContext
from pyspark.sql.column import Column, _to_java_column, _to_seq

def udf_test(col):
    sc = SparkContext._active_spark_context
    _f = sc._jvm.com.test.ScalaPySparkUDFs.testUDFFunction1.apply
    return Column(_f(_to_seq(sc, [col], _to_java_column)))

I get:

 AttributeError: 'JavaMember' object has no attribute 'apply' 

I don't understand this, because I believe testUDFFunction1 does have an apply method.

I do not want to use expressions of the type found here: Register UDF to SqlContext from Scala to use in PySpark

Any suggestions as to how to make this work would be appreciated!

The question you've linked uses a Scala object. A Scala object is a singleton, so you can use its apply method directly.

Here you use a nullary function which returns an object of the UserDefinedFunction class, so you have to call the function first:

_f = sc._jvm.com.test.ScalaPySparkUDFs.testUDFFunction1() # Note () at the end
Column(_f.apply(_to_seq(sc, [col], _to_java_column)))
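
Putting it together, a minimal sketch of the full corrected wrapper (keep in mind that _to_seq and _to_java_column are PySpark internals rather than public API, so they can change between versions):

from pyspark import SparkContext
from pyspark.sql.column import Column, _to_java_column, _to_seq

def udf_test(col):
    sc = SparkContext._active_spark_context
    # Call testUDFFunction1() first to get the UserDefinedFunction object,
    # then call apply on it with the wrapped Java column.
    _f = sc._jvm.com.test.ScalaPySparkUDFs.testUDFFunction1()
    return Column(_f.apply(_to_seq(sc, [col], _to_java_column)))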

I agree with @user6910411: you have to call the apply method on the UserDefinedFunction that the function returns. So your code will be:

UDF in Scala:

package com.test

import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions._

object ScalaPySparkUDFs {

    def testFunction1(x: Int): Int = { x * 2 }

    def getFun(): UserDefinedFunction = udf(testFunction1 _)
}
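
Note that the compiled Scala object has to be on the JVM classpath before sc._jvm.com.test.ScalaPySparkUDFs can resolve. One way to do that (the JAR path below is a placeholder for wherever you build the Scala code to) is to pass the JAR when creating the session:

from pyspark.sql import SparkSession

# "/path/to/scala-udfs.jar" is a placeholder for the JAR built from the Scala object above
spark = SparkSession.builder \
    .config("spark.jars", "/path/to/scala-udfs.jar") \
    .getOrCreate()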

PySpark code:

from pyspark.sql import Row
from pyspark.sql.column import Column, _to_java_column, _to_seq

def test_udf(col):
    sc = spark.sparkContext
    # getFun() returns the UserDefinedFunction object; call apply on it
    _test_udf = sc._jvm.com.test.ScalaPySparkUDFs.getFun()
    return Column(_test_udf.apply(_to_seq(sc, [col], _to_java_column)))


row = Row("Value")
numbers = sc.parallelize([1,2,3,4]).map(row).toDF()
numbers.withColumn("Result", test_udf(numbers['Value']))
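
Since testFunction1 simply doubles its input, the sample data above should produce something like:

numbers.withColumn("Result", test_udf(numbers['Value'])).show()
# +-----+------+
# |Value|Result|
# +-----+------+
# |    1|     2|
# |    2|     4|
# |    3|     6|
# |    4|     8|
# +-----+------+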
