How to unregister Spark UDF
I use Spark 1.6.0 with Java.
I'd like to unregister a Spark UDF. Is there a way to do this, similar to dropping a temporary table with sqlContext.drop(TemporaryTableName)?
sqlContext.udf().register("isNumeric", value -> {
    if (StringUtils.isNumeric((String) value)) {
        return 1;
    } else {
        return 0;
    }
}, DataTypes.IntegerType);
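As an aside, the check inside the lambda relies on Commons Lang's StringUtils.isNumeric. A rough plain-Java equivalent of that check (a hypothetical stand-in, shown here without any Spark or Commons Lang dependency) looks like:

```java
public class IsNumericSketch {
    // Hypothetical stand-in for StringUtils.isNumeric: returns 1 only for a
    // non-empty string consisting entirely of digit characters, else 0,
    // mirroring the int result the UDF above produces.
    static int isNumeric(String value) {
        if (value == null || value.isEmpty()) {
            return 0;
        }
        for (char c : value.toCharArray()) {
            if (!Character.isDigit(c)) {
                return 0;
            }
        }
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(isNumeric("12345")); // prints 1
        System.out.println(isNumeric("12a45")); // prints 0
    }
}
```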
sqlContext.functionRegistry().listFunction().toSet().toString()
I tried to get all functions (including the UDFs we defined) from the current sqlContext, and that works, but is there any way to unregister the custom UDF 'isNumeric'?
The UDF can be unregistered by executing the SQL below.
spark.sql("drop temporary function isNumeric")
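Since the question uses the Java API, the same statement can be issued through sqlContext.sql(...). A minimal sketch (the dropUdfSql helper is hypothetical, and the actual Spark call is commented out so the snippet stands alone without Spark on the classpath):

```java
public class DropUdfSketch {
    // Hypothetical helper: builds the DROP TEMPORARY FUNCTION statement
    // for a named UDF.
    static String dropUdfSql(String functionName) {
        return "DROP TEMPORARY FUNCTION " + functionName;
    }

    public static void main(String[] args) {
        // With a live SQLContext this would unregister the UDF:
        // sqlContext.sql(dropUdfSql("isNumeric"));
        System.out.println(dropUdfSql("isNumeric"));
    }
}
```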
The snippet below shows creating a UDF and then dropping it.
scala> spark.udf.register("test", (value: String) => value.toInt)
res16: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,IntegerType,Some(List(StringType)))
scala> spark.catalog.listFunctions.filter(_.name == "test").collect
res17: Array[org.apache.spark.sql.catalog.Function] = Array(Function[name='test', className='null', isTemporary='true'])
scala> spark.sql("drop temporary function test")
res18: org.apache.spark.sql.DataFrame = []
scala> spark.catalog.listFunctions.filter(_.name == "test").collect
res19: Array[org.apache.spark.sql.catalog.Function] = Array()
Spark 1.6:
scala> sqlContext.sql("drop temporary function test")
{"level": "INFO ", "timestamp": "2017-06-09 05:43:44,650", "classname": "hive.ql.parse.ParseDriver", "body": "Parsing command: drop temporary function test"}
{"level": "INFO ", "timestamp": "2017-06-09 05:43:44,650", "classname": "hive.ql.parse.ParseDriver", "body": "Parse Completed"}
{"level": "INFO ", "timestamp": "2017-06-09 05:43:44,655", "classname": "hive.ql.parse.ParseDriver", "body": "Parsing command: drop temporary function test"}
{"level": "INFO ", "timestamp": "2017-06-09 05:43:44,656", "classname": "hive.ql.parse.ParseDriver", "body": "Parse Completed"}
res7: org.apache.spark.sql.DataFrame = [result: string]