Calling Databricks Python notebook in Azure function
I have a Python Databricks notebook (PySpark) which does an aggregation based on the inputs provided to the notebook via parameters. Is it possible to call this notebook from an Azure Function?
Thank you.
Yes, it's possible to do that by using the Databricks Jobs REST API. There are two ways of starting a job with a notebook:

1. Create a job in Databricks that points to the notebook, and trigger it from the Azure Function via the Run Now endpoint (`POST /api/2.1/jobs/run-now`), passing the notebook parameters in the request body (see the sketch after this list).
2. Submit a one-time run via the Run Submit endpoint (`POST /api/2.1/jobs/runs/submit`), where the full job specification (notebook path, cluster configuration, parameters) is provided in the request itself.
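For the 1st variant, a minimal sketch of what the call from the Azure Function could look like is below. The environment variable names and the use of the `requests` library are assumptions; the job itself is created beforehand in Databricks and its ID is passed in via configuration:

```python
import os
import requests

# Assumed configuration (adjust to your setup):
# DATABRICKS_HOST   e.g. "https://adb-1234567890123456.7.azuredatabricks.net"
# DATABRICKS_TOKEN  a personal access token (or AAD token) for the workspace
# DATABRICKS_JOB_ID the ID of the job created in Databricks that runs the notebook
DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]
DATABRICKS_JOB_ID = int(os.environ["DATABRICKS_JOB_ID"])
HEADERS = {"Authorization": f"Bearer {DATABRICKS_TOKEN}"}


def trigger_notebook_job(notebook_params: dict) -> int:
    """Start a run of the pre-defined job (1st variant) and return its run_id."""
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        headers=HEADERS,
        json={"job_id": DATABRICKS_JOB_ID, "notebook_params": notebook_params},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]


# Example usage from the Azure Function body (parameter names are hypothetical):
# run_id = trigger_notebook_job({"date": "2021-09-01", "group_by": "region"})
```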
I personally would prefer the 1st variant, as it hides things like cluster configuration from the Azure Function: the job specification is maintained on the Databricks side.
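For comparison, with the 2nd variant the caller carries the whole job specification. A sketch, reusing the host/token variables from above; the notebook path, Spark version, and node type are placeholders, not values from the original question:

```python
def submit_one_time_run(notebook_params: dict) -> int:
    """Submit a one-time run with the full job spec in the request (2nd variant)."""
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/runs/submit",
        headers=HEADERS,
        json={
            "run_name": "aggregation-from-azure-function",
            "tasks": [
                {
                    "task_key": "aggregate",
                    "notebook_task": {
                        "notebook_path": "/Shared/aggregation_notebook",  # placeholder path
                        "base_parameters": notebook_params,
                    },
                    # With runs/submit the cluster configuration lives in the caller:
                    "new_cluster": {
                        "spark_version": "10.4.x-scala2.12",
                        "node_type_id": "Standard_DS3_v2",
                        "num_workers": 2,
                    },
                }
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]
```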
In both cases, the result of the REST API call is the job run ID, which can then be used to check the status of the job run (`GET /api/2.1/jobs/runs/get`) and to retrieve the output of the job (`GET /api/2.1/jobs/runs/get-output`).
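A sketch of polling the run status and fetching the notebook output, assuming the same host/token variables as above. Note that polling a long-running job inside a plain Azure Function may run into the function timeout, so this is only an illustration of the API calls:

```python
import time


def wait_for_run_and_get_output(run_id: int, poll_seconds: int = 15) -> str:
    """Poll the run until it finishes, then return the notebook's exit value."""
    while True:
        resp = requests.get(
            f"{DATABRICKS_HOST}/api/2.1/jobs/runs/get",
            headers=HEADERS,
            params={"run_id": run_id},
            timeout=30,
        )
        resp.raise_for_status()
        state = resp.json()["state"]
        # life_cycle_state ends up as TERMINATED, SKIPPED, or INTERNAL_ERROR when the run is done
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            break
        time.sleep(poll_seconds)

    out = requests.get(
        f"{DATABRICKS_HOST}/api/2.1/jobs/runs/get-output",
        headers=HEADERS,
        params={"run_id": run_id},
        timeout=30,
    )
    out.raise_for_status()
    # notebook_output.result holds whatever the notebook passed to dbutils.notebook.exit(...)
    return out.json().get("notebook_output", {}).get("result", "")
```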