
Handling serverless and singleNode in the same Databricks Cluster Policy

Is it possible to enable process isolation in a Databricks cluster policy managed with Terraform only when serverless (high-concurrency) is selected?

If singleNode is selected, I don't want this spark_conf to be applied. As far as I can tell, this can't be done without either manual interaction (removing the line from the conf) or creating two different cluster policies: one for single-node clusters and the other for high-concurrency ones.
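For reference, a minimal sketch of what such a policy looks like in Terraform. The `spark.databricks.pyspark.enableProcessIsolation` conf key is the one the high-concurrency dropdown sets; the policy name and other values here are illustrative assumptions, not from the original question:

```hcl
# Hypothetical example: a cluster policy that pins process isolation on.
# Because the value is "fixed", every cluster created from this policy must
# carry it -- including single-node clusters, where it is unwanted. This is
# the conflict described above.
resource "databricks_cluster_policy" "high_concurrency" {
  name = "high-concurrency-policy" # assumed name

  definition = jsonencode({
    "spark_conf.spark.databricks.pyspark.enableProcessIsolation" = {
      type  = "fixed"
      value = "true"
    }
  })
}
```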

The main goal is to have a single cluster policy that works for both single-node and serverless clusters.

No, that's not possible: you need two different cluster policies. A cluster policy is validated against the settings you configure, and the cluster-type dropdown simply fills in the corresponding Spark conf settings; the policy itself has no conditional logic keyed on that choice.

If you want to share common pieces across cluster policies, follow the example in the documentation: define a default policy, then add per-type settings by merging overrides into it.
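The merge-with-overrides approach can be sketched in Terraform using `locals` and the built-in `merge()` function. Everything below (local names, policy names, node type) is an assumed illustration of the pattern, not the documentation's exact example:

```hcl
# Shared rules that every policy should enforce.
locals {
  base_policy = {
    "autotermination_minutes" = {
      type  = "fixed"
      value = 60
    }
    "node_type_id" = {
      type  = "allowlist"
      values = ["Standard_DS3_v2"] # assumed node type
    }
  }
}

# Single-node policy: base rules only, no process isolation.
resource "databricks_cluster_policy" "single_node" {
  name       = "single-node-policy" # assumed name
  definition = jsonencode(local.base_policy)
}

# High-concurrency policy: base rules merged with the isolation override.
resource "databricks_cluster_policy" "high_concurrency" {
  name = "high-concurrency-policy" # assumed name

  definition = jsonencode(merge(local.base_policy, {
    "spark_conf.spark.databricks.pyspark.enableProcessIsolation" = {
      type  = "fixed"
      value = "true"
    }
  }))
}
```

With `merge()`, later maps win on key collisions, so the override map can also tighten or replace any base rule if needed.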
