Customizing nodes of an Azure Synapse Workspace Spark Cluster
When creating a Spark cluster within an Azure Synapse workspace, is there a means to install arbitrary files and directories onto its cluster nodes and/or onto the nodes' underlying distributed filesystem?
By arbitrary files and directories, I literally mean arbitrary files and directories; not just extra Python libraries like those demonstrated here.
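For contrast, the Python-library mechanism that Synapse does document is pool-level package management: you upload a pip `requirements.txt` to the Spark pool's packages settings, and the listed libraries are installed on the pool's nodes. A minimal sketch (package names and versions are illustrative):

```text
# requirements.txt uploaded via the Spark pool's "Packages" settings
# (pip-installable Python libraries only -- not arbitrary files)
xgboost==1.4.2
scikit-learn==0.24.2
```

This covers Python dependencies, but it is not a general mechanism for placing arbitrary files or directories on the nodes.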
Databricks smartly provides a means to do this on its cluster nodes (described in this document). Now I'm trying to see if there's a means to do the same on an Azure Synapse Workspace Spark cluster.
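For reference, the Databricks mechanism referred to above is a cluster-scoped init script: a shell script that runs on every node at cluster start and can place arbitrary files on the node. Below is a minimal sketch of that pattern; the paths are hypothetical, and temp directories stand in for DBFS and the node's local disk so the sketch runs anywhere:

```shell
#!/bin/bash
set -euo pipefail

# Sketch of a Databricks-style cluster-scoped init script.
# On Databricks this would run on every node at cluster start.
DBFS_STAGE="$(mktemp -d)"   # stands in for e.g. /dbfs/init/my-app
NODE_DIR="$(mktemp -d)"     # stands in for e.g. /opt/my-app on the node

# Pretend an arbitrary file was uploaded to DBFS ahead of time.
printf 'setting=value\n' > "$DBFS_STAGE/app.conf"

# The init script's actual work: copy arbitrary files onto the node's disk.
mkdir -p "$NODE_DIR"
cp "$DBFS_STAGE/app.conf" "$NODE_DIR/app.conf"

echo "installed $NODE_DIR/app.conf"
```

It is this kind of per-node hook, rather than the script itself, that has no equivalent in Synapse Spark pools.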
Thank you.
Unfortunately, Azure Synapse Analytics doesn't support arbitrary binary installs or writing to Spark local storage.
I would suggest you provide feedback on this:

https://feedback.azure.com/forums/307516-azure-synapse-analytics

All of the feedback you share in these forums is monitored and reviewed by the Microsoft engineering teams responsible for building Azure.