Can I change GCP Dataproc cluster from Standard (1 master, N workers) to High Availability?
I have created a GCP Dataproc cluster in Standard mode (1 master, N workers). Now I want to upgrade it to High Availability mode (3 masters, N workers). Is that possible?
I tried the gcloud, gcloud alpha, and gcloud beta commands, for example the gcloud beta command documented here: https://cloud.google.com/sdk/gcloud/reference/beta/dataproc/clusters/update. It has an option to scale the number of worker nodes, but no option to switch from Standard to High Availability mode. Am I correct?
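For reference, the update command exposes worker scaling only. A minimal sketch of what it does allow (cluster name and region are illustrative placeholders):

```shell
# Scale the number of primary workers on an existing cluster
# (cluster name and region are placeholders).
gcloud dataproc clusters update my-cluster \
    --region=us-central1 \
    --num-workers=5
```

As noted in the question, there is no corresponding flag on `clusters update` to change the number of masters.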
You can upgrade the master node by going to the VM instances section under the cluster, stopping the master VM, and editing its configuration.
The answer is no. Once an HA cluster is created, it can't be downgraded to Standard, and vice versa. You can add worker nodes, but the master configuration can't be altered.
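Since the mode can't be changed in place, the usual route is to create a new cluster with HA enabled and move your jobs to it. A sketch, with hypothetical cluster name, region, and worker count; `--num-masters=3` is what selects High Availability mode at creation time:

```shell
# Create a new Dataproc cluster in High Availability mode (3 masters).
# Cluster name, region, and worker count are placeholders.
gcloud dataproc clusters create my-cluster-ha \
    --region=us-central1 \
    --num-masters=3 \
    --num-workers=4
```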
You can always upgrade your master node's machine type and add more worker nodes. That would improve your cluster's job performance, but it has nothing to do with HA.
Yes, you can always do that. To change the master node's machine type, you first need to stop the master VM instance, and then you can change the machine type. Even the worker nodes' machine types can be changed; all you need to do is stop the machine and edit its machine configuration.
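The steps above can be sketched with the Compute Engine CLI (instance name, zone, and machine type below are placeholders; note that editing Dataproc-managed VMs this way is outside what the Dataproc API manages, so proceed with care):

```shell
# Stop the master VM, change its machine type, then start it again.
# "my-cluster-m" is the default naming pattern for a Dataproc master,
# used here only for illustration.
gcloud compute instances stop my-cluster-m --zone=us-central1-a
gcloud compute instances set-machine-type my-cluster-m \
    --zone=us-central1-a \
    --machine-type=n1-highmem-8
gcloud compute instances start my-cluster-m --zone=us-central1-a
```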