
Can I change GCP Dataproc cluster from Standard (1 master, N workers) to High Availability?

I have created a GCP Dataproc cluster with Standard (1 master, N workers). Now I want to upgrade it to High Availability (3 masters, N workers) - Is it possible?

I tried the gcloud, gcloud alpha and gcloud beta commands. For example, the gcloud beta command is documented here: https://cloud.google.com/sdk/gcloud/reference/beta/dataproc/clusters/update .

It has an option to scale worker nodes, but it does not have an option to switch from standard to high availability mode. Am I correct?
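For reference, this is roughly the worker-scaling call that the update command does support; the cluster name, region and worker count below are placeholder values:

    # scales the primary worker count; there is no flag to change the number of masters
    gcloud beta dataproc clusters update my-cluster \
        --region=us-central1 \
        --num-workers=5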

You can upgrade the master node by going to the VM instances section under the cluster, stopping the master VM and editing its configuration to …

The answer is no. Once an HA cluster is created, it can't be downgraded, and vice versa: a standard cluster can't be upgraded to HA. You can add worker nodes, but the number of master nodes can't be changed.
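Since an in-place switch isn't supported, the usual workaround is to create a new HA cluster and move the workload over. A minimal sketch, with the cluster name, region and node counts as placeholder values:

    # a new cluster must be created with 3 masters to get high availability
    gcloud dataproc clusters create my-ha-cluster \
        --region=us-central1 \
        --num-masters=3 \
        --num-workers=2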

You may always upgrade your master node's machine type and also add more worker nodes. While that would improve your cluster's job performance, it has nothing to do with HA.

Yes, you can always do that. To change the master node's machine type, you first need to stop the master VM instance; then you can change the machine type. Even the worker nodes' machine types can be changed. All you need to do is stop the machine and edit its machine configuration.
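A rough sketch of that procedure with gcloud, assuming a standard cluster whose master VM follows the usual CLUSTER_NAME-m naming; the instance name, zone and machine type below are placeholders:

    # stop the master VM before changing its machine type
    gcloud compute instances stop my-cluster-m --zone=us-central1-a
    # change the machine type while the instance is stopped
    gcloud compute instances set-machine-type my-cluster-m \
        --zone=us-central1-a --machine-type=n1-highmem-8
    # start the master VM again
    gcloud compute instances start my-cluster-m --zone=us-central1-a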
