

Can I change GCP Dataproc cluster from Standard (1 master, N workers) to High Availability?

I have created a GCP Dataproc cluster with the Standard configuration (1 master, N workers). Now I want to upgrade it to High Availability (3 masters, N workers) - is that possible?

I tried the gcloud, gcloud alpha and gcloud beta commands. For example, the gcloud beta command is documented here: https://cloud.google.com/sdk/gcloud/reference/beta/dataproc/clusters/update

It has an option to scale the number of worker nodes, but it has no option to switch from Standard to High Availability mode. Am I correct?
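For reference, this is the kind of update I was able to run - a minimal sketch, assuming a cluster named my-cluster in us-central1 (both placeholders):

    # Resizing the worker pool of an existing cluster is supported.
    gcloud dataproc clusters update my-cluster \
        --region=us-central1 \
        --num-workers=5

I could not find any flag on clusters update that changes the number of masters.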

You can upgrade the master node by going to the VM instances section under the cluster, stopping the master VM, and editing its configuration to change the machine type.

The answer is no. Once an HA cluster is created, it can't be downgraded, and vice versa: a Standard cluster can't be upgraded to HA. You can add worker nodes, but the master configuration can't be altered.

You can always upgrade your master node's machine type and add more worker nodes. While that would improve your cluster's job performance, it has nothing to do with HA.
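If you need HA, the practical route is to create a new cluster with 3 masters and move your jobs over. A minimal sketch, assuming a new cluster name and region of your own (placeholders); the --num-masters flag on clusters create is what selects HA mode:

    # Create a new High Availability cluster (3 masters) from scratch.
    gcloud dataproc clusters create my-ha-cluster \
        --region=us-central1 \
        --num-masters=3 \
        --num-workers=2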

Yes, you can always do that. To change the machine type of the master node, you first need to stop the master VM instance; then you can change the machine type. Even the worker nodes' machine types can be changed - all you need to do is stop the machine and edit the machine configuration.
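A minimal sketch of that flow with the gcloud CLI, assuming the master VM is named my-cluster-m, sits in zone us-central1-a, and should move to n1-standard-8 (all placeholders):

    # Stop the master VM before changing its machine type.
    gcloud compute instances stop my-cluster-m --zone=us-central1-a

    # Change the machine type while the instance is stopped.
    gcloud compute instances set-machine-type my-cluster-m \
        --zone=us-central1-a \
        --machine-type=n1-standard-8

    # Start the instance again.
    gcloud compute instances start my-cluster-m --zone=us-central1-a

Note that this only changes the machine size; it does not convert the cluster to High Availability.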

