
Cannot Create a Dataproc cluster

I am trying to create a Dataproc cluster through Airflow and through the Google Cloud UI, but cluster creation always ends in failure. Below is the Airflow code I am using to create the cluster -

# STEP 1: Libraries needed
from datetime import timedelta, datetime
from airflow import models
from airflow.operators.bash_operator import BashOperator
from airflow.contrib.operators import dataproc_operator
from airflow.utils import trigger_rule
from poc.utils.transform import main
from airflow.contrib.hooks.gcp_dataproc_hook import DataProcHook
from airflow.operators.python_operator import BranchPythonOperator

import os

YESTERDAY = datetime.combine(
    datetime.today() - timedelta(1),
    datetime.min.time())
project_name = os.environ['GCP_PROJECT']

# Can pull in spark code from a gcs bucket
# SPARK_CODE = ('gs://us-central1-cl-composer-tes-fa29d311-bucket/spark_files/transformation.py')
dataproc_job_name = 'spark_job_dataproc'

default_dag_args = {
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'start_date': YESTERDAY,
    'retry_delay': timedelta(minutes=5),
    'project_id': project_name,
    'owner': 'DataProc',
}

with models.DAG(
        'dataproc-poc',
        description='Dag to run a simple dataproc job',
        schedule_interval=timedelta(days=1),
        default_args=default_dag_args) as dag:

    CLUSTER_NAME = 'dataproc-cluster'
    def ensure_cluster_exists(ds, **kwargs):
        cluster = DataProcHook().get_conn().projects().regions().clusters().get(
            projectId=project_name,
            region='us-east1',
            clusterName=CLUSTER_NAME
        ).execute(num_retries=5)
        print(cluster)
        if cluster is None or len(cluster) == 0 or 'clusterName' not in cluster:
            return 'create_dataproc'
        else:
            return 'run_spark'

    # start = BranchPythonOperator(
    #     task_id='start',
    #     provide_context=True,
    #     python_callable=ensure_cluster_exists,
    # )

    print_date = BashOperator(
        task_id='print_date',
        bash_command='date'
    )

    create_dataproc = dataproc_operator.DataprocClusterCreateOperator(
        task_id='create_dataproc',
        cluster_name=CLUSTER_NAME,
        num_workers=2,
        use_if_exists='true',
        zone='us-east1-b',
        master_machine_type='n1-standard-1',
        worker_machine_type='n1-standard-1')
    
    # Run the PySpark job
    run_spark = dataproc_operator.DataProcPySparkOperator(
        task_id='run_spark',
        main=main,
        cluster_name=CLUSTER_NAME,
        job_name=dataproc_job_name
    )
    # dataproc_operator
    # Delete Cloud Dataproc cluster.
    # delete_dataproc = dataproc_operator.DataprocClusterDeleteOperator(
    # task_id='delete_dataproc',
    # cluster_name='dataproc-cluster-demo-{{ ds_nodash }}',
    # trigger_rule=trigger_rule.TriggerRule.ALL_DONE)
    # STEP 6: Set DAGs dependencies
    # Each task should run after have finished the task before.
    print_date >> create_dataproc >> run_spark
    # print_date >> start >> create_dataproc >> run_spark
    # start >> run_spark

I checked the cluster logs and saw the following errors -

  1. Cannot store master key 1
  2. Cannot store master key 2
  3. Initialization failed. Exiting 125 to prevent restart
  4. Cannot start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running.

Cannot start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running.

This error indicates that the worker nodes could not communicate with the master node. When the workers fail to report to the master within the allotted time, cluster creation fails.

Please check that you have set up the correct firewall rules to allow communication between the VMs.
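
If you want to verify this programmatically rather than in the Cloud Console, here is a minimal sketch using the Compute Engine v1 API through googleapiclient; PROJECT_ID and NETWORK are placeholder assumptions, and the cluster is assumed to run on that network. It only lists the firewall rules attached to the network so you can confirm there is an ingress rule allowing internal tcp/udp/icmp traffic between the cluster VMs:

# Sketch: list the firewall rules on the VPC network the Dataproc VMs use,
# so you can check for a rule that allows internal traffic between them.
# Uses Application Default Credentials; PROJECT_ID and NETWORK are placeholders.
import googleapiclient.discovery

PROJECT_ID = 'my-project'   # assumption: replace with your GCP project ID
NETWORK = 'default'         # assumption: the network the cluster is created on

compute = googleapiclient.discovery.build('compute', 'v1')
rules = compute.firewalls().list(project=PROJECT_ID).execute()

for rule in rules.get('items', []):
    # Keep only the rules that apply to the cluster's network.
    if rule['network'].endswith('/' + NETWORK):
        print(rule['name'],
              rule.get('sourceRanges') or rule.get('sourceTags'),
              rule.get('allowed'))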

You can refer to the following network configuration best practices: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/network#overview
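
If your project's default network does not have such a rule, one option is to create the cluster on a VPC/subnetwork that does; the contrib operator supports this directly. A minimal sketch, reusing the imports and CLUSTER_NAME from the DAG above; the region, subnetwork URI, and network tag are assumptions, not values from the question:

# Sketch: pin the cluster to an explicit region and subnetwork whose firewall
# rules allow the worker VMs to reach the master. All values are placeholders.
create_dataproc = dataproc_operator.DataprocClusterCreateOperator(
    task_id='create_dataproc',
    cluster_name=CLUSTER_NAME,
    num_workers=2,
    region='us-east1',          # assumption: region matching the zone below
    zone='us-east1-b',
    subnetwork_uri='projects/my-project/regions/us-east1/subnetworks/my-subnet',  # placeholder
    internal_ip_only=False,     # internal-only clusters need Private Google Access on the subnet
    tags=['dataproc-cluster'],  # network tag a firewall rule can target
    master_machine_type='n1-standard-1',
    worker_machine_type='n1-standard-1')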

