
How do you use PIP to install packages upon creation of a Dataproc Cluster on an Airflow DAG?

I created a DAG written like this:

from datetime import datetime as dt, timedelta, date

from airflow import models, DAG
from airflow.contrib.operators.dataproc_operator import DataprocClusterCreateOperator, DataProcPySparkOperator, \
    DataprocClusterDeleteOperator
from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator
from airflow.models import Variable
from airflow.utils.trigger_rule import TriggerRule

current = date.today()
yesterday = str(current - timedelta(days=1))

BUCKET = "gs://r_etl"

PYSPARK_JOB = BUCKET + "/spark_job/reddit-spark.py"

REDDIT_JOB = BUCKET + "/reddit_job/reddit_daily_load.py"

# The above two variables are examples of env variables that can be extracted by Variable

DEFAULT_DAG_ARGS = {
    "owner": "airflow",
    "depends_on_past": False,
    "start_date": dt(2020, 6, 19),
    "email_on_retry": False,
    "email_on_failure": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
    "project_id": "reddit-etl"
}

with DAG("reddit_submission_etl", default_args=DEFAULT_DAG_ARGS, catchup=False, schedule_interval="0 0 * * *") as dag:

    create_cluster = DataprocClusterCreateOperator(
        task_id="create_dataproc_cluster",
        cluster_name="ephemeral-spark-cluster-{{ds_nodash}}",
        master_machine_type="n1-standard-1",
        worker_machine_type="n1-standard-1",
        num_workers=2,
        region="us-east1",
        zone="us-east1-b",
        metadata='PIP_PACKAGES=pandas praw google-cloud-storage',
        init_actions_uris='gs://goog-dataproc-initialization-actions-us-east1/python/pip-install.sh'
    )

    submit_reddit = DataProcPySparkOperator(
        task_id="run_reddit_etl",
        main=REDDIT_JOB,
        cluster_name="ephemeral-spark-cluster-{{ds_nodash}}",
        region="us-east1"
    )

    bq_load_submissions = GoogleCloudStorageToBigQueryOperator(
        task_id="bq_load_submissions",
        bucket="r_etl",
        source_objects=["submissions_store/" + yesterday + "*"],
        destination_project_dataset_table="reddit-etl.data_analysis.submissions",
        autodetect=True,
        source_format="NEWLINE_DELIMITED_JSON",
        create_disposition="CREATE_IF_NEEDED",
        skip_leading_rows=0,
        write_disposition="WRITE_APPEND",
        max_bad_records=0
    )

    submit_pyspark = DataProcPySparkOperator(
        task_id="run_pyspark_etl",
        main=PYSPARK_JOB,
        cluster_name="ephemeral-spark-cluster-{{ds_nodash}}",
        region="us-east1"
    )

    bq_load_analysis = GoogleCloudStorageToBigQueryOperator(
        task_id="bq_load_analysis",
        bucket="r_etl",
        source_objects=["spark_results/" + yesterday + "_calculations/part-*"],
        destination_project_dataset_table="reddit-etl.data_analysis.submission_analysis",
        autodetect=True,
        source_format="NEWLINE_DELIMITED_JSON",
        create_disposition="CREATE_IF_NEEDED",
        skip_leading_rows=0,
        write_disposition="WRITE_APPEND",
        max_bad_records=0
    )

    delete_cluster = DataprocClusterDeleteOperator(
        task_id="delete_dataproc_cluster",
        cluster_name="ephemeral-spark-cluster-{{ds_nodash}}",
        region="us-east1",
        trigger_rule=TriggerRule.ALL_DONE
    )

    create_cluster.dag = dag

    create_cluster.set_downstream(submit_reddit)

    submit_reddit.set_downstream(bq_load_submissions)

    bq_load_submissions.set_downstream(submit_pyspark)

    submit_pyspark.set_downstream(bq_load_analysis)

    bq_load_analysis.set_downstream(delete_cluster)

When I put this DAG into Airflow and try to run it, it returns this error in the logs:

[2020-06-19 08:51:40,397] {taskinstance.py:1059} ERROR - <HttpError 400 when requesting https://dataproc.googleapis.com/v1beta2/projects/reddit-etl/regions/us-east1/clusters?requestId=42674e7e-6b08-4829-9d7f-193a04e29888&alt=json returned "Invalid value at 'cluster.config.gce_cluster_config.metadata' (type.googleapis.com/google.cloud.dataproc.v1beta2.GceClusterConfig.MetadataEntry), "PIP_PACKAGES=pandas praw google-cloud-storage"">

So for my project, I need pandas, PRAW, and google-cloud-storage installed in order for the first task after cluster creation to run properly. I previously created a different cluster with the packages installed and ran a workflow template through it, and that actually worked:

REGION="us-east1"
gcloud dataproc clusters create spark-dwh \
  --scopes=default \
  --region "us-east1" --zone "us-east1-b" \
  --master-machine-type n1-standard-2 \
  --master-boot-disk-size 200 \
  --num-workers 2 \
  --worker-machine-type n1-standard-2 \
  --worker-boot-disk-size 200 \
  --metadata 'PIP_PACKAGES=pandas praw google-cloud-storage' \
  --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/python/pip-install.sh \
  --image-version 1.4

This is what I based the DAG's initialization actions and metadata on. Is it possible to install the Python libraries this way, or do I need to use an already existing cluster?

With DataprocClusterCreateOperator, the metadata parameter takes a dict rather than a str, so you should change this line:

metadata='PIP_PACKAGES=pandas praw google-cloud-storage',

to:

metadata={'PIP_PACKAGES': 'pandas praw google-cloud-storage'},
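As a sketch of the mapping (the helper below is hypothetical, not part of Airflow), the gcloud-style 'KEY=value' string corresponds to one key/value pair in the dict the operator expects; note that init_actions_uris likewise takes a list of URIs rather than a bare string:

```python
def metadata_str_to_dict(s):
    """Convert a gcloud-style 'KEY=value' metadata string into a dict."""
    key, _, value = s.partition("=")
    return {key: value}

# Corrected keyword arguments for the cluster-creation task
cluster_kwargs = {
    # metadata must be a dict, not a str
    "metadata": metadata_str_to_dict("PIP_PACKAGES=pandas praw google-cloud-storage"),
    # init_actions_uris expects a list of URIs, not a bare string
    "init_actions_uris": [
        "gs://goog-dataproc-initialization-actions-us-east1/python/pip-install.sh"
    ],
}
print(cluster_kwargs["metadata"])  # {'PIP_PACKAGES': 'pandas praw google-cloud-storage'}
```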



 