[英]How to read BigQuery table using python pipeline code in GCP Dataflow
Can someone share the syntax for reading from and writing to a BigQuery table in a pipeline written in Python for GCP Dataflow?
Run on Dataflow
First, construct a Pipeline with the following options so that it runs on GCP Dataflow:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = {'project': <project>,
           'runner': 'DataflowRunner',
           'region': <region>,
           'setup_file': <setup.py file>}
pipeline_options = PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)
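The options dict above corresponds one-to-one to the command-line flags that PipelineOptions also accepts. A minimal sketch of that mapping, using a hypothetical helper and assumed placeholder values (not from the original post):

```python
# Hypothetical helper: turn an options dict into the equivalent
# '--key=value' command-line flags accepted by PipelineOptions(flags=...).
def options_to_flags(options):
    return ['--{}={}'.format(key, value) for key, value in sorted(options.items())]

# Assumed placeholder values for illustration only.
options = {'project': 'my-project',
           'runner': 'DataflowRunner',
           'region': 'us-central1',
           'setup_file': './setup.py'}

flags = options_to_flags(options)
```

Passing these flags on the command line instead of hard-coding the dict keeps the same pipeline code portable between runners.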
Read from BigQuery
Define a BigQuerySource with your query and use beam.io.Read to read data from BQ:
BQ_source = beam.io.BigQuerySource(query=<query>)
BQ_data = pipeline | beam.io.Read(BQ_source)
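Each element that beam.io.Read emits from BigQuery is a Python dict keyed by column name. A minimal sketch of a function you could then apply with beam.Map (the column names here are assumptions for illustration, not from the original post):

```python
# Each row read from BigQuery arrives as a dict keyed by column name.
# Hypothetical column 'name'; you would apply this with beam.Map(extract_name).
def extract_name(row):
    return row['name'].upper()

# Shape of one BigQuery result row (assumed example data).
sample_row = {'name': 'alice', 'age': 30}
result = extract_name(sample_row)  # 'ALICE'
```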
Write to BigQuery
There are two options for writing to BigQuery:
use a BigQuerySink and beam.io.Write:
BQ_sink = beam.io.BigQuerySink(<table>, dataset=<dataset>, project=<project>)
BQ_data | beam.io.Write(BQ_sink)
use beam.io.WriteToBigQuery:
BQ_data | beam.io.WriteToBigQuery(<table>, dataset=<dataset>, project=<project>)
Reading from BigQuery
rows = p | 'ReadFromBQ' >> beam.io.Read(
    beam.io.BigQuerySource(query=QUERY, use_standard_sql=True))
Writing to BigQuery
rows | 'writeToBQ' >> beam.io.Write(
    beam.io.BigQuerySink(
        '{}:{}.{}'.format(PROJECT, BQ_DATASET_ID, BQ_TEST),
        schema='CONVERSATION:STRING, LEAD_ID:INTEGER',
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
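For either sink, the table reference is a string of the form 'project:dataset.table', and each element of rows must be a dict whose keys match the comma-separated 'name:TYPE' pairs in the schema string. A minimal sketch with assumed placeholder values (the real PROJECT, BQ_DATASET_ID, and BQ_TEST come from your own configuration):

```python
# Assumed placeholder values for illustration only.
PROJECT, BQ_DATASET_ID, BQ_TEST = 'my-project', 'my_dataset', 'my_table'

# Fully qualified table spec in the form 'project:dataset.table'.
table_spec = '{}:{}.{}'.format(PROJECT, BQ_DATASET_ID, BQ_TEST)

# Field names recovered from the comma-separated 'name:TYPE' schema string.
schema = 'CONVERSATION:STRING, LEAD_ID:INTEGER'
field_names = [field.split(':')[0].strip() for field in schema.split(',')]

# One element of `rows`: its keys must match the schema's field names.
row = {'CONVERSATION': 'hello', 'LEAD_ID': 42}
assert set(row) == set(field_names)
```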