How to run multiple WriteToBigQuery transforms in parallel in Google Cloud Dataflow / Apache Beam?
I want to separate the events in the given data by their type:

```json
{"type": "A", "k1": "v1"}
{"type": "B", "k2": "v2"}
{"type": "C", "k3": "v3"}
```

and route type: A events to table A, type: B events to table B, and type: C events to table C in BigQuery.
Here is the code I implemented with the Apache Beam Python SDK to split the events and write the data to BigQuery:
```python
import argparse
import json

import apache_beam as beam
from apache_beam import pvalue
from apache_beam.io import ReadFromText
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

A_schema = 'type:string, k1:string'
B_schema = 'type:string, k2:string'
C_schema = 'type:string, k2:string'


class ParseJsonDoFn(beam.DoFn):
    A_TYPE = 'tag_A'
    B_TYPE = 'tag_B'
    C_TYPE = 'tag_C'

    def process(self, element):
        text_line = element.strip()
        data = json.loads(text_line)
        if data['type'] == 'A':
            yield pvalue.TaggedOutput(self.A_TYPE, data)
        elif data['type'] == 'B':
            yield pvalue.TaggedOutput(self.B_TYPE, data)
        elif data['type'] == 'C':
            yield pvalue.TaggedOutput(self.C_TYPE, data)


def run(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--input',
                        dest='input',
                        default='data/path/data',
                        help='Input file to process.')
    known_args, pipeline_args = parser.parse_known_args(argv)
    pipeline_args.extend([
        '--runner=DirectRunner',
        '--project=project-id',
        '--job_name=separate-bi-events-job',
    ])
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = True
    with beam.Pipeline(options=pipeline_options) as p:
        lines = p | ReadFromText(known_args.input)

        # Tag each parsed event so it can be routed to a separate output.
        multiple_lines = (
            lines
            | 'ParseJSON' >> (beam.ParDo(ParseJsonDoFn()).with_outputs(
                ParseJsonDoFn.A_TYPE,
                ParseJsonDoFn.B_TYPE,
                ParseJsonDoFn.C_TYPE)))

        a_line = multiple_lines.tag_A
        b_line = multiple_lines.tag_B
        c_line = multiple_lines.tag_C

        # Write each tagged collection to its own BigQuery table.
        (a_line
         | "output_a" >> beam.io.WriteToBigQuery(
             'temp.a',
             schema=A_schema,
             write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
             create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))

        (b_line
         | "output_b" >> beam.io.WriteToBigQuery(
             'temp.b',
             schema=B_schema,
             write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
             create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))

        (c_line
         | "output_c" >> beam.io.WriteToBigQuery(
             'temp.c',
             schema=C_schema,
             write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
             create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))

        p.run().wait_until_finish()
```
Output:
```
INFO:root:start <DoOperation output_banner/WriteToBigQuery output_tags=['out']>
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
WARNING:root:Sleeping for 150 seconds before the write as BigQuery inserts can be routed to deleted table for 2 mins after the delete and create.
INFO:root:start <DoOperation output_banner/WriteToBigQuery output_tags=['out']>
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
WARNING:root:Sleeping for 150 seconds before the write as BigQuery inserts can be routed to deleted table for 2 mins after the delete and create.
INFO:root:start <DoOperation output_banner/WriteToBigQuery output_tags=['out']>
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
WARNING:root:Sleeping for 150 seconds before the write as BigQuery inserts can be routed to deleted table for 2 mins after the delete and create.
```
However, there are two problems here:

1. There is no data in BigQuery. Is there anything wrong with my code, or am I missing something?
2. From the logs, it seems the code does not run in parallel but rather runs three times sequentially?
**No data in BigQuery?**
Your code looks fine for writing the data to BigQuery (although C_schema should be k3 instead of k2). Keep in mind that you are streaming the data, so you won't see it if you click the Preview table button before the data in the streaming buffer is committed. Running a SELECT * query will display the expected results.
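For example, a minimal sketch using the google-cloud-bigquery client (the project id and table name are placeholders taken from the pipeline above): rows in the streaming buffer are returned by queries even before the Preview tab shows them.

```python
# Minimal sketch, assuming google-cloud-bigquery is installed and the
# pipeline above has already created and populated temp.a.
from google.cloud import bigquery

client = bigquery.Client(project='project-id')  # placeholder project id
for row in client.query('SELECT * FROM temp.a').result():
    print(dict(row))  # e.g. {'type': 'A', 'k1': 'v1'}
```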
**From the logs, it seems the code is not running in parallel but rather running three times sequentially?**
OK, this is interesting. Tracing the WARNING message in the Beam source code, we can read the following:
```python
# if write_disposition == BigQueryDisposition.WRITE_TRUNCATE we delete
# the table before this point.
if write_disposition == BigQueryDisposition.WRITE_TRUNCATE:
  # BigQuery can route data to the old table for 2 mins max so wait
  # that much time before creating the table and writing it
  logging.warning('Sleeping for 150 seconds before the write as ' +
                  'BigQuery inserts can be routed to deleted table ' +
                  'for 2 mins after the delete and create.')
  # TODO(BEAM-2673): Remove this sleep by migrating to load api
  time.sleep(150)
  return created_table
else:
  return created_table
```
After reading BEAM-2673 and BEAM-2801, this appears to be an issue with the BigQuery sink using the Streaming API together with the DirectRunner: when the tables are recreated, the process sleeps for 150 s per write, and the writes are not executed in parallel.
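As the TODO(BEAM-2673) comment in the snippet above suggests, later Beam SDK releases added a load-jobs write path. A hedged sketch, assuming a newer Beam 2.x SDK where WriteToBigQuery accepts a method argument (verify against your SDK version); it reuses a_line and A_schema from the question's pipeline:

```python
# Sketch only: `method` is available in newer Beam SDKs, not necessarily
# the version used in the question. FILE_LOADS writes via batch load
# jobs instead of streaming inserts, so the 150 s sleep does not apply.
(a_line
 | "output_a" >> beam.io.WriteToBigQuery(
     'temp.a',
     schema=A_schema,
     method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
     write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
     create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))
```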
If, instead, we run it on Dataflow (using the DataflowRunner, providing staging and temp bucket paths, and loading the input data from GCS), it runs the three import jobs in parallel. As seen in the image below, all three start at 22:19:45 and end at 22:19:56.
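For reference, a hedged sketch of the option changes that would run the same pipeline on Dataflow; the bucket and project names are placeholders:

```python
# Placeholder bucket/project names; swap in your own GCS paths.
pipeline_args.extend([
    '--runner=DataflowRunner',
    '--project=project-id',
    '--staging_location=gs://your-bucket/staging',
    '--temp_location=gs://your-bucket/temp',
    '--job_name=separate-bi-events-job',
])
# The --input path would also need to point at GCS, e.g.
#   --input gs://your-bucket/data/events.json
```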