
What is the best practice for loading data into a BigQuery table?

Currently I'm loading data from Google Storage into stage_table_orders using WRITE_APPEND. Since this loads both new and existing orders, the same order can appear in more than one version; the field etl_timestamp tells which row is the most recent one.

Then I WRITE_TRUNCATE my production_table_orders with a query like:

SELECT ...
FROM (
  SELECT *, ROW_NUMBER() OVER
    (PARTITION BY date_purchased, orderid ORDER BY etl_timestamp DESC) AS rn
  FROM `warehouse.stage_table_orders`
)
WHERE rn = 1

Then production_table_orders always contains the most recent version of each order.

This process is supposed to run every 3 minutes.

I'm wondering if this is the best practice. I have around 20M rows, and it doesn't seem smart to WRITE_TRUNCATE 20M rows every 3 minutes.

Any suggestions?

We are doing the same. To help improve performance, though, try to partition the table by date_purchased and cluster it by orderid. Use a CTAS statement (to the table itself), as you cannot add partitioning after the fact.
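A minimal sketch of such a CTAS, assuming date_purchased is a DATE column; since BigQuery does not let you replace a table with a different partitioning spec, this writes to a new table (the _partitioned name is made up for the example), which you would then swap in for the original:

```sql
-- One-time rewrite into a partitioned, clustered copy of the staging table.
CREATE TABLE `warehouse.stage_table_orders_partitioned`
PARTITION BY date_purchased
CLUSTER BY orderid AS
SELECT * FROM `warehouse.stage_table_orders`;
```

Clustering by orderid keeps rows for the same order co-located, which helps the dedup and MERGE queries below the partition level.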

EDIT: use 2 tables and MERGE

Depending on your particular use case, i.e. the number of fields that could change between the old and new version, you could use 2 tables, e.g. stage_table_orders for the imported records and final_table_orders as the destination table, and do a MERGE like so:

MERGE final_table_orders F
USING stage_table_orders S
ON F.orderid = S.orderid AND
   F.date_purchased = S.date_purchased
WHEN MATCHED THEN
  UPDATE SET field_that_change = S.field_that_change
WHEN NOT MATCHED THEN
  INSERT (field1, field2, ...) VALUES(S.field1, S.field2, ...)

Pro: efficient if only a few rows are "upserted" rather than millions (though not tested), and partition pruning should work.
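For pruning to actually kick in, BigQuery generally needs a constant filter on the partitioning column; a hedged variant of the MERGE above (the 3-day lookback window is an assumption, pick one that covers how late your updates can arrive):

```sql
MERGE final_table_orders F
USING stage_table_orders S
ON F.orderid = S.orderid
   AND F.date_purchased = S.date_purchased
   -- constant predicate on the partition column so only recent partitions are scanned
   AND F.date_purchased >= DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY)
WHEN MATCHED THEN
  UPDATE SET field_that_change = S.field_that_change
WHEN NOT MATCHED THEN
  INSERT (field1, field2, ...) VALUES(S.field1, S.field2, ...)
```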

Con: you have to explicitly list the fields in the UPDATE and INSERT clauses — a one-time effort if the schema is pretty much fixed.

There are many ways to de-duplicate, and there is no one-size-fits-all. Search SO for similar requests using ARRAY_AGG, or EXISTS with DELETE, or UNION ALL, ... Try them out and see which performs better for YOUR dataset.
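For instance, the ARRAY_AGG approach can be sketched like this (a common BigQuery dedup pattern, using the column names from the question but not tested against its schema):

```sql
-- Keep only the row with the latest etl_timestamp per (date_purchased, orderid).
SELECT event.*
FROM (
  SELECT ARRAY_AGG(t ORDER BY etl_timestamp DESC LIMIT 1)[OFFSET(0)] AS event
  FROM `warehouse.stage_table_orders` t
  GROUP BY date_purchased, orderid
)
```

Compared to the ROW_NUMBER() version, this avoids materializing a row number for every duplicate and can be cheaper when groups are small.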

