
Google BigQuery - Exceeded rate limits

When trying to insert data into Google BigQuery, I get the following error:

table.write: Exceeded rate limits: too many table update operations for this table. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors (error code: rateLimitExceeded)

According to the documentation, I may have exceeded one of the following limits.

How can I tell which of these limits my application has exceeded?

I have already tried other solutions found on the web, but none of them worked.

One thing you can check is your Quotas page (Navigation menu -> IAM & Admin -> Quotas); under Service you can select just the BigQuery API to see whether you are hitting any BQ API quota. If not, you are most likely hitting the daily destination table update limit: 1,000 updates per table per day.

You have reached the table update limit. This means you are submitting a large number of operations that modify table storage (inserts, updates, or deletes). Keep in mind that this also includes load jobs, DML statements, and queries that write to a destination table. Since the quota is replenished periodically, you will have to wait a few minutes before retrying, but keep an eye on your table update rate so that you don't hit this error again.
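Since the quota refills over time, a retry with backoff is the usual client-side mitigation. Below is a minimal sketch of that idea; the `runUpdate` callback, delay values, and error-matching by message are all illustrative assumptions, not code from the original answers:

```typescript
// Hypothetical sketch: retry a table-update operation with exponential
// backoff when BigQuery rejects it with rateLimitExceeded. The callback,
// retry count, and delays are illustrative assumptions.
async function withBackoff<T>(
  runUpdate: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await runUpdate();
    } catch (err: any) {
      // BigQuery reports this quota error with reason "rateLimitExceeded"
      const isRateLimit =
        typeof err?.message === 'string' &&
        err.message.includes('rateLimitExceeded');
      if (!isRateLimit || attempt >= maxRetries) throw err;
      // wait 1s, 2s, 4s, ... before the next attempt
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

The backoff only buys time for the quota to replenish; if you hit the daily 1,000-updates-per-table limit, retrying within the same day won't help and you need to consolidate operations as described below.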

If you are inserting rows through many separate operations rather than a few, consider using streaming inserts instead.
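The point of streaming inserts is to replace many per-row DML operations with a few bulk calls. Here is a small buffering sketch of that pattern; in practice `insertRows` would be something like `(rows) => table.insert(rows)` from the @google-cloud/bigquery client, but it is injected here so the logic stands alone. The class name and batch size are illustrative assumptions:

```typescript
// Sketch: buffer rows and flush them in batches through one insert call,
// instead of issuing one table-update operation per row.
type Row = Record<string, unknown>;

class StreamingBuffer {
  private pending: Row[] = [];
  public flushes = 0; // number of insert calls made so far

  constructor(
    private insertRows: (rows: Row[]) => Promise<void>,
    private batchSize = 500, // streaming inserts accept many rows per call
  ) {}

  async add(row: Row): Promise<void> {
    this.pending.push(row);
    if (this.pending.length >= this.batchSize) await this.flush();
  }

  async flush(): Promise<void> {
    if (this.pending.length === 0) return;
    const batch = this.pending;
    this.pending = [];
    await this.insertRows(batch); // one API call for the whole batch
    this.flushes++;
  }
}
```

With a batch size of 500, inserting 1,050 rows results in three insert calls rather than 1,050 separate update operations.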

Let me reproduce the error with a real case I got from a teammate:

# create the table
CREATE TABLE temp.bucket_locations
AS 
SELECT 'ASIA-EAST1' bucket_location
UNION ALL SELECT 'ASIA-NORTHEAST2' bucket_location;

# update several times
UPDATE temp.bucket_locations
 SET bucket_location = "US"
 WHERE UPPER(bucket_location) LIKE "US%";
UPDATE temp.bucket_locations
 SET bucket_location = "TW"
 WHERE UPPER(bucket_location) LIKE "ASIA-EAST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "JP"
 WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "HK"
 WHERE UPPER(bucket_location) LIKE "ASIA-EAST2%";
UPDATE temp.bucket_locations
 SET bucket_location = "JP"
 WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST2%";
UPDATE temp.bucket_locations
 SET bucket_location = "KR"
 WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST3%";
UPDATE temp.bucket_locations
 SET bucket_location = "IN"
 WHERE UPPER(bucket_location) LIKE "ASIA-SOUTH1%";
UPDATE temp.bucket_locations
 SET bucket_location = "SG"
 WHERE UPPER(bucket_location) LIKE "ASIA-SOUTHEAST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "AU"
 WHERE UPPER(bucket_location) LIKE "AUSTRALIA%";
UPDATE temp.bucket_locations
 SET bucket_location = "FI"
 WHERE UPPER(bucket_location) LIKE "EUROPE-NORTH1%";
UPDATE temp.bucket_locations
 SET bucket_location = "BE"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "GB"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST2%";
UPDATE temp.bucket_locations
 SET bucket_location = "DE"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST3%";
UPDATE temp.bucket_locations
 SET bucket_location = "NL"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST4%";
UPDATE temp.bucket_locations
 SET bucket_location = "CH"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST6%";
UPDATE temp.bucket_locations
 SET bucket_location = "CA"
 WHERE UPPER(bucket_location) LIKE "NORTHAMERICA%";
UPDATE temp.bucket_locations
 SET bucket_location = "BR"
 WHERE UPPER(bucket_location) LIKE "SOUTHAMERICA%";

Exceeded rate limits: too many table update operations for this table

The solution here is to avoid issuing so many updates. Instead, we can issue just one, combining all the mappings together:

CREATE TEMP TABLE `mappings`
AS
SELECT *
FROM UNNEST(
  [STRUCT('US' AS abbr, 'US%' AS long),
   ('TW', 'ASIA-EAST1%'),
   ('JP', 'ASIA-NORTHEAST2%')
   # add the remaining mappings here
  ]);

UPDATE temp.bucket_locations
 SET bucket_location = abbr
 FROM mappings
 WHERE UPPER(bucket_location) LIKE long;

On the solution side, use await bigquery.createJob(jobConfig); instead of await bigquery.createQueryJob(jobConfig); — the former runs the query as a batch job, while the latter runs it as an interactive query job.

Running the query as a batch will not count toward the BigQuery API rate limits.
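For reference, this is roughly what the job configuration passed to createJob looks like when requesting batch priority. This is a hedged sketch of the Jobs API config shape, assuming an authenticated @google-cloud/bigquery client named `bigquery`; the query text is a placeholder:

```typescript
// Illustrative config fragment: priority 'BATCH' is what makes the
// difference versus the default interactive priority.
const jobConfig = {
  configuration: {
    query: {
      query: 'SELECT 1', // placeholder: your MERGE/UPDATE statement here
      useLegacySql: false,
      priority: 'BATCH',
    },
  },
};
// const [job] = await bigquery.createJob(jobConfig);
// const [rows] = await job.getQueryResults(); // waits for the batch job
```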

From the GCP documentation:

By default, BigQuery runs interactive query jobs, which means the query is executed as soon as possible. Interactive queries count toward your concurrent rate limit and your daily limit.

Batch queries don't count toward your concurrent rate limit.

I was running a MERGE query to deduplicate data, and switching to batch mode resolved the error. I didn't see any noticeable difference in processing time.
