
How to load large datasets to R from BigQuery?

I have tried two approaches with the bigrquery package, like this:

library(bigrquery)
library(DBI)

con <- dbConnect(
  bigrquery::bigquery(),
  project = "YOUR PROJECT ID HERE",
  dataset = "YOUR DATASET"
)
sql <- "YOUR LARGE QUERY HERE" # long query saved to a view; its SELECT goes here
test <- dbGetQuery(con, sql, n = 10000, max_pages = Inf)

tb <- bigrquery::bq_project_query(project, sql) # project = your project ID string
bq_table_download(tb, max_results = 1000)

but neither gets past the error "Error: Requested Resource Too Large to Return [responseTooLarge]". There is a possibly related question here, but I am interested in any tool that gets the job done: I have already tried the solutions outlined there and they failed.

How can I load large datasets into R from BigQuery?

As @hrbrmstr has already suggested to you, the documentation specifically mentions:

> @param page_size The number of rows returned per page. Make this smaller if you have many fields or large records and you are seeing a 'responseTooLarge' error.

In this documentation on r-project.org, you will find a different recommendation in the explanation of this function (page 13):

This retrieves rows in chunks of page_size. It is most suitable for results of smaller queries (<100 MB, say). For larger queries, it is better to export the results to a CSV file stored on Google Cloud and use the bq command line tool to download it locally.
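That export route can also be scripted from R. Below is a minimal sketch, assuming `project` and `sql` are defined as in the question, a Cloud Storage bucket you can write to (gs://my-bucket is a placeholder), and the gsutil CLI installed locally for the copy step (the quoted docs mention the bq tool; gsutil here is my assumption); bq_table_save() performs the extract job:

# Minimal sketch: export large results to GCS, then pull them down locally.
# Assumes a writable placeholder bucket gs://my-bucket and the gsutil
# command line tool on the PATH.
library(bigrquery)

tb <- bq_project_query(project, sql)

# The "*" wildcard lets BigQuery shard large results across several CSVs.
bq_table_save(tb, "gs://my-bucket/export/result-*.csv")

# Copy the shards locally, then bind them back together in R.
system("gsutil cp 'gs://my-bucket/export/result-*.csv' ./bq_export/")
files <- list.files("bq_export", pattern = "\\.csv$", full.names = TRUE)
df <- do.call(rbind, lapply(files, read.csv))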

This is what worked for me.

# Make page_size some value greater than the default (10000)
x <- 50000

bq_table_download(tb, page_size=x)

Note that if you set page_size to some arbitrarily high value (100000 in my case), you will start to see many empty rows.

I have still not found a good rule of thumb for what the right page_size value is for a given table size.
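In the absence of a rule of thumb, one pragmatic workaround is to start high and back off on failure. The helper below is a hypothetical sketch (download_with_backoff is not part of bigrquery): it halves page_size whenever the download errors out or comes back with fully-empty rows:

# Hypothetical helper: retry with progressively smaller pages until the
# result fits under the response size limit and contains no all-NA rows.
download_with_backoff <- function(tb, page_size = 100000, min_page_size = 1000) {
  while (page_size >= min_page_size) {
    result <- tryCatch(
      bigrquery::bq_table_download(tb, page_size = page_size),
      error = function(e) NULL  # e.g. responseTooLarge
    )
    # Rows with no non-NA values are the "empty row" symptom noted above.
    if (!is.null(result) && !any(rowSums(!is.na(result)) == 0)) {
      return(result)
    }
    page_size <- page_size %/% 2  # halve and retry
  }
  stop("download failed even at the minimum page size")
}

df <- download_with_backoff(tb)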

I see that someone has created a package, bigrquerystorage, that makes this easier. It involves some setup, but you can then download via the BigQuery Storage API like this:

## Auth is done automagically using Application Default Credentials.
## Use the following command once to set it up :
## gcloud auth application-default login --billing-project={project}
library(bigrquerystorage)

# TODO(developer): Set the project_id variable.
# project_id <- 'your-project-id'
#
# The read session is created in this project. This project can be
# different from that which contains the table.

rows <- bqs_table_download(
  x = "bigquery-public-data:usa_names.usa_1910_current"
  , parent = project_id
  # , snapshot_time = Sys.time() # a POSIX time
  , selected_fields = c("name", "number", "state")
  , row_restriction = 'state = "WA"'
  # , as_tibble = TRUE # FALSE : arrow, TRUE : arrow->as.data.frame
)

sprintf("Got %d unique names in states: %s",
        length(unique(rows$name)),
        paste(unique(rows$state), collapse = " "))

# Replace bigrquery::bq_table_download
library(bigrquery)
rows <- bigrquery::bq_table_download("bigquery-public-data.usa_names.usa_1910_current")
# Downloading 6,122,890 rows in 613 pages.
overload_bq_table_download(project_id)
rows <- bigrquery::bq_table_download("bigquery-public-data.usa_names.usa_1910_current")
# Streamed 6122890 rows in 5980 messages.

I have also just started using BigQuery. I think it should go something like this.

The current bigrquery release can be installed from CRAN:

install.packages("bigrquery")

The latest development version can be installed from GitHub:

install.packages('devtools')
devtools::install_github("r-dbi/bigrquery")

Usage

Low-level API

library(bigrquery)
billing <- bq_test_project() # replace this with your project ID 
sql <- "SELECT year, month, day, weight_pounds FROM `publicdata.samples.natality`"

tb <- bq_project_query(billing, sql)
#> Auto-refreshing stale OAuth token.
bq_table_download(tb, max_results = 10)

DBI

library(DBI)

con <- dbConnect(
  bigrquery::bigquery(),
  project = "publicdata",
  dataset = "samples",
  billing = billing
)
con 
#> <BigQueryConnection>
#>   Dataset: publicdata.samples
#>   Billing: bigrquery-examples

dbListTables(con)
#> [1] "github_nested"   "github_timeline" "gsod"            "natality"       
#> [5] "shakespeare"     "trigrams"        "wikipedia"

dbGetQuery(con, sql, n = 10)



dplyr

library(dplyr)

natality <- tbl(con, "natality")

natality %>%
  select(year, month, day, weight_pounds) %>% 
  head(10) %>%
  collect()
