
data persistence to original data source

Can anybody tell me whether the use case below makes sense and is applicable to the Intake software component?

We would like to use Intake to build an abstraction layer or API service endpoint that encapsulates typical data operations, such as data retrieval and data persistence back to the original data systems. In short, we want to build read() and save() operations against a DB system such as GCP BigQuery.
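To make this concrete, here is a rough sketch of the kind of facade we have in mind. The class and method names are our own invention, not Intake APIs, and the google-cloud-bigquery calls are just one possible way to implement it:

```python
# Hypothetical sketch of the desired abstraction layer.
# BigQueryStore, read(), and save() are illustrative names only.
import pandas as pd
from google.cloud import bigquery


class BigQueryStore:
    """Thin read/save facade over a BigQuery project."""

    def __init__(self, project: str):
        self.client = bigquery.Client(project=project)

    def read(self, query: str) -> pd.DataFrame:
        # Run a query and return the result as a DataFrame.
        return self.client.query(query).to_dataframe()

    def save(self, df: pd.DataFrame, table_id: str) -> None:
        # Load a DataFrame back into a BigQuery table.
        job = self.client.load_table_from_dataframe(df, table_id)
        job.result()  # wait for the load job to finish
```

The question is whether Intake can provide the save() half of this out of the box, so we don't have to hand-write it per backend.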

Intake offers limited writing capabilities (right now): for each data container type, only one format is supported by the "persist" (local storage) and "export" (upload) functions. Your BigQuery data is probably dataframe-like, so that format would be Parquet. I believe you could simply export to GCS and have BigQuery run against those files, but I don't know the details of how you would go about it.
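If you want to try that route, a minimal sketch might look like the following. It assumes an Intake 1.x dataframe source (where sources have an export() method), gcsfs installed so gs:// paths are writable, and placeholder catalog, bucket, and table names:

```python
import intake
from google.cloud import bigquery

# Load a dataframe-type source from a catalog (placeholder names).
cat = intake.open_catalog("catalog.yml")
source = cat.my_table

# export() writes the data out in the one supported format for
# dataframe containers, Parquet; this may produce a directory of
# part files rather than a single file.
source.export("gs://my-bucket/exports/my_table.parquet")

# Point a BigQuery load job at the exported Parquet data.
client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
)
job = client.load_table_from_uri(
    # The trailing wildcard covers both a single file and part files.
    "gs://my-bucket/exports/my_table.parquet*",
    "my_project.my_dataset.my_table",
    job_config=job_config,
)
job.result()  # wait for the load to complete
```

Instead of a load job, you could also define an external table over the same GCS files so BigQuery queries them in place, but the load-job approach above is the simpler one.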

There is no general way to write data back to every backend that is supported for reading.
