
Library - google.cloud: Object has no attribute 'schema_from_json'

I am trying to use the 'schema_from_json' attribute of google.cloud.bigquery.client.Client, but the attribute is not found on the object, even though it appears in the library documentation.

I have already updated the library, but the result stays the same.

My Python version is 3.7.

Source: https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.html

from google.cloud import bigquery
dir(bigquery.client.Client)
['SCOPE',
 '_SET_PROJECT',
 '__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__getstate__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 '_call_api',
 '_determine_default',
 '_do_multipart_upload',
 '_do_resumable_upload',
 '_get_query_results',
 '_http',
 '_initiate_resumable_upload',
 'cancel_job',
 'copy_table',
 'create_dataset',
 'create_table',
 'dataset',
 'delete_dataset',
 'delete_table',
 'extract_table',
 'from_service_account_json',
 'get_dataset',
 'get_job',
 'get_service_account_email',
 'get_table',
 'insert_rows',
 'insert_rows_json',
 'job_from_resource',
 'list_datasets',
 'list_jobs',
 'list_partitions',
 'list_projects',
 'list_rows',
 'list_tables',
 'load_table_from_dataframe',
 'load_table_from_file',
 'load_table_from_uri',
 'location',
 'query',
 'update_dataset',
 'update_table']
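
The listing above does not include schema_from_json, which suggests the interpreter is importing an older installation of the library. A minimal check (a sketch, assuming only the library itself) to see which version and installation path are actually picked up:

from google.cloud import bigquery

# Which version is this interpreter importing, and from where?
print(bigquery.__version__)
print(bigquery.__file__)
# True only if the installed version already provides the method
print(hasattr(bigquery.Client, "schema_from_json"))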

I tested this from Cloud Shell and it works there.

Here is the relevant pip dependency in Cloud Shell: google-cloud-bigquery 1.18.0
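
If the local version is older than the 1.18.0 found in Cloud Shell, upgrading the package for the same Python 3.7 interpreter that runs the script should make the attribute available (a sketch; the interpreter name may differ on your system):

python3.7 -m pip install --upgrade google-cloud-bigquery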

Here is my working code:

from google.cloud import bigquery
client = bigquery.Client()
dataset_id = 'us_dataset'

dataset_ref = client.dataset(dataset_id)
job_config = bigquery.LoadJobConfig()
# I use from file path version
schema_dict = client.schema_from_json("schemaname")
print(schema_dict)
job_config.schema = schema_dict
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.create_disposition = bigquery.CreateDisposition.CREATE_IF_NEEDED

# The source format defaults to CSV, so the line below is optional.
job_config.source_format = bigquery.SourceFormat.CSV
uri = "gs://MY_BUCKET/name.csv"

load_job = client.load_table_from_uri(
    uri, dataset_ref.table("name"), job_config=job_config
)  # API request
print("Starting job {}".format(load_job.job_id))

load_job.result()  # Waits for table load to complete.
print("Job finished.")

destination_table = client.get_table(dataset_ref.table("name"))
print("Loaded {} rows.".format(destination_table.num_rows))

I generate the schema file with this command: bq show --schema us_dataset.name > schemaname

The result is here:

[{"type":"STRING","name":"name","mode":"NULLABLE"},{"type":"STRING","name":"id","mode":"NULLABLE"}]

