
Access from gcloud ml-engine jobs to BigQuery

I have a Python ML process which connects to BigQuery using a local JSON file that the environment variable GOOGLE_APPLICATION_CREDENTIALS points to (the file contains my keys supplied by Google; see the authentication getting-started guide).

When running it locally it works great.

I'm now looking to deploy my model through Google's ML Engine, specifically using the shell command gcloud ml-engine jobs submit training .

However, after I ran my process and looked at the logs in console.cloud.google.com/logs/viewer I saw that gcloud can't access BigQuery and I'm getting the following error:

 google.auth.exceptions.DefaultCredentialsError: File:
 /Users/yehoshaphatschellekens/Desktop/google_cloud_xgboost/....-.....json was not found.

Currently I don't think that gcloud ml-engine jobs submit training takes the JSON file with it (I thought that gcloud automatically has access to BigQuery; apparently not).

One possible workaround is to save my personal .json into my Python dependencies in the other sub-package folder (see packaging-trainer ) and import it.
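For the packaging idea above, the key file would also have to be declared as package data so that gcloud actually uploads it along with the trainer code. A minimal sketch of such a setup.py, assuming a package named trainer and a key file service_account.json (both names are placeholders), noting that this bakes a secret into the uploaded package:

```python
# setup.py for the trainer package (sketch; package and file names are placeholders)
from setuptools import setup, find_packages

setup(
    name="trainer",
    version="0.1",
    packages=find_packages(),
    # Ship the key file alongside the code so it is present on the
    # ML Engine workers. Keep in mind this embeds a secret in the package.
    package_data={"trainer": ["service_account.json"]},
    include_package_data=True,
)
```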

Is this solution feasible/safe?

Is there any other workaround to this issue?

The path should be absolute and use backslashes on Windows:

GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\[FILE_NAME].json"

Set it this way in your Python code (using a raw string so the backslashes are not treated as escape sequences):

import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r"C:\PATH.JSON"

Example with the Google Translate API here .
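A quick way to see the escaping pitfall with Windows paths (the path below is just the placeholder from above):

```python
import os

# In a normal string literal, "\U" starts a unicode escape (SyntaxError)
# and "\t" becomes a tab character. A raw string keeps the backslashes
# exactly as written.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = r"C:\Users\username\Downloads\[FILE_NAME].json"
print(os.environ["GOOGLE_APPLICATION_CREDENTIALS"])
```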

What I did eventually was to upload the JSON to a Cloud Storage bucket and then download it into my project each time I launch the ML Engine training process:

os.system('gsutil cp gs://secured_bucket.json .')
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "......json"
