
Set GOOGLE_APPLICATION_CREDENTIALS securely

I am trying to use Google Cloud services in a secure way. I am currently providing my credentials via putenv('GOOGLE_APPLICATION_CREDENTIALS=path/to/file.json'); (see https://cloud.google.com/docs/authentication/production).

I want to test publicly by putting my code on a shared host / virtual Linux server. However, if I set the permissions on the JSON file so that it is secure, my application can't access it. If I make it public, the JSON file I need to include exposes a private key in plain text.

I've done an hour's worth of searching, but can't find how I am supposed to provide credentials securely.

  • Am I supposed to have the file on my server in a secure location?
  • Is there an alternative for production websites to passing a publicly readable file path into putenv?

One solution is to put the content of your credential JSON (base64-encoded) into an ENV variable via the server's configuration interface, and then, when the server starts, materialize the key file from that ENV variable.

E.g. echo $CREDENTIALS | base64 -d > /path/to/cred.json, then export GOOGLE_APPLICATION_CREDENTIALS="/path/to/cred.json"
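A minimal sketch of that startup step. In production, CREDENTIALS would come from the host's configuration interface; it is set inline here only so the example is self-contained, and the key path is an assumption:

```shell
# In production, CREDENTIALS is injected by the host's config panel;
# it is set inline here only to keep the sketch self-contained.
set -eu
CREDENTIALS="$(printf '%s' '{"type":"service_account"}' | base64)"

CRED_PATH="./cred.json"   # assumed path; use a directory your webserver does not serve
umask 077                 # new files are created mode 0600 (owner read/write only)
printf '%s' "$CREDENTIALS" | base64 -d > "$CRED_PATH"
export GOOGLE_APPLICATION_CREDENTIALS="$CRED_PATH"
```

Setting umask before writing means the decoded key never exists on disk with permissive modes, even briefly.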

There is no simple answer to your question.

Part 1 - Your host is not on Google Cloud

Is your host secure? If yes, then it is OK to place the service account JSON key in a directory that is not accessible from your public-facing applications. By this I mean: do not place the file under your webserver's document root. Use a location such as /config and then tighten the security on that directory so that only authorized users and your application can read the file.
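A hedged sketch of that lockdown. A local ./config directory and a placeholder file stand in for the real out-of-webroot location and key:

```shell
# Assumed paths: in practice use a directory outside the webserver's
# document root (e.g. /config) owned by the account running the app.
mkdir -p ./config
printf '%s' '{}' > ./config/service-account.json   # stand-in for the real key
chmod 700 ./config                        # only the owner may enter the directory
chmod 600 ./config/service-account.json   # only the owner may read the key
```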

Once you have determined a secure location for your credentials file, specify that service account file directly in your code. Do not use environment variables or command-line switches, and do not use GOOGLE_APPLICATION_CREDENTIALS. Some comments suggest using KMS. Using KMS is a good idea, but you have a chicken-or-egg situation: you need credentials to call KMS decrypt. If a bad actor can access your encrypted credentials, they can also access your source code, or reverse-engineer the application, to see the decryption method and the service account used for decryption.

Note: Specifying a static location for credential files is not a best practice for DevOps. However, your question is about security, not CI/CD. You are using a shared server, which can mean many things, and DevOps is probably not integrated into your deployments or system design.

If your host is not secure, then you have no viable options. Nothing you can do will prevent a skilled engineer from reversing your method of obfuscation.

Part 2 - Your host is on Google Cloud (Compute Engine, Cloud Run, App Engine, etc)

Note: The techniques below are in beta. This is the future of Google Cloud authorization: Identity-Based Access Control to complement Role-Based Access Control and, in some cases, replace it.

You can assign the host a service account that has zero permissions. Note the word "assign", not "create": no files are involved. Then you can use identity-based access control (the IAM member account ID of the service account) to access resources. I wrote two articles on this for Google Cloud Run; the technique also applies to other Google services (Compute Engine, Cloud Functions, KMS, Cloud Storage, Cloud Scheduler, etc.).

You can edit the .bashrc file to add export GOOGLE_APPLICATION_CREDENTIALS="/path/to/cred.json", then run: source .bashrc

(Tested on Linux Mint 19.)
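The steps above, written out. A local ./.bashrc file stands in for ~/.bashrc so the sketch is self-contained, and the key path is an assumption:

```shell
# Append the export to the shell startup file, then re-read it.
# In practice the target is ~/.bashrc; a local file is used here.
echo 'export GOOGLE_APPLICATION_CREDENTIALS="/home/me/config/cred.json"' >> ./.bashrc
. ./.bashrc   # same effect as: source .bashrc
```

Note this only affects interactive shells that read .bashrc; a webserver process started by systemd or similar will not see the variable.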

You can run "gcloud auth application-default login" from your terminal to authenticate using end-user credentials.

Once you are authenticated, you can construct default client objects for any GCP resource without passing explicit credentials.
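For reference, the commands involved (shown as a configuration sketch; they require the gcloud CLI and an interactive browser login, so they are not run here):

```shell
# Authenticate with your own (end-user) Google account; this writes
# Application Default Credentials to a well-known location on disk.
gcloud auth application-default login

# Optional sanity check: print an access token derived from those credentials.
gcloud auth application-default print-access-token
```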

