Use an IAM service account with the official google/cloud-sdk image
I have a bash script that uses the gcloud command-line tool to perform maintenance operations. This script works fine.
This script is in a docker image based on google/cloud-sdk, executed automatically through the container entrypoint.
The goal is to have it executed periodically through a Kubernetes CronJob. This works too.
I have currently not set up anything regarding authentication, so my script uses the Compute Engine default service account.
So far so good. However, I need to stop using this default service account and switch to a separate service account, with an API key file. That's where the problems start.
My plan was to mount my API key in the container through a Kubernetes Secret, and then use the GOOGLE_APPLICATION_CREDENTIALS environment variable (documented here) to have it loaded automatically, with the following (simplified) configuration:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-name
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: some-name
            image: some-image-path
            imagePullPolicy: Always
            env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/credentials/credentials.json"
            volumeMounts:
            - name: credentials
              mountPath: /credentials
          volumes:
          - name: credentials
            secret:
              secretName: some-secret-name
But apparently, the gcloud tool behaves differently from the programming-language SDKs, and ignores this environment variable completely. The image documentation isn't much help either, since it only gives you a way to change the gcloud config location.
Moreover, I'm pretty sure that I'm going to need a way to provide some extra configuration to gcloud down the road (project, zone, etc…), so I guess my solution should give me the option to do so from the start.
I've found a few ways to work around the issue:
Change the entrypoint script of my image to read environment variables, and perform environment preparation with gcloud commands:
That's the simplest solution, and the one that would allow me to keep my Kubernetes configuration the cleanest (each environment only differs by some environment variables). It requires, however, maintaining my own copy of the image I'm using, which I'd like to avoid if possible.
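For illustration, a minimal sketch of such a wrapper entrypoint for a custom image on top of google/cloud-sdk. The GCLOUD_PROJECT and GCLOUD_ZONE variable names are my own invention, not anything the official image recognizes:

```shell
#!/bin/bash
# Hypothetical wrapper entrypoint; all variable names are illustrative.
set -euo pipefail

# Activate the service account when a key file is provided.
if [ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ]; then
  gcloud auth activate-service-account \
    --key-file "${GOOGLE_APPLICATION_CREDENTIALS}"
fi

# Extra gcloud configuration, driven by plain environment variables.
if [ -n "${GCLOUD_PROJECT:-}" ]; then
  gcloud config set project "${GCLOUD_PROJECT}"
fi
if [ -n "${GCLOUD_ZONE:-}" ]; then
  gcloud config set compute/zone "${GCLOUD_ZONE}"
fi

# Hand over to the original command.
exec "$@"
```

The custom image would set this script as its ENTRYPOINT, so the CronJob spec only needs to provide env variables and the Secret mount.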
Override the entrypoint of my image with a Kubernetes configMap mounted as a file:
This option is probably the most convenient: use a separate configMap for each environment, where I can do whatever environment setup I want (such as gcloud auth activate-service-account --key-file /credentials/credentials.json). Still, it feels hacky, and is hardly readable compared to env variables.
Manually provide configuration files for gcloud (in /root/.config/gcloud):
I suppose this would be the cleanest solution. However, the configuration syntax doesn't seem really clear, and I'm not sure how easy it would be to provide this configuration through a configMap.
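For what it's worth, a sketch of what that could look like, mounted over /root/.config/gcloud/configurations. The INI keys are the ones gcloud config set writes to config_default; the account, project and zone values are placeholders, and this only pre-seeds configuration, it does not activate the credentials themselves:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gcloud-config
data:
  config_default: |
    [core]
    account = some-name@some-project.iam.gserviceaccount.com
    project = some-project

    [compute]
    zone = some-zone
```

The service account key would still need to be activated separately, which is why this option doesn't solve the authentication part on its own.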
As you can see, I found ways to work around my issue, but none of them satisfies me completely. Did I miss something?
For the record, here is the solution I finally used, although it's still a workaround in my opinion:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-name
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: some-name
            image: some-image-path
            imagePullPolicy: Always
            command: ["/bin/bash", "/k8s-entrypoint/entrypoint.sh"]
            volumeMounts:
            - name: credentials
              mountPath: /credentials
            - name: entrypoint
              mountPath: /k8s-entrypoint
          volumes:
          - name: credentials
            secret:
              secretName: some-secret-name
          - name: entrypoint
            configMap:
              name: entrypoint
With the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: entrypoint
data:
  entrypoint.sh: |
    #!/bin/bash
    gcloud auth activate-service-account --key-file /credentials/credentials.json
    # Chainload the original entrypoint
    exec sh -c /path/to/original/entrypoint.sh