GCS read/write access from a GKE Pod without credentials/auth

We created the GKE cluster with a public endpoint. The service account of the GKE cluster and node pool has the following roles:

"roles/compute.admin",
"roles/compute.viewer",
"roles/compute.securityAdmin",
"roles/iam.serviceAccountUser",
"roles/iam.serviceAccountAdmin",
"roles/resourcemanager.projectIamAdmin",
"roles/container.admin",
"roles/artifactregistry.admin",
"roles/storage.admin"

The node pool of the GKE cluster has the following OAuth scopes:

"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/devstorage.read_write",

The private GCS bucket has the same service account added as a principal, with the Storage Admin role.

When we try to read from or write to this bucket from a GKE Pod, we get the errors below.

# Read
AccessDeniedException: 403 Caller does not have storage.objects.list access to the Google Cloud Storage bucket

# Write
AccessDeniedException: 403 Caller does not have storage.objects.create access to the Google Cloud Storage object

We also checked this thread, but the solution was credential-oriented and couldn't help us. We would like to read/write without maintaining an SA auth key or any sort of credentials.

Please guide us on what is missing here.


UPDATE: as per the suggestion by @boredabdel, we checked and found that Workload Identity was already enabled on the GKE cluster as well as the node pool. We are using this module to create our cluster, where it is enabled by default. Still, we are facing connectivity issues.

Cluster Security: (screenshot)

NodePool Security: (screenshot)

This is for all who are looking for an answer on how to implement this solution via Terraform. Please refer below:

Create a Kubernetes service account with the Workload Identity annotation:
resource "kubernetes_manifest" "service_account" {
  manifest = {
    "apiVersion" = "v1"
    "kind"       = "ServiceAccount"
    "metadata" = {
      "name"      = "KSA_NAME"
      "namespace" = "NAME"
      "annotations" = {
        "iam.gke.io/gcp-service-account" = "GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"
      }
    }
    "automountServiceAccountToken" = true
  }
}
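
If the Google service account (GSA) referenced by the annotation does not already exist, it can be created with Terraform as well; a minimal sketch using the same placeholders:

resource "google_service_account" "gsa" {
  project      = "GSA_PROJECT"   # placeholder, same as in the annotation above
  account_id   = "GSA_NAME"      # placeholder
  display_name = "GSA used by GKE workloads via Workload Identity"
}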
Create an IAM policy binding to allow the Kubernetes service account to act as the IAM service account:
resource "google_service_account_iam_binding" "service-account-iam" {
  service_account_id = "projects/PROJECT_ID/serviceAccounts/GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"
  role               = "roles/iam.workloadIdentityUser"
  members            = [
    "serviceAccount:${var.project_id}.svc.id.goog[NAMESPACE/KSA-NAME]",
  ]
}
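
Note that the binding above only lets the KSA act as the GSA; the GSA itself still needs access to the bucket. If that grant is not already in place, a minimal sketch (BUCKET_NAME is a placeholder and the role is an assumption, so adjust it to the access you need):

resource "google_storage_bucket_iam_member" "gsa_bucket_access" {
  bucket = "BUCKET_NAME"                 # placeholder for the private bucket
  role   = "roles/storage.objectAdmin"   # or roles/storage.admin, per your requirements
  member = "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"
}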
Add the service account to the Deployment/StatefulSet manifest like this:
spec:
  serviceAccountName: KSA_NAME  # "serviceAccount" also works but is a deprecated alias
  containers:

NOTE:

  1. GSA_PROJECT and PROJECT_ID are the same if you are using the same project for all objects.
  2. The Kubernetes service account is created via the resource "kubernetes_manifest" method because there is an open issue with creating it via the resource "kubernetes_service_account" method, which fails with the error: Waiting for default secret of "NAMESPACE/KSA_NAME" to appear

It seems like you are trying to use the node service account to authenticate to GCS. You need to pass the service account key to the app you are calling the API from, as described in this doc.

If you want keyless authentication, my advice is to use Workload Identity.
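
For completeness, when the cluster is defined with plain Terraform resources (rather than the module mentioned in the update), Workload Identity is enabled roughly as below; this is a sketch with placeholder names, using attribute names from recent versions of the google provider:

resource "google_container_cluster" "cluster" {
  name     = "CLUSTER_NAME"   # placeholder
  location = "REGION"         # placeholder

  remove_default_node_pool = true
  initial_node_count       = 1

  workload_identity_config {
    workload_pool = "PROJECT_ID.svc.id.goog"   # placeholder project
  }
}

resource "google_container_node_pool" "pool" {
  name       = "NODE_POOL_NAME"   # placeholder
  cluster    = google_container_cluster.cluster.name
  location   = "REGION"           # placeholder
  node_count = 1

  node_config {
    # Expose the GKE metadata server so pods can exchange KSA tokens for GSA tokens
    workload_metadata_config {
      mode = "GKE_METADATA"
    }
  }
}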
