Terraform: enable EKS cluster access for other IAM users
I want to set up an EKS cluster and enable other IAM users to connect to and tinker with the cluster. To do so, AWS recommends patching a config map, which I did. Now I want to enable the same "feature" using Terraform.
I use the Terraform EKS module and read in its documentation, in the section "Due to the plethora of tooling a...", that authentication is basically up to me.
Now I use the Terraform Kubernetes provider to update this config map:
resource "kubernetes_config_map" "aws_auth" {
  depends_on = [module.eks.cluster_id]

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = THATS_MY_UPDATED_CONFIG
}
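Since EKS creates the aws-auth ConfigMap itself once nodes join the cluster, one alternative is to manage only the *data* of the existing object rather than the object itself. Newer versions of the Kubernetes provider (2.10+, so newer than the 2.7.1 pinned below) offer `kubernetes_config_map_v1_data` for exactly this; a sketch, where the role mapping values are hypothetical placeholders:

```hcl
# Patches the data of the existing aws-auth ConfigMap instead of creating it.
# Requires hashicorp/kubernetes >= 2.10; the mapRoles entry below is a
# hypothetical placeholder.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/SomeNodeRole"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }

  # Take ownership of fields that EKS already set.
  force = true
}
```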
But it does not succeed, and I get the following error:
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: 2022/01/07 15:49:55 [DEBUG] Kubernetes API Response Details:
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: ---[ RESPONSE ]--------------------------------------
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: HTTP/2.0 409 Conflict
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Content-Length: 206
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Audit-Id: 15....
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Cache-Control: no-cache, private
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Content-Type: application/json
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Date: Fri, 07 Jan 2022 14:49:55 GMT
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: X-Kubernetes-Pf-Flowschema-Uid: f43...
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: X-Kubernetes-Pf-Prioritylevel-Uid: 0054...
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5:
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: {
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "kind": "Status",
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "apiVersion": "v1",
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "metadata": {},
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "status": "Failure",
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "message": "configmaps \"aws-auth\" already exists",
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "reason": "AlreadyExists",
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "details": {
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "name": "aws-auth",
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "kind": "configmaps"
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: },
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "code": 409
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: }
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5:
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: -----------------------------------------------------
2022-01-07T15:49:55.775+0100 [ERROR] vertex "module.main.module.eks.kubernetes_config_map.aws_auth" error: configmaps "aws-auth" already exists
╷
│ Error: configmaps "aws-auth" already exists
│
│ with module.main.module.eks.kubernetes_config_map.aws_auth,
│ on ../../modules/eks/eks-iam-map-users.tf line 44, in resource "kubernetes_config_map" "aws_auth":
│ 44: resource "kubernetes_config_map" "aws_auth" {
│
╵
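The 409 means EKS has already created the aws-auth ConfigMap (it does so as soon as nodes join the cluster), so Terraform cannot create a second one. For reference, one commonly suggested workaround is to import the existing object into state instead; with Terraform 1.5+ this can even be expressed declaratively (a sketch, assuming the `kubernetes_config_map.aws_auth` resource address from the question):

```hcl
# Terraform 1.5+ declarative import: adopt the ConfigMap that EKS created
# instead of trying to create it. On older versions, the CLI equivalent is:
#   terraform import kubernetes_config_map.aws_auth kube-system/aws-auth
import {
  to = kubernetes_config_map.aws_auth
  id = "kube-system/aws-auth"
}
```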
It seems this is a common problem, and since everyone using EKS and Terraform should run into it, I ask myself how to solve it. The related issue I found is closed. I'm somewhat lost; does anyone have an idea?
I use the following versions:
terraform {
  required_providers {
    # https://registry.terraform.io/providers/hashicorp/aws/latest
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.70"
    }
    # https://registry.terraform.io/providers/hashicorp/kubernetes/latest
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.7.1"
    }
  }
  required_version = ">= 1.1.2"
}
...

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.0.3"
  ...
I use 17.24.0 and have no idea what is new in 18.0.3.
In my case, I follow this example: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v17.24.0/examples/complete/main.tf
My main.tf:
locals {
  eks_map_roles = []
  eks_map_users = []
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

module "eks" {
  source = "..."
  ...
  eks_map_roles = local.eks_map_roles
  eks_map_users = local.eks_map_users
  ...
}
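The locals above are empty; in v17.x of the module, entries follow the `map_roles`/`map_users` shape from the module's variables. A sketch, with hypothetical account IDs, names, and ARNs:

```hcl
# Hypothetical mappings; replace account IDs, role/user names with your own.
locals {
  eks_map_roles = [
    {
      rolearn  = "arn:aws:iam::111122223333:role/AdminRole"
      username = "admin"
      groups   = ["system:masters"]
    }
  ]
  eks_map_users = [
    {
      userarn  = "arn:aws:iam::111122223333:user/alice"
      username = "alice"
      groups   = ["system:masters"]
    }
  ]
}
```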
To add another user, you can follow this doc: https://aws.amazon.com/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
I think you should add the role (don't forget to remove the path).
map_users is deprecated in v18.x of the EKS module:
Support for managing aws-auth configmap has been removed. This change also removes the dependency on the Kubernetes Terraform provider, the local dependency on aws-iam-authenticator for users, as well as the reliance on the forked http provider to wait and poll on cluster creation. To aid users in this change, an output variable aws_auth_configmap_yaml has been provided which renders the aws-auth configmap necessary to support at least the IAM roles used by the module (additional mapRoles/mapUsers definitions to be provided by users)
This assumes you are allowing the EKS module to create a ConfigMap for you, which IMO is recommended, as it allows you to manipulate the aws-auth ConfigMap independently.
Create a locals block as shown below, which first pulls the default aws-auth ConfigMap that the Terraform EKS module creates, then appends the rolearn and groups mappings as needed.
# Supplies the account ID interpolated into the role ARN below.
data "aws_caller_identity" "current" {}

locals {
  # Start from the aws-auth rendered by the EKS module, then append mappings.
  aws_auth_configmap_yaml = <<-EOT
    ${chomp(module.eks.aws_auth_configmap_yaml)}
    - rolearn: arn:aws:iam::${data.aws_caller_identity.current.id}:role/RoleName
      username: admin
      groups:
        - system:masters
  EOT
}
and then create a kubectl_manifest resource using https://registry.terraform.io/providers/gavinbunney/kubectl/latest:
resource "kubectl_manifest" "aws_auth" {
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Terraform
  name: aws-auth
  namespace: kube-system
${local.aws_auth_configmap_yaml}
YAML

  depends_on = [module.eks]
}
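For completeness, the kubectl provider itself needs to be declared and configured against the cluster; a sketch, assuming the same EKS data sources shown in the question's main.tf:

```hcl
terraform {
  required_providers {
    # Community provider that supplies the kubectl_manifest resource.
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }
  }
}

provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
```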