
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80

I am trying to deploy a cluster with self-managed node groups. No matter which configuration options I use, I always end up with the following error:

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

  with module.eks-ssp.kubernetes_config_map.aws_auth[0],
  on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
  resource "kubernetes_config_map" "aws_auth" {

The .tf file looks like this:

module "eks-ssp" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

  # EKS CLUSTER
  tenant            = "DevOpsLabs2"
  environment       = "dev-test"
  zone              = ""
  terraform_version = "Terraform v1.1.4"

  # EKS Cluster VPC and Subnet mandatory config
  vpc_id             = "xxx"
  private_subnet_ids = ["xxx", "xxx", "xxx", "xxx"]

  # EKS CONTROL PLANE VARIABLES
  create_eks         = true
  kubernetes_version = "1.19"

  # EKS SELF MANAGED NODE GROUPS
  self_managed_node_groups = {
    self_mg = {
      node_group_name        = "DevOpsLabs2"
      subnet_ids             = ["xxx", "xxx", "xxx", "xxx"]
      create_launch_template = true
      launch_template_os     = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
      custom_ami_id          = "xxx"
      public_ip              = true # Enable only for public subnets
      pre_userdata           = <<-EOT
        yum install -y amazon-ssm-agent
        systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent
      EOT

      disk_size     = 20
      instance_type = "t2.small"
      desired_size  = 2
      max_size      = 10
      min_size      = 2
      capacity_type = "" # Optional: use this only for SPOT capacity, as capacity_type = "spot"

      k8s_labels = {
        Environment = "dev-test"
        Zone        = ""
        WorkerType  = "SELF_MANAGED_ON_DEMAND"
      }

      additional_tags = {
        ExtraTag    = "t2x-on-demand"
        Name        = "t2x-on-demand"
        subnet_type = "public"
      }

      create_worker_security_group = false # Creates a dedicated sec group for this Node Group
    },
  }
}

module "eks-ssp-kubernetes-addons" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

  eks_cluster_id = module.eks-ssp.eks_cluster_id

  # EKS Addons
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  # K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}

Providers:

terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

Marko E's answer above seems to solve this problem, which I just ran into. After applying the code above in a separate providers.tf file, Terraform now gets past the error. I will post later whether the deployment completes fully.

For reference, before running into this error I was able to go from 65 resources to create down to 42 resources created. This was while using the recommended best-practice/example configuration at the top of the README of the AWS samples repository: https://github.com/aws-samples/aws-eks-accelerator-for-terraform

Based on the example provided in the GitHub repository [1], my guess is that a provider configuration block is missing for this to work as expected. Looking at the code provided in the question, it seems the following needs to be added:

data "aws_region" "current" {}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  region = data.aws_region.current.id
  alias  = "default" # this should match the named profile you used if at all
}

provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

If helm is also required, I think the following block will also need to be added [2]:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
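As a side note (my own suggestion, not part of the original answer): with recent versions of the hashicorp/kubernetes provider, an exec block that calls aws eks get-token can be used instead of the static aws_eks_cluster_auth token, so credentials are refreshed on demand rather than expiring mid-apply. A minimal sketch, assuming the AWS CLI is installed and module.eks-ssp.eks_cluster_id resolves to the cluster name:

# Alternative to the token-based kubernetes provider block above.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a fresh token via the AWS CLI each time the provider authenticates.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
  }
}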

The provider argument references for kubernetes and helm are available in [3] and [4], respectively.


[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47

[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55

[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference

[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference
