
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80

I'm trying to deploy a cluster with self-managed node groups. No matter what config options I use, I always end up with the following error:

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

  with module.eks-ssp.kubernetes_config_map.aws_auth[0]
  on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
  resource "kubernetes_config_map" "aws_auth" {

The .tf file looks like this:

module "eks-ssp" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

  # EKS CLUSTER
  tenant            = "DevOpsLabs2"
  environment       = "dev-test"
  zone              = ""
  terraform_version = "Terraform v1.1.4"

  # EKS Cluster VPC and Subnet mandatory config
  vpc_id             = "xxx"
  private_subnet_ids = ["xxx", "xxx", "xxx", "xxx"]

  # EKS CONTROL PLANE VARIABLES
  create_eks         = true
  kubernetes_version = "1.19"

  # EKS SELF MANAGED NODE GROUPS
  self_managed_node_groups = {
    self_mg = {
      node_group_name        = "DevOpsLabs2"
      subnet_ids             = ["xxx", "xxx", "xxx", "xxx"]
      create_launch_template = true
      launch_template_os     = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
      custom_ami_id          = "xxx"
      public_ip              = true # Enable only for public subnets
      pre_userdata           = <<-EOT
        yum install -y amazon-ssm-agent
        systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent
      EOT

      disk_size     = 20
      instance_type = "t2.small"
      desired_size  = 2
      max_size      = 10
      min_size      = 2
      capacity_type = "" # Optional: use only for SPOT capacity, as capacity_type = "spot"

      k8s_labels = {
        Environment = "dev-test"
        Zone        = ""
        WorkerType  = "SELF_MANAGED_ON_DEMAND"
      }

      additional_tags = {
        ExtraTag    = "t2x-on-demand"
        Name        = "t2x-on-demand"
        subnet_type = "public"
      }

      create_worker_security_group = false # Set to true to create a dedicated security group for this node group
    },
  }
}

module "eks-ssp-kubernetes-addons" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

  eks_cluster_id = module.eks-ssp.eks_cluster_id

  # EKS Addons
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  # K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}

Providers:

terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

The answer from Marko E seems to fix this; I just ran into this issue. After applying that code, placed altogether in a separate providers.tf file, Terraform now makes it past the error. I will post later as to whether the deployment makes it fully through.

For reference, I was able to go from 65 resources created down to 42 resources created before I hit this error. This was using the exact best-practice / sample configuration recommended at the top of the README from AWS Consulting here: https://github.com/aws-samples/aws-eks-accelerator-for-terraform

Based on the example provided in the GitHub repo [1], my guess is that the provider configuration blocks are missing for this to work as expected. Looking at the code provided in the question, it seems that the following needs to be added:

data "aws_region" "current" {}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  region = data.aws_region.current.id
  alias  = "default" # this should match the named profile you used if at all
}

provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
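A note on the token-based configuration above: the token returned by aws_eks_cluster_auth is short-lived, so on long applies it can expire mid-run. As an alternative, the kubernetes provider also supports exec-based credentials, which fetch a fresh token on each call. A minimal sketch, assuming the AWS CLI v2 is installed and on the PATH:

```hcl
# Alternative kubernetes provider configuration using exec-based
# authentication instead of a static token from aws_eks_cluster_auth.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
  }
}
```

Use either the token attribute or the exec block, not both; the exec variant delegates authentication to `aws eks get-token` at plan/apply time.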

If helm is also required, I think the following block [2] needs to be added as well:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
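If the static token in the helm provider runs into the same expiry issue, the nested kubernetes block accepts the same exec pattern (again assuming the AWS CLI v2 is available):

```hcl
# Alternative helm provider configuration using exec-based authentication
# in the nested kubernetes block, mirroring the kubernetes provider setup.
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
    }
  }
}
```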

The provider argument references for kubernetes and helm are in [3] and [4], respectively.


[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47

[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55

[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference

[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference
