How to deploy a minimalistic EKS cluster with terraform?

Friends,

I am completely new to Terraform, but I am trying to learn here. At the moment I am reading the book Terraform: Up and Running, but I need to spin up an EKS cluster to deploy one of my learning projects. For this, I am following this [tutorial][1] from HashiCorp.

My main questions are the following: Do I really need all of this (see the Terraform code for AWS below) to deploy a cluster on AWS? How could I reduce the code below to the minimum necessary to spin up a cluster with a master and one worker that can communicate with each other?

On Google Cloud I could spin up a cluster with just these few lines of code:

provider "google" {
    credentials     = file(var.credentials)
    project         = var.project
    region          = var.region
}

resource "google_container_cluster" "primary" {
  name = var.cluster_name
  network = var.network
  location = var.region
  initial_node_count = var.initial_node_count
}

resource "google_container_node_pool" "primary_preemtible_nodes" {
  name = var.node_name
  location = var.region
  cluster = google_container_cluster.primary.name
  node_count = var.node_count

  node_config {
    preemptible = var.preemptible
    machine_type = var.machine_type
  }
}

Can I do something similar to spin up an EKS cluster? The code below is working, but I feel like I am biting off more than I can chew.

provider "aws" {
    region = "${var.AWS_REGION}"
    secret_key = "${var.AWS_SECRET_KEY}"
    access_key = "${var.AWS_ACCESS_KEY}"
  
}

# ----- Base VPC Networking -----

data "aws_availability_zones" "available_zones" {}

# Creates a virtual private network which will isolate
# the resources to be created.
resource "aws_vpc" "blur-vpc" {
    # Specifies the range of IP addresses for the VPC.
    cidr_block = "10.0.0.0/16"
    tags = "${
        map(
            "Name", "terraform-eks-node",
            "kubernetes.io/cluster/${var.cluster-name}", "shared"
        )
    }"
}

resource "aws_subnet" "subnet" {
  count = 2

  availability_zone = "${data.aws_availability_zones.available_zones.names[count.index]}"
  cidr_block        = "10.0.${count.index}.0/24"
  vpc_id            = "${aws_vpc.blur-vpc.id}"

  tags = "${
    map(
     "Name", "blur-subnet",
     "kubernetes.io/cluster/${var.cluster-name}", "shared",
    )
  }"
}

# The component that allows communication between 
# the VPC and the internet.
resource "aws_internet_gateway" "gateway" {
    # Attaches the gateway to the VPC.
    vpc_id = "${aws_vpc.blur-vpc.id}"

    tags = {
        Name = "eks-gateway"
    }
}

# Determines where network traffic from the gateway
# will be directed. 
resource "aws_route_table" "route-table" {
  vpc_id = "${aws_vpc.blur-vpc.id}"

  route {
      cidr_block = "0.0.0.0/0"
      gateway_id = "${aws_internet_gateway.gateway.id}"
  }
}

resource "aws_route_table_association" "table_association" {
    count = 2
    subnet_id       = "${aws_subnet.subnet.*.id[count.index]}"
    route_table_id  = "${aws_route_table.route-table.id}"
  
}

# -- Resources required for the master setup --

# The block below (IAM role + policy) allows the EKS service to
# manage or retrieve data from other AWS services.

# Similar to an IAM user, but not uniquely associated with one person.
# A role can be assumed by anyone who needs it.
resource "aws_iam_role" "blur-iam-role" {
  name = "eks-cluster"
  assume_role_policy = <<POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
POLICY
}

# Attaches the policy "AmazonEKSClusterPolicy" to the role created above. 
resource "aws_iam_role_policy_attachment" "blur-iam-role-AmazonEKSClusterPolicy" {
    policy_arn  = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
    role        = "${aws_iam_role.blur-iam-role.name}"
  
}

# Master security group

# A security group acts as a virtual firewall to control inbound and outbound traffic.
# This security group will control networking access to the K8S master.
resource "aws_security_group" "blur-cluster" {
    name            = "eks-blur-cluster"
    description     = "Allows communication with the worker nodes"
    vpc_id          = "${aws_vpc.blur-vpc.id}"

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }

    tags = {
        Name = "blur-cluster"
    }
}

# The actual master node
resource "aws_eks_cluster" "blur-cluster" {
    name = "${var.cluster-name}"
    # Attaches the IAM role created above.
    role_arn = "${aws_iam_role.blur-iam-role.arn}"

    vpc_config {
        # Attaches the security group created for the master.
        # Attaches also the subnets.
        security_group_ids  = ["${aws_security_group.blur-cluster.id}"]
        subnet_ids          = "${aws_subnet.subnet.*.id}"
    }

    depends_on = [ 
        "aws_iam_role_policy_attachment.blur-iam-role-AmazonEKSClusterPolicy",
        # "aws_iam_role_policy_attachment.blur-iam-role-AmazonEKSServicePolicy"
     ]
}

# -- Resources required for the worker nodes setup --

# IAM role for the workers. Allows worker nodes to manage or retrieve data
# from other services; it is required for the workers to join the cluster.
resource "aws_iam_role" "iam-role-worker"{
    name = "eks-worker"
    assume_role_policy = <<POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
POLICY
}

# Allows Amazon EKS worker nodes to connect to Amazon EKS clusters.
resource "aws_iam_role_policy_attachment" "iam-role-worker-AmazonEKSWorkerNodePolicy" {
    policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
    role = "${aws_iam_role.iam-role-worker.name}"
}

# This permission is required to modify the IP address configuration of worker nodes
resource "aws_iam_role_policy_attachment" "iam-role-worker-AmazonEKS_CNI_Policy" {
    policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
    role = "${aws_iam_role.iam-role-worker.name}"
}

# Allows listing repositories and pulling images.
resource "aws_iam_role_policy_attachment" "iam-role-worker-AmazonEC2ContainerRegistryReadOnly" {
    policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
    role = "${aws_iam_role.iam-role-worker.name}"

}

# An instance profile represents an EC2 instance (who am I?)
# and assumes a role (what can I do?).
resource "aws_iam_instance_profile" "worker-node" {
    name = "worker-node"
    role = "${aws_iam_role.iam-role-worker.name}"
}

# Security group for the worker nodes

resource "aws_security_group" "security-group-worker" {
    name = "worker-node"
    description = "Security group for worker nodes"
    vpc_id = "${aws_vpc.blur-vpc.id}"
    egress {
        cidr_blocks = [ "0.0.0.0/0" ]
        from_port = 0
        to_port = 0
        protocol = "-1"
    }

    tags = "${
      map(
          "Name", "blur-cluster",
          "kubernetes.io/cluster/${var.cluster-name}", "owned"
      )
    }"
}

resource "aws_security_group_rule" "ingress-self" {
    description = "Allow communication among nodes"
    from_port = 0
    to_port = 65535
    protocol = "-1"
    security_group_id = "${aws_security_group.security-group-worker.id}"
    source_security_group_id = "${aws_security_group.security-group-worker.id}"
    type = "ingress"
}

resource "aws_security_group_rule" "ingress-cluster-https" {
    description = "Allow worker to receive communication from the cluster control plane"
    from_port = 443
    to_port = 443
    protocol = "tcp"
    security_group_id = "${aws_security_group.security-group-worker.id}"
    source_security_group_id = "${aws_security_group.blur-cluster.id}"
    type = "ingress"
    
}

resource "aws_security_group_rule" "ingress-cluster-others" {
    description = "Allow worker to receive communication from the cluster control plane"
    from_port = 1025
    to_port = 65535
    protocol = "tcp"
    security_group_id = "${aws_security_group.security-group-worker.id}"
    source_security_group_id = "${aws_security_group.blur-cluster.id}"
    type = "ingress"
}

# Worker Access to Master

resource "aws_security_group_rule" "cluster-node-ingress-http" {
    description                     = "Allows pods to communicate with the cluster API server"
    from_port                       = 443
    to_port                         = "443"
    protocol                        = "tcp"
    security_group_id               = "${aws_security_group.blur-cluster.id}"
    source_security_group_id        = "${aws_security_group.security-group-worker.id}"
    type                            = "ingress"
  
}

# --- Worker autoscaling group ---
# This data will be used to filter and select an AMI which is compatible with the specific k8s version being deployed
data "aws_ami" "eks-worker" {
    filter {
      name = "name"
      values = ["amazon-eks-node-${aws_eks_cluster.blur-cluster.version}-v*"]
    }

    most_recent = true
    owners = ["602401143452"] 
}

data "aws_region" "current" {}

locals {
    node-user-data = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.blur-cluster.endpoint}' --b64-cluster-ca '${aws_eks_cluster.blur-cluster.certificate_authority.0.data}' '${var.cluster-name}'
USERDATA
}

# To spin up an auto scaling group, an "aws_launch_configuration" is needed.
# This launch configuration requires an "image_id" as well as a "security_group".
resource "aws_launch_configuration" "launch_config" {
    associate_public_ip_address     = true
    iam_instance_profile        = "${aws_iam_instance_profile.worker-node.name}"
    image_id                    = "${data.aws_ami.eks-worker.id}"
    instance_type               = "t2.micro"
    name_prefix                 = "terraform-eks"
    security_groups             = ["${aws_security_group.security-group-worker.id}"]
    user_data_base64            = "${base64encode(local.node-user-data)}"
    lifecycle {
      create_before_destroy     = true
    }
  
}

# Actual autoscaling group
resource "aws_autoscaling_group" "autoscaling" {
    desired_capacity = 2
    launch_configuration        = "${aws_launch_configuration.launch_config.id}" 
    max_size                    = 2
    min_size                    = 1
    name                        = "terraform-eks"
    vpc_zone_identifier         = "${aws_subnet.subnet.*.id}"

    tag {
      key = "Name"
      value = "terraform-eks"
      propagate_at_launch = true
    }

# "kubernetes.io/cluster/*" tag allows EKS and K8S to discover and manage compute resources.
    tag {
      key                       = "kubernetes.io/cluster/${var.cluster-name}"
      value                     = "owned"
      propagate_at_launch       = true
    }
}

  [1]: https://registry.terraform.io/providers/hashicorp/aws/2.33.0/docs/guides/eks-getting-started#preparation

Yes, you should create most of them, because as you can see in the Terraform AWS documentation, a VPC configuration is required to deploy an EKS cluster. But you don't have to set up a security group rule for the workers to access the master. Also, try to use the aws_eks_node_group resource to create the worker node group. It will save you from creating a launch configuration and autoscaling group separately; a rough sketch is shown after the link below.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group
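For illustration only, here is an untested sketch of a managed node group in Terraform 0.12+ syntax. It reuses the worker IAM role, policy attachments, and subnets already defined in your question; the node group name, instance type, and scaling numbers are placeholders you would adjust:

# Minimal sketch (untested) of a managed node group replacing the
# launch configuration and autoscaling group from the question.
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.blur-cluster.name
  node_group_name = "blur-workers"          # placeholder name
  node_role_arn   = aws_iam_role.iam-role-worker.arn
  subnet_ids      = aws_subnet.subnet.*.id

  # Placeholder sizing; pick whatever fits your project.
  instance_types = ["t3.small"]

  scaling_config {
    desired_size = 2
    max_size     = 2
    min_size     = 1
  }

  # The worker role must have its policies attached before the node group is created.
  depends_on = [
    aws_iam_role_policy_attachment.iam-role-worker-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.iam-role-worker-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.iam-role-worker-AmazonEC2ContainerRegistryReadOnly,
  ]
}

With a managed node group, the AMI lookup, user data, instance profile, launch configuration, and autoscaling group from your question should no longer be needed, since EKS provisions and registers the workers itself.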
