Terraform: Specify network security group rules when creating an AKS cluster using azurerm_kubernetes_cluster

I am using the Terraform azurerm provider version 1.19 to create an AKS cluster. I'd like to specify network security group rules when creating the cluster, but I can't figure out how to reference the security group that gets created, since the generated group is given a name containing a random number.

Something like:

aks-agentpool-33577837-nsg

Is there a way to reference the created NSG, or at least output the random number used in the name?

Configuration to create the cluster:

resource "azurerm_resource_group" "k8s" {
  name     = "${var.resource_group_name}"
  location = "${var.location}"
}

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "${var.cluster_name}"
  location            = "${azurerm_resource_group.k8s.location}"
  resource_group_name = "${azurerm_resource_group.k8s.name}"
  dns_prefix          = "${var.dns_prefix}"
  kubernetes_version  = "${var.kubernetes_version}"

  linux_profile {
    admin_username = "azureuser"

    ssh_key {
      key_data = "${file("${var.ssh_public_key}")}"
    }
  }

  agent_pool_profile {
    name    = "default"
    count   = "${var.agent_count}"
    vm_size = "${var.vm_size}"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }

  tags {
    source      = "terraform"
    environment = "${var.environment}" 
  }
}

This generates a security group that I'd like to add additional rules to. Here's a rule I'd like to add so that the nginx controller's liveness probe can be checked.

resource "azurerm_network_security_rule" "nginx_liveness_probe" {
  name                        = "nginx_liveness"
  priority                    = 100 
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "${var.nginx_liveness_probe_port}"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "${azurerm_kubernetes_cluster.k8s.node_resource_group}"
  network_security_group_name = "???"  # How do I reference the auto-generated NSG?
  description                 = "Allow access to nginx liveness probe"
}

The solution that answers your question:

data "azurerm_resources" "example" {
  resource_group_name = azurerm_kubernetes_cluster.example.node_resource_group

  type = "Microsoft.Network/networkSecurityGroups"
}

output "name_nsg" {
  value = data.azurerm_resources.example.resources[0].name
}

resource "azurerm_network_security_rule" "example" {
  name                        = "example"
  priority                    = 100
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_kubernetes_cluster.example.node_resource_group
  network_security_group_name = data.azurerm_resources.example.resources[0].name
}

... and then add all of your rules in the same way.
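
If there are several rules to add, a for_each over a map keeps them in one resource block. A minimal sketch, assuming hypothetical rule names and ports (adjust priorities, ports, and directions to your needs):

locals {
  # Hypothetical rule definitions, for illustration only.
  extra_nsg_rules = {
    nginx_liveness = { priority = 100, port = "10254" }
    node_metrics   = { priority = 110, port = "9100" }
  }
}

resource "azurerm_network_security_rule" "extra" {
  for_each = local.extra_nsg_rules

  name                        = each.key
  priority                    = each.value.priority
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = each.value.port
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_kubernetes_cluster.example.node_resource_group
  network_security_group_name = data.azurerm_resources.example.resources[0].name
}

Note that for_each on resources requires Terraform 0.12.6 or later; with the 0.11-style syntax shown in the question you would instead repeat one azurerm_network_security_rule resource per rule.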

In general it is better, and the advised approach, to rely on the automated way Azure Kubernetes Service reacts to the creation of Kubernetes Services, rather than to add more Terraform (although the Kubernetes YAML below can also be used with the Kubernetes Terraform provider):

A network security group filters traffic for VMs, such as the AKS nodes. As you create Services, such as a LoadBalancer, the Azure platform automatically configures any network security group rules that are needed. Don't manually configure network security group rules to filter traffic for pods in an AKS cluster. Define any required ports and forwarding as part of your Kubernetes Service manifests, and let the Azure platform create or update the appropriate rules. You can also use network policies, as discussed in the next section, to automatically apply traffic filter rules to pods.

It should be possible within the current setup by simply creating a Kubernetes Service for the ports you want to expose.

For instance, when I deploy an ingress controller, the creation of the Kubernetes Service triggers the creation of an IP address/load balancer along with the corresponding NSG rules:

apiVersion: v1
kind: Service
metadata:
  name: ingress-ingress-nginx-controller
  namespace: ingress
spec:
  loadBalancerSourceRanges:
  - 8.8.8.8/32
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer

By creating a Kubernetes Service of type LoadBalancer that maps to the desired pod port and has loadBalancerSourceRanges specified, a similar setup can be defined for your own destination.

apiVersion: v1
kind: Service
metadata:
  name: mycustomservice
  namespace: myownnamespace
spec:
  loadBalancerSourceRanges:
  - 8.8.8.8/32 # your source IPs
  - 9.9.9.9/32
  ports:
  - name: myaccessport
    port: 777
    protocol: TCP
    targetPort: mydestinationport
  selector:
    app.kubernetes.io/name: myapp
  type: LoadBalancer

See also the related issue in the azurerm provider's GitHub repository.

A little late here, but I just came across this issue. So, for anyone still looking for a solution, this is what I ended up doing to get the AKS NSG name:

Add this to the *.tf file that provisions your AKS cluster:

resource "azurerm_network_security_rule" "http" {
  name                        = "YOUR_NAME"
  priority                    = 102
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefixes     = "${var.ips}"
  destination_address_prefix  = "${azurerm_public_ip.ingress.ip_address}"
  resource_group_name         = "${azurerm_kubernetes_cluster.test.node_resource_group}"
  network_security_group_name = "${data.external.aks_nsg_name.result.output}"

  depends_on = ["azurerm_resource_group.test"]
}

# get the NSG name
data "external" "aks_nsg_name" {
  program = [
    "bash",
    "${path.root}/scripts/aks_nsg_name.sh"
  ]

  depends_on = [azurerm_resource_group.test]
}

Create aks_nsg_name.sh in your project and add the following:

#!/bin/bash
# Pick the first NSG whose name contains "aks-agentpool" and emit it as JSON
# so Terraform's external data source can consume it.
OUTPUT=$(az network nsg list --query [].name -o tsv | grep aks-agentpool | head -n 1)
jq -n --arg output "$OUTPUT" '{"output":$output}'
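
An optional refinement (untested here, just a sketch): pass the cluster's node resource group into the script through the external data source's query argument, so the lookup is scoped to this cluster rather than to every NSG in the subscription. Terraform sends the query map to the program as a JSON object on stdin, so the script would read it from there (for example with jq) and pass it to az network nsg list -g.

data "external" "aks_nsg_name" {
  program = [
    "bash",
    "${path.root}/scripts/aks_nsg_name.sh"
  ]

  # Delivered to the script as a JSON object on stdin.
  query = {
    resource_group = azurerm_kubernetes_cluster.test.node_resource_group
  }

  depends_on = [azurerm_resource_group.test]
}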

Your AKS cluster is added to an azurerm_resource_group, which I assume you provisioned using Terraform. If so, you can add a custom azurerm_network_security_group with any number of azurerm_network_security_rule resources to that resource group, as detailed here.

Example:

resource "azurerm_resource_group" "test" {
  name     = "acceptanceTestResourceGroup1"
  location = "West US"
}

resource "azurerm_network_security_group" "test" {
  name                = "acceptanceTestSecurityGroup1"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_network_security_rule" "test" {
  name                        = "test123"
  priority                    = 100
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "${azurerm_resource_group.test.name}"
  network_security_group_name = "${azurerm_network_security_group.test.name}"
}

Unfortunately, the name parameter is required on network security group data sources, and wildcards do not seem to be supported, or else that would have been an option as well.不幸的是,网络安全组数据源需要name参数,并且似乎不支持通配符,否则这也是一个选项。
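
For reference, this is roughly what such a lookup looks like; the exact name has to be known in advance (the value below is only an illustration copied from the question), which is precisely the problem with the auto-generated aks-agentpool-<number>-nsg name:

data "azurerm_network_security_group" "aks" {
  # A literal, exact name is required; wildcards are not accepted.
  name                = "aks-agentpool-33577837-nsg"
  resource_group_name = azurerm_kubernetes_cluster.k8s.node_resource_group
}

This is why the azurerm_resources and external data source approaches above, which list resources instead of looking one up by name, are the practical workarounds.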
