
Azure DevOps pipeline terraform error - 403 when attempting role assignment

I'm attempting to deploy an AKS cluster and a role assignment for the system-assigned managed identity that is created via Terraform, but I'm getting a 403 response:

azurerm_role_assignment.acrpull_role: Creating...
╷
│ Error: authorization.RoleAssignmentsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '626eac40-c9dd-44cc-a528-3c3d3e069e85' with object id '626eac40-c9dd-44cc-a528-3c3d3e069e85' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/7b73e02c-dbff-4eb7-9d73-e73a2a17e818/resourceGroups/myaks-rg/providers/Microsoft.ContainerRegistry/registries/aksmattcloudgurutest/providers/Microsoft.Authorization/roleAssignments/c144ad6d-946f-1898-635e-0d0d27ca2f1c' or the scope is invalid. If access was recently granted, please refresh your credentials."
│ 
│   with azurerm_role_assignment.acrpull_role,
│   on main.tf line 53, in resource "azurerm_role_assignment" "acrpull_role":
│   53: resource "azurerm_role_assignment" "acrpull_role" {
│ 
╵

This only occurs in an Azure DevOps pipeline. My pipeline looks like the following:

trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
  
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '1.0.7'

- task: TerraformCLI@0
  inputs:
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    backendType: 'azurerm'
    backendServiceArm: 'Matt Local Service Connection'
    ensureBackend: true
    backendAzureRmResourceGroupName: 'tfstate'
    backendAzureRmResourceGroupLocation: 'UK South'
    backendAzureRmStorageAccountName: 'tfstateq7nqv'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'terraform.tfstate'
    allowTelemetryCollection: true

- task: TerraformCLI@0
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    environmentServiceName: 'Matt Local Service Connection'
    allowTelemetryCollection: true

- task: TerraformCLI@0
  inputs:
    command: 'validate'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    allowTelemetryCollection: true

- task: TerraformCLI@0
  inputs:
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    environmentServiceName: 'Matt Local Service Connection'
    allowTelemetryCollection: false

I'm using the Terraform tasks from here - https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform

This is my Terraform file:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

provider "azurerm" {
   features {}
}

resource "azurerm_resource_group" "TerraformCluster" {
  name     = "terraform-cluster"
  location = "UK South"
}

resource "azurerm_kubernetes_cluster" "TerraformClusterAKS" {
  name                = "terraform-cluster-aks1"
  location            = azurerm_resource_group.TerraformCluster.location
  resource_group_name = azurerm_resource_group.TerraformCluster.name
  dns_prefix          = "terraform-cluster-aks1"

  network_profile {
    network_plugin = "azure"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

data "azurerm_container_registry" "this" {
  depends_on = [
    azurerm_kubernetes_cluster.TerraformClusterAKS
  ]
  provider            = azurerm
  name                = "aksmattcloudgurutest"
  resource_group_name = "myaks-rg"
}

resource "azurerm_role_assignment" "acrpull_role" {
  scope                = data.azurerm_container_registry.this.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.TerraformClusterAKS.identity[0].principal_id
}

Where am I going wrong here?

The Service Principal in AAD associated with your ADO Service Connection ('Matt Local Service Connection') will need to be assigned the Owner role at the scope of the resource, or above (depending on where else you will be assigning permissions). You can read details about the various roles here. The two most commonly used roles are Owner and Contributor; the key difference is that Owner allows managing role assignments.

As part of this piece of work, you should also familiarize yourself with the principle of least privilege (if you do not already know about it). How it applies in this case: if the Service Principal only needs Owner at the resource level, don't assign it Owner at the resource group or subscription level just because that is more convenient. You can always widen the scope later, but it is much harder to undo the damage from an overly permissive role assignment (assuming a malicious or inexperienced actor) after it has been exploited.
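The one-off grant described above has to be made by an identity that already holds Owner (or equivalent) at or above the target scope, not by the pipeline itself. A minimal Terraform sketch, assuming you know the object id of the service principal behind 'Matt Local Service Connection' (the variable name here is hypothetical):

```hcl
# Hypothetical variable; set it to the object id of the service
# principal behind the 'Matt Local Service Connection' connection.
variable "pipeline_sp_object_id" {
  type = string
}

# Grant Owner only at the scope of the registry the pipeline needs to
# assign roles on, following the least-privilege guidance above.
resource "azurerm_role_assignment" "pipeline_sp_owner" {
  scope                = data.azurerm_container_registry.this.id
  role_definition_name = "Owner"
  principal_id         = var.pipeline_sp_object_id
}
```

Applying this from the same pipeline is a chicken-and-egg problem, so it is typically run once by a subscription administrator (via Terraform with their own credentials, the portal, or `az role assignment create`).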

I tried everything to get this existing storage service to re-connect to the Azure DevOps pipeline to enable Terraform deployments.

Attempted and did not work: breaking the lease on the tf state, removing the tf state, updating the lease on the tfstate, running inline PowerShell and Bash commands in ADO to purge Terraform, re-installing the Terraform plugin, etc.

What worked: creating a new storage account with a new storage container and a new SAS token.

This overcame the 403 Forbidden error when Azure DevOps accessed the blob containing the tfstate in ADLS for Terraform deployments. It does not explain the how or the why: access controls / IAM / access policies did not change. Tearing down and recreating the storage containing the tfstate, with exactly the same settings but under a different storage account name, worked.
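For reference, the replacement state store can itself be described in Terraform. A minimal sketch with placeholder names, applied from a separate working directory that uses local state (the pipeline's `backendAzureRmStorageAccountName` and `backendAzureRmContainerName` inputs would then point at the new account):

```hcl
# Placeholder names; the storage account name must be globally unique
# and 3-24 lowercase letters/digits.
resource "azurerm_storage_account" "tfstate" {
  name                     = "tfstatereplacement1"
  resource_group_name      = "tfstate"
  location                 = "UK South"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}
```

The old state file, if still readable, can then be copied into the new container before re-running `init` against the new backend.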

