
`Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB"` when creating kubernetes cluster with terraform

Hi, I am playing around with Kubernetes and Terraform in a Google Cloud free tier account (trying to use the free $300 credit). Here is my Terraform resource declaration; it is something very standard that I copied from the Terraform resource page. Nothing particularly strange here.

resource "google_container_cluster" "cluster" {
  name = "${var.cluster-name}-${terraform.workspace}"
  location = var.region
  initial_node_count = 1
  project = var.project-id
  remove_default_node_pool = true
}

resource "google_container_node_pool" "cluster_node_pool" {
  name       = "${var.cluster-name}-${terraform.workspace}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.cluster.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-medium"
    service_account = google_service_account.default.email
    oauth_scopes    = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

This Terraform snippet used to work fine. In order not to burn through the $300 too quickly, at the end of every day I would destroy the cluster with terraform destroy. However, one day the Kubernetes cluster creation just stopped working. Here is the error:

Error: googleapi: Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB": request requires '300.0' and is short '50.0'. project has a quota of '250.0' with '250.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=xxxxxx., forbidden

It looks like something didn't get cleaned up by all the terraform destroy runs, some quota usage built up, and now I am not able to create a cluster anymore. I am still able to create a cluster through the Google Cloud web interface (I tried only with Autopilot, in the same location). I am a bit puzzled about why this is happening. Is my assumption correct? Do I need to delete something that doesn't get deleted automatically by Terraform? If yes, why? Is there a way to fix this and be able to create the cluster with Terraform again?

Go to Compute Engine > Disks and check whether there are any disks in the specified region that are consuming the quota.

This error says that the request requires 300 GB of SSD and you have a quota of 250 GB in that region. This error generally occurs when the quota is exhausted. You can read more about the types of disk quota here. You can also request a quota increase if you want.

I am not able to understand why this request needs 300 GB of SSD, as I am not very familiar with Terraform. From the code, it seems that you are creating only one node. As per the Terraform docs, "disk_size_gb" defaults to 100 GB, so it should take only 100 GB. Try setting a smaller value for "disk_size_gb" in "node_config" and check if it helps.
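For example, here is a minimal sketch of that suggestion against the node pool from the question (the 50 GB value is only illustrative, not a recommendation; the 300 GB in the error is consistent with a regional node pool placing one 100 GB default boot disk in each of the region's three zones):

resource "google_container_node_pool" "cluster_node_pool" {
  name       = "${var.cluster-name}-${terraform.workspace}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.cluster.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-medium"
    # Boot disk size per node; defaults to 100 GB when omitted.
    # 50 GB is just an example value to stay under the regional SSD quota.
    disk_size_gb = 50

    service_account = google_service_account.default.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}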

I was able to fix this by creating the cluster in Autopilot mode using enable_autopilot = true rather than manually creating the node pool through Terraform, letting Google take care of that. However, I am afraid that I may have just swept the problem under the carpet, as the cluster may initially be created with a small disk and then get scaled up as needed.
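For reference, a minimal sketch of what that Autopilot variant could look like, assuming the same variables as in the question (with Autopilot, GKE manages the node pools, so the google_container_node_pool resource is dropped):

resource "google_container_cluster" "cluster" {
  name     = "${var.cluster-name}-${terraform.workspace}"
  location = var.region
  project  = var.project-id

  # Autopilot provisions and scales nodes (and their disks) automatically,
  # so no node pool or initial_node_count is declared here.
  enable_autopilot = true
}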

I had a similar issue that was resolved by moving my clusters to a different region. You can view the quotas for a region by replacing $PROJECT-ID$ with your project ID in https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=$PROJECT-ID$ and heading to that link.

If you filter the list to Persistent Disk SSD (GB), you should see a list of all the available regions along with their quotas.

[Screenshot: quota list filtered to Persistent Disk SSD (GB)]

Hope that helps.


Otherwise, your best bet is to request a quota increase for the region you desire.
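If you go the region-switch route with the question's Terraform, the smallest change is to point var.region at a region whose Persistent Disk SSD (GB) quota is not exhausted; a sketch (the region name is only an example, not a recommendation):

variable "region" {
  description = "Region for the GKE cluster; pick one with enough Persistent Disk SSD (GB) quota."
  default     = "us-east1" # example value only
}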

If you want to create a Kubernetes cluster using the Standard configuration, make sure you have selected a maximum of 2 nodes or fewer. You need to check "Node pool details". To change or check that, follow the steps below:

  1. Create Cluster
  2. Select Standard and click Configure
  3. Provide the cluster name, the location of your choice, and the other settings.
  4. On the left panel locate "NODE POOLS" and click on "default-pool"
  5. Locate or search the browser page for "Number of nodes". You will find a text box to set the number of nodes. In your case it must be set to more than 2 nodes, which is why you are getting the error (a Terraform equivalent is sketched after this list). The other option is to increase the limit for your desired region, which may need approval from Google as per the new policy.
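The equivalent knob in the question's Terraform is node_count on the node pool; a brief sketch, with the other arguments left as in the question:

resource "google_container_node_pool" "cluster_node_pool" {
  # ... name, location, cluster and node_config as in the question ...

  # For a regional location this count applies per zone, so each increment
  # adds one boot disk per zone against the SSD_TOTAL_GB quota.
  node_count = 1
}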
