
How to add an SSH key to a GCP instance using Terraform?

So I have a Terraform script that creates instances in Google Cloud Platform. I want my Terraform script to also add my SSH key to the instances it creates, so that I can provision them through SSH. Here is my current Terraform script.

#PROVIDER INFO
provider "google" {
  credentials = "${file("account.json")}"
  project     = "myProject"
  region      = "us-central1"
}


#MAKING CONSUL SERVERS
resource "google_compute_instance" "default" {
  count    =  3
  name     =  "a-consul${count.index}"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  disk {
    image = "ubuntu-1404-trusty-v20160627"
  }

  # Local SSD disk
  disk {
    type    = "local-ssd"
    scratch = true
  }

  network_interface {
    network = "myNetwork"
    access_config {}
  }
}

What do I have to add to this to have my Terraform script add my SSH key /Users/myUsername/.ssh/id_rsa.pub?

I think something like this should work:

  metadata = {
    ssh-keys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
  }

https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys describes the metadata mechanism, and I found this example at https://github.com/hashicorp/terraform/issues/6678
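For that snippet to work, the two referenced variables need to be declared somewhere; a minimal sketch of how they could look (the defaults are assumptions, reusing the key path from the question):

variable "gce_ssh_user" {
  description = "User the public key is registered for on the instance"
  default     = "myUsername"
}

variable "gce_ssh_pub_key_file" {
  description = "Path to the public key added to instance metadata"
  default     = "/Users/myUsername/.ssh/id_rsa.pub"
}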

Just for the record. As of 0.12 it seems the block should look like:

resource "google_compute_instance" "default" {
  # ...

  metadata = {
    ssh-keys = join("\n", [for user, key in var.ssh_keys : "${user}:${key}"])
  }

  # ...
}

(Note the = sign after the metadata token, and ssh-keys vs. sshKeys.)
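That for expression assumes var.ssh_keys is a map of user names to public keys; a minimal sketch of such a variable (the entries are placeholders, not real keys):

variable "ssh_keys" {
  type        = map(string)
  description = "Map of user name => public SSH key to add to instance metadata"
  default = {
    # hypothetical entries, for illustration only
    alice = "ssh-ed25519 AAAA... alice@example"
    bob   = "ssh-ed25519 AAAA... bob@example"
  }
}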

Here is a tested one.

  metadata {
    sshKeys = "${var.ssh_user}:${var.ssh_key} \n${var.ssh_user1}:${var.ssh_key1}"
  }

If you want multiple keys, you can use a heredoc like this:

  metadata = {
    "ssh-keys" = <<EOT
<user>:<key>
<user>:<key>
EOT
  }

I kept the somewhat odd formatting here that terraform fmt gave me.
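For a filled-in sketch of that heredoc (the user names and key files are hypothetical, shown only to illustrate the format), the keys can also be pulled in with file(); trimspace() avoids stray blank lines from trailing newlines in the .pub files:

  metadata = {
    "ssh-keys" = <<EOT
alice:${trimspace(file("alice.pub"))}
bob:${trimspace(file("bob.pub"))}
EOT
  }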

You can use the following

metadata = {
  ssh-keys = "username:${file("username.pub")}"
}

I was struggling to create an instance with an SSH key using Terraform, and this answer is tested and working as well.

Just updating for multiple keys in Terraform v0.15.4:

metadata = {
  ssh-keys = join("\n", [for key in var.ssh_keys : "${key.user}:${key.publickey}"])
}

And the corresponding variables:

variable "ssh_keys" {
  type = list(object({
    publickey = string
    user = string
  }))
  description = "list of public ssh keys that have access to the VM"
  default = [
      {
        user = "username"
        publickey = "ssh-rsa yourkeyabc username@PC"
      }
  ]
}

I have the following working for me: a single SSH key for all VMs.

resource "google_compute_project_metadata" "my_ssh_key" {
  metadata = {
    ssh-keys = <<EOF
      terakey:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqaF7TqtimTUtqLdZIspKjuTXXXXnkbW7N9TQBPXazu terakey
      
    EOF
  }
}
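If you want Terraform to manage only the ssh-keys entry rather than all project metadata, a possible alternative (a sketch, not part of the tested answer above) is the single-item resource, reusing the same key:

resource "google_compute_project_metadata_item" "my_ssh_key" {
  key   = "ssh-keys"
  value = "terakey:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqaF7TqtimTUtqLdZIspKjuTXXXXnkbW7N9TQBPXazu terakey"
}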

I tested the ways below of injecting an SSH public key into a Google Compute instance, and they are working for me.

  metadata = {
    ssh-keys = "${var.ssh_user}:${file("./gcp_instance_ssh_key.pub")}"
    # or:
    # ssh-keys = "${var.ssh_user}:${file(var.public_key_path)}"
    # or:
    # ssh-keys = "${var.ssh_user}:${file("${var.public_key_path}")}"
  }

variable "public_key_path" {
    default = "./gcp_instance_ssh_key.pub"   ##public key with path
}

Please note to use ssh-keys, not ssh_keys (with an underscore).
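The var.ssh_user referenced above also needs a declaration; a minimal sketch (the default user name is an assumption):

variable "ssh_user" {
  default = "ubuntu" # the user the key should be registered for
}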

First, you'll need a compute instance:

resource "google_compute_instance" "website_server" {
  name                      = "webserver"
  description               = "Web Server"
  machine_type              = "f1-micro"
  allow_stopping_for_update = true
  deletion_protection       = false

  tags = ["webserver-instance"]

  shielded_instance_config {
    enable_secure_boot          = true
    enable_vtpm                 = true
    enable_integrity_monitoring = true
  }

  scheduling {
    provisioning_model  = "STANDARD"
    on_host_maintenance = "TERMINATE"
    automatic_restart   = true
  }

  boot_disk {
    mode        = "READ_WRITE"
    auto_delete = true
    initialize_params {
      image = "ubuntu-minimal-2204-jammy-v20220816"
      type  = "pd-balanced"
    }
  }

  network_interface {
    network = "default"

    access_config {
      network_tier = "PREMIUM"
    }
  }

  metadata = {
    ssh-keys               = "${var.ssh_user}:${local_file.public_key.content}"
    block-project-ssh-keys = true
  }

  labels = {
    terraform = "true"
    purpose   = "host-static-files"
  }

  service_account {
    # Custom service account with restricted permissions
    email  = data.google_service_account.myaccount.email
    scopes = ["compute-rw"]
  }

}

Note that the ssh-keys field in the metadata needs the public key data in "authorized keys" format, i.e., the OpenSSH public key. This is similar to what pbcopy < ~/.ssh/id_ed25519.pub copies.

You'll need a firewall rule to allow SSH on (default) port 22:

resource "google_compute_firewall" "webserver_ssh" {
  name    = "webserver-firewall"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  target_tags   = ["webserver-instance"]
  source_ranges = ["0.0.0.0/0"]
}

Your public and private keys can be ephemeral to make things more seamless:

resource "tls_private_key" "webserver_access" {
  algorithm = "ED25519"
}

resource "local_file" "public_key" {
  filename        = "server_public_openssh"
  content         = trimspace(tls_private_key.webserver_access.public_key_openssh)
  file_permission = "0400"
}

resource "local_sensitive_file" "private_key" {
  filename = "server_private_openssh"
  # IMPORTANT: Newline is required at end of open SSH private key file
  content         = tls_private_key.webserver_access.private_key_openssh
  file_permission = "0400"
}

And finally, to login you would need a connection string based on:

output "instance_connection_string" {
  description = "Command to connect to the compute instance"
  value       = "ssh -i ${local_sensitive_file.private_key.filename} ${var.ssh_user}@${google_compute_instance.website_server.network_interface.0.access_config.0.nat_ip} ${var.host_check} ${var.ignore_known_hosts}"
  sensitive   = false
}

where the variable file could look like:

variable "ssh_user" {
  type        = string
  description = "SSH user for compute instance"
  default     = "myusername"
  sensitive   = false
}

variable "host_check" {
  type        = string
  description = "Dont add private key to known_hosts"
  default     = "-o StrictHostKeyChecking=no"
  sensitive   = false
}

variable "ignore_known_hosts" {
  type        = string
  description = "Ignore (many) keys stored in the ssh-agent; use explicitly declared keys"
  default     = "-o IdentitiesOnly=yes"
  sensitive   = false
}
