Challenge Lab: Automating Infrastructure on Google Cloud with Terraform

Task 1. Create the configuration files

  1. In Cloud Shell, create your Terraform configuration files and a directory structure that resembles the following:
main.tf
variables.tf
modules/
├── instances
│   ├── instances.tf
│   ├── outputs.tf
│   └── variables.tf
└── storage
    ├── storage.tf
    ├── outputs.tf
    └── variables.tf

touch main.tf variables.tf
mkdir -p modules/instances
mkdir -p modules/storage
touch modules/instances/instances.tf
touch modules/instances/outputs.tf
touch modules/instances/variables.tf
touch modules/storage/storage.tf
touch modules/storage/outputs.tf
touch modules/storage/variables.tf
  2. Fill out the variables.tf files in the root directory and within the modules. Add three variables to each file: region, zone, and project_id. For their default values, use us-central1, us-central1-a, and your Google Cloud Project ID, respectively. (A quick way to copy the finished file into both modules is shown after the example below.)
variables.tf:

variable "region" {
  type    = string
  default = "us-central1"
}

variable "project_id" {
  type    = string
  default = "????"
}

variable "zone" {
  type    = string
  default = "us-central1-a"
}
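The modules' variables.tf files take the same three variables. Assuming you keep identical defaults in every copy, one quick way to fill them is to copy the root file into each module:

cp variables.tf modules/instances/variables.tf
cp variables.tf modules/storage/variables.tf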
  3. Add the Terraform block and the Google Provider to the main.tf file. Verify the zone argument is added along with the project and region arguments in the Google Provider block.
main.tf:

terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      version = "3.55.0"
    }
  }
}

provider "google" {
  project     = var.project_id
  region      = var.region
  zone        = var.zone 
}

  4. Initialize Terraform.

terraform init

Task 2. Import infrastructure

  1. In the Google Cloud Console, on the Navigation menu, click Compute Engine > VM Instances. Two instances named tf-instance-1 and tf-instance-2 have already been created for you.

  2. Import the existing instances into the instances module. To do this, you will need to follow these steps:

  • First, add the module reference to the main.tf file, then re-initialize Terraform.
main.tf:
module "instances" {
  source     = "./modules/instances"
}
  • Next, write the resource configurations in the instances.tf file to match the pre-existing instances.

    Name your instances tf-instance-1 and tf-instance-2. For the purposes of this lab, the resource configuration should be as minimal as possible. To accomplish this, you will only need to include the following additional arguments in your configuration: machine_type, boot_disk, network_interface, metadata_startup_script, and allow_stopping_for_update. For the last two arguments, use the following configuration, as this ensures the instances won't need to be recreated:

metadata_startup_script = <<-EOT
        #!/bin/bash
    EOT
allow_stopping_for_update = true
modules/instances/instances.tf:

resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-1"
  zone         = var.zone
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }
  network_interface {
     network = "default"
  }
  metadata_startup_script = <<-EOT
        #!/bin/bash
    EOT
  allow_stopping_for_update = true
}

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-1"
  zone         = var.zone
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }
  network_interface {
    network = "default"
  }
  metadata_startup_script = <<-EOT
        #!/bin/bash
    EOT
  allow_stopping_for_update = true
}
  • Once you have written the resource configurations within the module, use the terraform import command to import them into your instances module.
terraform init # re-initialize so Terraform picks up the instances module
terraform import module.instances.google_compute_instance.tf-instance-1 <Instance ID - 1>
terraform import module.instances.google_compute_instance.tf-instance-2 <Instance ID - 2>
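If you need the numeric instance IDs for the import commands, one way to look them up (assuming the default zone us-central1-a) is:

gcloud compute instances describe tf-instance-1 --zone=us-central1-a --format='value(id)'
gcloud compute instances describe tf-instance-2 --zone=us-central1-a --format='value(id)'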
  3. Apply your changes. Note that since you did not fill out all of the arguments in the entire configuration, the apply will update the instances in-place. This is fine for lab purposes, but in a production environment, you should make sure to fill out all of the arguments correctly before importing.
terraform plan
terraform apply
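After the apply, a second terraform plan should report no changes; if it still wants to modify the instances, the resource arguments in instances.tf do not yet match the imported state.

terraform plan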

Task 3. Configure a remote backend

  1. Create a Cloud Storage bucket resource inside the storage module. For the bucket name, use the name given in your lab (the example below uses the project ID). For the rest of the arguments, you can simply use:
    location = "US"
    force_destroy = true
    uniform_bucket_level_access = true

Note: You can optionally add output values inside of the outputs.tf file.

modules/storage/storage.tf:

resource "google_storage_bucket" "storage-bucket" {
  name          = var.project_id
  location      = "US"
  force_destroy = true
  uniform_bucket_level_access = true
}
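
Optionally, modules/storage/outputs.tf can expose the bucket name. A minimal sketch, assuming the resource label storage-bucket used above:

output "bucket_name" {
  value = google_storage_bucket.storage-bucket.name
}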

module "storage" {
  source     = "./modules/storage"
}
terraform init # import storage module
terraform apply
  2. Add a local backend to your main.tf file:
main.tf:

terraform {
  backend "local" {
    path = "terraform/state/terraform.tfstate"
  }
}
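Re-initialize so Terraform switches to the local backend; if it asks whether to copy the existing state to the new backend, answer yes.

terraform init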
  3. Configure this storage bucket as the remote backend inside the main.tf file. Be sure to use the prefix terraform/state so it can be graded successfully.
main.tf:

terraform {
  ...
  backend "gcs" {
    bucket = "<FILL IN PROJECT ID>"
    prefix = "terraform/state"
  }
}

  4. If you've written the configuration correctly, upon init, Terraform will ask whether you want to copy the existing state data to the new backend. Type yes at the prompt.
terraform init
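To confirm the migration, list the bucket; the state object should appear under the terraform/state prefix (replace <PROJECT_ID> with your bucket name):

gsutil ls gs://<PROJECT_ID>/terraform/state/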

Task 4. Modify and update infrastructure

  • Navigate to the instances module and modify the tf-instance-1 resource to use an n1-standard-2 machine type.
  • Modify the tf-instance-2 resource to use an n1-standard-2 machine type.
  • Add a third instance resource and name it . For this third resource, use an n1-standard-2 machine type.
modules/instances/instances.tf:

In the existing tf-instance-1 and tf-instance-2 resource blocks, change machine_type to "n1-standard-2", then append the third instance (shown here with the placeholder name tf-instance-3; use the instance name given in your lab):

resource "google_compute_instance" "tf-instance-3" {
  name         = "tf-instance-3"
  machine_type = "n1-standard-2"
  zone         = var.zone
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }
  network_interface {
    network = "default"
  }
  metadata_startup_script = <<-EOT
        #!/bin/bash
    EOT
  allow_stopping_for_update = true
}

  • Initialize Terraform and apply your changes.
terraform init
terraform apply

Task 5. Taint and destroy resources

  1. Taint the third instance, and then plan and apply your changes to recreate it.
terraform taint module.instances.google_compute_instance.tf-instance-3
terraform plan
terraform apply
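On Terraform 0.15.2 and later, terraform apply -replace is the recommended alternative to taint and forces the same recreation in a single step:

terraform apply -replace="module.instances.google_compute_instance.tf-instance-3"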
  2. Destroy the third instance by removing the resource from the configuration file. After removing it, initialize Terraform and apply the changes.
# remove the tf-instance-3 resource block from modules/instances/instances.tf
terraform init
terraform apply
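terraform state list can confirm the removal; module.instances.google_compute_instance.tf-instance-3 should no longer appear in the output.

terraform state list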

Task 6. Use a module from the Registry

  1. In the Terraform Registry, browse to the Network Module.
  2. Add this module to your main.tf file. Use the following configurations:
  • Use version 3.4.0 (different versions might cause compatibility errors).
  • Name the VPC , and use a global routing mode.
  • Specify 2 subnets in the us-central1 region, and name them subnet-01 and subnet-02. For the subnets arguments, you just need the Name, IP, and Region.
  • Use the IP 10.10.10.0/24 for subnet-01, and 10.10.20.0/24 for subnet-02.
  • You do not need any secondary ranges or routes associated with this VPC, so you can omit them from the configuration.
main.tf:
module "vpc" {
    source  = "terraform-google-modules/network/google"
    version = "~> 2.5.0"

    project_id   = var.project_id
    network_name = "terraform-vpc"
    routing_mode = "GLOBAL"

    subnets = [
        {
            subnet_name   = "subnet-01"
            subnet_ip     = "10.10.10.0/24"
            subnet_region = "us-central1"
        },
        {
            subnet_name   = "subnet-02"
            subnet_ip     = "10.10.20.0/24"
            subnet_region = "us-central1"
        }
    ]
}

  3. Once you've written the module configuration, initialize Terraform and run an apply to create the networks.
terraform init
terraform apply
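Optionally confirm that both subnets were created (this assumes the example network name terraform-vpc used above):

gcloud compute networks subnets list --regions=us-central1 --filter="network:terraform-vpc"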
  4. Next, navigate to the instances.tf file and update the configuration resources to connect tf-instance-1 to subnet-01 and tf-instance-2 to subnet-02.
modules/instances/instances.tf:
resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-2"
  zone         = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "terraform-vpc"
    subnetwork = "subnet-01"
  }
}

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-2"
  zone         = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "terraform-vpc"
    subnetwork = "subnet-02"
  }
}
terraform init
terraform apply
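One way to verify the new attachments (assuming the default zone us-central1-a):

gcloud compute instances describe tf-instance-1 --zone=us-central1-a --format='value(networkInterfaces[0].subnetwork)'
gcloud compute instances describe tf-instance-2 --zone=us-central1-a --format='value(networkInterfaces[0].subnetwork)'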

Task 7. Configure a firewall

  • Create a firewall rule resource in the main.tf file and name it tf-firewall. This firewall rule should allow ingress connections on TCP port 80 from all IP ranges (0.0.0.0/0), so be sure to add the source_ranges argument with that value. Initialize Terraform and apply your changes.

main.tf:
resource "google_compute_firewall" "tf-firewall" {
  name    = "tf-firewall"
  network = "projects/<PROJECT_ID>/global/networks/terraform-vpc"
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
  source_tags = ["web"]
  source_ranges = ["0.0.0.0/0"]
}
terraform init
terraform apply
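To check the rule after the apply:

gcloud compute firewall-rules describe tf-firewall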