Terraform

Terraform files can be written either in the native Terraform (HCL) syntax or in JSON format (including variable definition files).

Autocomplete for bash or zsh (run the command below, then open a new shell session):

$ terraform -install-autocomplete

General

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"
}
  • terraform block lists the providers to download from the Terraform Registry. Provider requirements.
  • provider block configures the named provider; with profile = "default" the AWS provider reads credentials from the default profile in the ~/.aws/credentials file
  • resource block defines a piece of infrastructure; the first label is the provider-specific resource type, the second is the resource name; the provider documentation lists the required and optional attributes for each resource type

Workflow

init

Initializes the working directory: downloads the provider plugins defined in the configuration, sets up the backend, and installs child modules.


plan

Performs an implicit validate and refresh (unless explicitly disabled) and creates an execution plan. The -out option writes the plan to a file, which can later be passed to the apply command to perform exactly those changes. The -destroy option generates a destroy plan.
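A minimal sketch of saving a plan and applying exactly that plan (the file name tfplan is arbitrary):

$ terraform plan -out=tfplan
$ terraform apply tfplan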

The -target option focuses the operation on a subset of resources. It accepts a resource address (if the resource uses count and no explicit index is given, it applies to all instances) or a module path (applies to all resources in the module). It is recommended to split large configurations into smaller ones that can be applied independently; use -target only for debugging, recovering from mistakes, etc.
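For example (the addresses below are hypothetical; the flag can be repeated):

$ terraform plan -target=aws_instance.example -target=module.network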


apply

Creates the defined resources; upon creation a terraform.tfstate file is written that records all properties of the created resources, so Terraform can manage or destroy them later (this file is shared for collaboration). Inspect the current state with terraform show. The terraform state command (with its subcommands) is used for advanced state management.

Introduced in 0.15.2: the -replace flag (also available for terraform plan) takes a resource address as an argument and replaces, or plans the replacement of, that resource.
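For example (the resource address is hypothetical):

$ terraform apply -replace=aws_instance.example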


destroy

Removes all resources defined in the configuration. If the -target option is used, it also destroys all resources that depend on the specified target(s).


other commands

terraform fmt (fixes formatting) and terraform validate (checks syntax, attributes, value types, etc.) ensure the configuration is easy to read and valid. validate also catches provider-specific errors, but does not look into the state file. Validation requires an initialized working directory with all referenced plugins and modules installed; to initialize a working directory for validation only, run terraform init -backend=false (validation does not need any state file references). terraform plan includes an implied validation check. A combined example follows the option list below.

terraform fmt options:

  • -diff - display diffs of formatting changes
  • -recursive - process sub-directories as well (by default only current directory is processed)
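A typical formatting/validation sequence using the commands above (a sketch; -backend=false skips backend initialization as described earlier):

$ terraform fmt -recursive -diff
$ terraform init -backend=false
$ terraform validate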

Settings

Terraform settings are declared in the terraform block and can contain the following information (only constant values can be used):

required_version of Terraform accepts a version constraint string. Each child module can specify its own version requirement.

terraform {
  required_version = "~> 0.14"
  # enables "example" feature
  experiments = [example]
}

State

State can be inspected with the terraform state list command, which outputs the list of resource addresses; an individual resource can then be inspected with terraform state show ADDRESS. The show command has a -json option, which outputs in that format so other tools, like jq, can be used to run queries.

terraform state subcommands:

# List objects in state data
$ terraform state list

# Output state to stdout
$ terraform state pull

A visual representation can be produced with the terraform graph command, which outputs the infrastructure in DOT syntax; the output can be pasted into webgraphviz to get a visual graph.
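Assuming Graphviz is installed locally, the DOT output can also be rendered directly:

$ terraform graph | dot -Tpng > graph.png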

terraform.tfstate is a JSON file that represents the current state of the infrastructure managed by Terraform. It can be stored remotely (AWS, Azure, Terraform Cloud, NFS) using a backend block for team collaboration. terraform.tfstate.backup is created on subsequent runs in case the new state file gets corrupted.

.terraform.tfstate.lock.info is a lock info file that prevents concurrent runs on the configuration; it is created on locking and deleted on unlocking.

State can also be used as a data source. The requested information must be exposed as an output in the source configuration and can then be accessed as data.terraform_remote_state.name.outputs.property.

data "terraform_remote_state" "name" {
  # specify backend type
  backend = "consul"

  # specify connection details
  config = {
    path    = var.path
    address = var.consul_address
    scheme  = var.scheme
  }
}

Sensitive data is stored as plain text in the state file, so it is up to the backend to ensure encryption at rest.

terraform refresh is used to update the state file. It does not modify the infrastructure, only the state file. plan and apply call refresh implicitly, unless -refresh=false is specified.
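For example:

# Sync the state file with the real infrastructure
$ terraform refresh

# Plan without the implicit refresh
$ terraform plan -refresh=false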

Backend

The backend determines where state (terraform.tfstate and .terraform.tfstate.lock.info) is stored and where operations are performed. The default backend is local, meaning both state and operations stay on the local machine. The remote backend keeps both remote. All other (standard) backends store state remotely, but operations are performed locally through the Terraform CLI.

For the AWS backend Terraform uses S3 (for state) and DynamoDB (for locking data).
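A fuller S3 backend sketch (bucket, key and table names are hypothetical):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # hypothetical bucket name
    key            = "prod/terraform.tfstate" # path to the state object
    region         = "us-east-2"
    dynamodb_table = "terraform-locks"        # hypothetical lock table
    encrypt        = true
  }
}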

A configuration can specify only one backend. The backend block cannot refer to named values (locals, variables, data sources). Whenever the backend block changes, terraform init must be run. When changing backends, Terraform prompts to migrate the existing state to the new backend. The initialization process creates a backup of the state, but be sure to make an extra one yourself.

terraform {
  backend "s3" {
    region = "us-east-2"
  }
}

Partial configuration is when not all data is specified in the backend block and the rest must be passed by other means. If nothing else is provided, Terraform prompts for the data interactively (only for required values).

terraform init options:

  • -backend-config=PATH - specify file
  • -backend-config="KEY=VALUE" - specify key/value pairs

Config file (e.g. for the consul backend):

address = "demo.consul.io"
path    = "example_app/terraform_state"
scheme  = "https"

Key/value pairs on the command line (e.g. for the s3 backend):

$ terraform init \
    -backend-config="bucket=bucket_name" \
    -backend-config="key=path/to/file" \
    -backend-config="dynamodb_table=table_name" \
    -backend-config="access_key=value" \
    -backend-config="secret_key=value"

If supported by the backend, Terraform locks the state file during operations that could write to it. force-unlock can be used to manually unlock the state if unlocking fails. Pass the unique LOCK_ID that Terraform outputs when unlocking fails (this ensures the same lock is targeted).

$ terraform force-unlock LOCK_ID

Workspace

# Create a workspace
$ terraform workspace new NAME

# List workspaces
$ terraform workspace list

Terraform starts in a default workspace named default, which can't be deleted. The name of the current workspace is available in the terraform.workspace variable.

Workspaces isolate separate state files and organize them in a terraform.tfstate.d directory (when used locally). Remote backends generally append the workspace name to the state file name or path.

Workspaces are a convenient way to switch between multiple instances of a single configuration. They are not suitable when strong separation is required (different accounts, backends, etc.). For decomposition purposes it is also better to use re-usable modules. A minimal usage sketch follows.
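A sketch of switching workspaces and branching on the workspace name (the resource and instance counts are hypothetical):

$ terraform workspace select dev

resource "aws_instance" "example" {
  # fewer instances outside the default workspace
  count = terraform.workspace == "default" ? 3 : 1

  ami           = "ami-830c94e3"
  instance_type = "t2.micro"
}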

Provider

Provider's section in registry

Terraform can specify multiple provider configurations. Add alias to a provider block to be able to refer to a specific configuration when creating resources.

provider "aws" {
  # configs
  alias = "aws-1"
}

resource "aws_instance" "example" {
  # configs
  provider = aws.aws-1
}

Requirements

From version 0.14.0 Terraform supports a dependency lock file, .terraform.lock.hcl, which records the specific provider versions selected for the configuration. terraform init -upgrade refreshes the selections to the latest versions that satisfy the configured constraints; it can also downgrade, if a new constraint requires that.

A provider is declared with a local name and an object that contains a unique source address and a version constraint. Local names must be unique and are module-specific. By default, resources use the provider whose local name matches the prefix of the resource type, f.e. aws for the aws_vpc resource.

Source address consists of 3 parts delimited by slashes:

  • hostname (optional) - hostname of Terraform registry, default is registry.terraform.io
  • namespace - namespace within the registry; for the public Terraform Registry and Terraform Cloud's private registry it represents an organization; elsewhere it could have a different meaning
  • type - short name, usually representing preferred local name (that resources would use by default)
terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "3.0.0"
    }

    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0.0"
    }
  }
}

Provisioner

Provisioners are used for post-deployment configuration. They can be local (execute on the local machine) or remote (execute on the remote machine they have been configured for), and can run on creation and/or destruction of an object. A resource can include multiple provisioners, which execute in the order they appear in the configuration. If a provisioner fails, Terraform does not destroy the object; it informs the user of the error and by default marks the resource as tainted. on_failure can be set to continue, to ignore provisioner failures (so Terraform itself does not fail), or fail, which is the default.

A provisioner needs a connection block that defines how the provisioner connects to the resource; it can reside inside the resource or inside the provisioner block.

A provisioner can refer to its parent resource using the self object; all attributes are accessible.

If a provisioner makes use of sensitive variables or output values, the log output of its execution is suppressed.

If when = destroy is specified, the provisioner runs when the resource is destroyed (before destruction). It will not run if the resource is marked as tainted, or if the resource block is removed completely from the configuration; in the latter case, set count to 0 and apply the change first, then remove the resource block.
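A minimal sketch of a destroy-time provisioner (the command itself is just an illustration):

resource "aws_instance" "example" {
  # ...

  provisioner "local-exec" {
    when    = destroy
    command = "echo 'Destroying instance ${self.id}'"
  }
}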

Common provisioners:

  • file - copy single file or contents of a directory
  • local-exec - run commands on local machine
  • remote-exec - run commands on remote resource
resource "aws_instance" "server" {
  # bla-bla

  connection {
    type        = "ssh"
    host        = self.public_ip # self is reserved variable for current resource
    user        = "ubuntu" # AMI specific
    private_key = file("path/to/private_key.pem")
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt update",
      "sudo apt install nginx -y",
      "sudo service nginx restart"
    ]
  }
}

Most cloud providers support cloud-init: a script can be passed to the VM's initialization section (user_data in AWS) to run commands after creation.
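A sketch of passing a startup script via user_data (the script path is hypothetical; see also the templatefile example in the Templates section):

resource "aws_instance" "example" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  # executed by cloud-init on first boot
  user_data = file("${path.module}/init.sh")
}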

Modules

Typical module structure; no file is required or has special meaning:

  • main.tf
  • variables.tf
  • output.tf
  • README.md (autogenerated?)

Modules provide a way to couple logical resources together. By default everything is executed in the root module. It is possible to use remote modules, version them, and set up separate provider blocks. registry.terraform.io is the module registry. Module sources.

By convention, resources inside a module are often named this, since they don't have a meaningful identity until they are deployed.

When using a module for the first time, run terraform init or terraform get to install it. New modules are installed in the .terraform/modules directory within the configuration's working directory. Local modules are symlinked, so changes are available immediately.

To access a module's output, refer to its label and the name of the output variable.

Modules inherit the provider block from the enclosing configuration, so it is recommended not to specify providers inside modules.

Terraform modules accept a version constraint string. Local modules do not support versioning.

To address the module itself or a particular instance of a module, use the module.module_name[module index] form. Multiple module keywords indicate nesting, f.e. module.foo[0].module.bar["a"].

Call local module and use module output:

module "web_server" {
  source = "./modules/servers" # local

  # passing variables
  web_ami     = "ami-something"
  server_name = "prod-web"
}

resource "aws_s3_bucket_object" {
  key = module.web_server.id
}

The Terraform Registry is integrated into Terraform, so registry modules can be called directly in the form <NAMESPACE>/<NAME>/<PROVIDER>.

module "consul" {
  source  = "hashicorp/consul/aws"
  version = "1.0.0"
}

Private registry (Terraform Cloud) uses the form <HOSTNAME>/<NAMESPACE>/<NAME>/<PROVIDER>, f.e. app.terraform.io/example_corp/vpc/aws.

Modules also support other meta-arguments: count, for_each, providers (otherwise the default, un-aliased provider is inherited from the calling module), and depends_on to mark explicit dependencies. A sketch of the providers meta-argument follows.
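A sketch of passing an aliased provider into a module (assumes an aliased aws provider named europe is configured; module path is hypothetical):

module "eu_network" {
  source = "./modules/network"

  providers = {
    aws = aws.europe
  }
}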

Variables

variable "region" {
  type    = string # optional; if omitted, any type is accepted
  default = "us-east-2"
}

provider "aws" {
  profile = "default"
  region  = var.region
}

A variable name (identifier) can contain letters, digits, underscores and hyphens; the first character must not be a digit. It also can't be one of: source, version, providers, count, for_each, lifecycle, depends_on, locals.

Arguments of a variable block

All arguments are optional.

Complex example:

variable "ec2_setting" {
  type = map(object({instance_type=string, monitoring=bool}))

  default = {
    "DEV" = {
      instance_type = "t2.micro",
      monitoring = false
    },
    "QA" = {
      instance_type = "t2.micro",
      monitoring = true
    }
  }
}

default

Sets the default value, making the variable optional. Must be a literal value; it can't reference other objects.


type

Sets type constraint. If not present, any value is accepted. Keyword any may be used to indicate that any type is acceptable. Variable types.

Keyword types:

  • string
  • number (integer and fractional)
  • bool

Terraform automatically converts number and boolean values to strings and vice-versa.

Type constructors:

  • list(<TYPE>) - sequence of values of the same type
  • set(<TYPE>) - unordered collection of unique values of the same type
  • map(<TYPE>) - values must be of the same type, keys are always strings
  • object({<ATTR NAME> = <TYPE>, ...}) - like a structure in C; value can be of any type
  • tuple([<TYPE>, ...]) - similar to object, but with stricter conversion rules

Changes in 0.15:

  • tolist([...]) replaced list()
  • tomap({...}) replaced map()

list and map keywords are shorthands for list(any) and map(any). It is recommended to use the latter in new code. Both colons (:) and equal signs (=) can be used between keys and their values. Quotes may be omitted on keys, unless the key starts with a number.

Set type to any to apply no constraints at all, and, thus, no implied conversions take place. Conversion of complex types.


description

Variable documentation, shows up on command line, if prompted for value.


validation

Custom validation rules.

Example:

variable "var" {
  type    = string
  default = "eu-east-1"

  validation {
    condition     = substr(var.region, 0, 3) == "eu-"
    error_message = "Please, enter EU region"
  }
}
  • condition must use the value of the variable and return true or false; it must not produce errors; if producing an error is the basis for validation, use the can function

     condition = can(regex("^ami-", var.image_id))
  • error_message - string that will show up on CLI in case of error

Condition for load balancer naming in AWS (at most 16 characters, only letters, digits and hyphens):

condition = length(var.resource_tags["project"]) <= 16 && length(regexall("[^a-zA-Z0-9-]", var.resource_tags["project"])) == 0

sensitive

Limits Terraform UI output. Values are still sent to providers as-is and can even be disclosed by a provider's error or progress messages (as of v0.14.*, see updates in v0.15.0).

variable "name" {
  sensitive = true
}

Sources of variables in root module

Precedence, from lowest to highest:

  • default
  • environment variables (with the TF_VAR_ prefix)
  • terraform.tfvars
  • terraform.tfvars.json
  • *.auto.tfvars
  • *.auto.tfvars.json
  • command line -var or -var-file options
  • command line prompt, if not specified by other means

environment variables

Prefix the environment variable name with TF_VAR_. On operating systems that support case-sensitive environment variable names, Terraform matches the name exactly, so it must be written with the same mix of upper and lower case letters as the declared variable.

$ export TF_VAR_region=us-east-2

tfvars

A variable definitions file (filename ending in .tfvars or .tfvars.json) sets multiple variables at once. Terraform automatically loads terraform.tfvars and *.auto.tfvars files (and their JSON counterparts); anything else must be provided with the -var-file flag. Example terraform.tfvars:

region = "us-east-2"

command line

Pass variable names and values with the -var option; it can be used any number of times in a single command.

$ terraform apply -var 'region=us-east-1'

If variables are not specified in any of the previous form, then Terraform will ask to input them interactively (UI).

String interpolation uses the ${ } syntax. Directives use the %{ } syntax: if / else / endif for conditionals, for / endfor for loops.

"${var.prefix}-app"
"%{ if var.prefix != "" }${var.prefix}-app%{ else }generic-app%{ endif }"

Named values

Filesystem and workspace

  • path.module - filesystem path where module is placed
  • path.root - filesystem path of the root module
  • path.cwd - filesystem path of current working directory; in normal Terraform use same as path.root, in advanced usage might differ
  • terraform.workspace - name of the currently selected workspace

Block-local values

  • count.index - in resources using count
  • each.key, each.value - in resources using for_each
  • self - in provisioner and connection block

locals

Assign a name to an expression so it can be reused inside the module where it was declared. Locals can reference variables, resource attributes, and even other locals. Access the value of a local with the local.<NAME> expression.

locals {
  # Ids for multiple sets of EC2 instances, merged together
  instance_ids = concat(aws_instance.blue.*.id, aws_instance.green.*.id)
}

locals {
  # Common tags to be assigned to all resources
  common_tags = {
    Service = local.service_name
    Owner   = local.owner
  }
}

resource "aws_instance" "example" {
  # ...

  tags = local.common_tags
}

Meta arguments

Meta arguments add logic to code interpretation by Terraform.

  • single meta argument, for example, count = 2 would create 2 instances

  • block meta arguments should go at the end of the block definition (the following example first creates the replacement resource before destroying the old one):

     resource "aws_instance" "one" {
       lifecycle {
         create_before_destroy = true
       }
     }

Style conventions: first meta arguments, then single arguments, then block arguments, and lastly block meta arguments (all logical blocks are separated by empty line).

provider

Specifies which provider configuration to use, overriding Terraform's default behavior of selecting one based on the resource type prefix. The value is an unquoted <PROVIDER>.<ALIAS> reference.

provider "google" {
  region = "us-central1"
}

# alternate configuration, whose alias is "europe"
provider "google" {
  alias  = "europe"
  region = "europe-west1"
}

resource "google_compute_instance" "example" {
  provider = google.europe

  # ...
}

lifecycle

Nested block that modifies resource behaviour.

  • create_before_destroy (bool) - changes the order of resource replacement; some resources cannot be created while the previous one still exists (f.e. with the same name), so keep this false for resources with such limitations
  • prevent_destroy (bool) - absolutely prevents destroying the resource
  • ignore_changes (list of attribute names) - resource attributes whose changes are ignored when deciding whether the resource needs to be updated or recreated; all ignores any change, so the resource will never be updated; cannot include the lifecycle block itself or other meta-arguments
resouce "aws_instance" "some" {
  lifecycle {
    create_before_destroy = true
    ignore_changes = ["ami", "user_data"]
  }
}

depends_on

Accepts a list of resources that must be created before the current one. It should be considered a last resort and be accompanied by a good explanation of why it was used. It is necessary only when a resource or module relies on another resource's behavior but does not access any of that resource's data.

resouce "aws_instance" "some" {
  depends_on = [aws_instance.db]
}

count

Creates multiple resources or module instances. Provides the count.index variable, which can be used inside the block to reference the current iteration (starting from zero). Accepts numeric expressions, but can't refer to other resource attributes.

resource "aws_instance" "example" {
  count         = 3

  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "example-${count.index}"
  }
}

for_each

Accepts a map or a set of strings and creates one instance of the resource for each member of the collection. Provides each.key and each.value (identical to the key, if the collection is a set) to access the element on each iteration. Lists and tuples are not implicitly converted to sets, so use the toset function (see the sketch below).
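A minimal sketch of converting a list to a set for for_each (the aws_iam_user resource and user names are just illustrative):

resource "aws_iam_user" "team" {
  for_each = toset(["alice", "bob"])

  # for a set, each.key and each.value are identical
  name = each.key
}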

variable "vpc_settings" {
  default {
    prod = "10.10.0.0/16"
    dev = "10.20.0.0/16"
  }
}

# the module's default value for vpc_cidr will be used
module "some" {
  count  = 2

  source = "./path/to/module"
  env      = "demo-${count.index + 1}"
}

module "other" {
  for_each = var.vpc_settings

  source   = "./path/to/module"
  env      = each.key
  vpc_cidr = each.value
}

Data sources

Data sources allow data to be fetched or computed for use elsewhere in the configuration. They can get information from outside Terraform or from another configuration, and support the same meta-arguments as managed resources (except lifecycle). Access attributes via the data.<TYPE>.<NAME>.<ATTRIBUTE> reference.

Local-only data sources operate entirely within Terraform and exist only temporarily during execution (re-calculated on every run), f.e. rendering templates, reading local files, and rendering AWS IAM policies.

data "aws_ami" "webserver_ami" {
  most_recent = true

  owners = ["self"]
  tags = {
    Name   = "Webserver"
	Deploy = "true"
  }
}

resource "aws_instance" "web" {
  ami = data.aws_ami.webserver_ami.id
}

http

data.http.my_ip.body; the following example returns the public IP of the caller.

data "http" "my_ip" {
  url = "http://ifconfig.me"
}
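A sketch of using the returned IP, e.g. to allow SSH only from the caller's address (the referenced security group is hypothetical; chomp strips the trailing newline):

resource "aws_security_group_rule" "ssh_from_me" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${chomp(data.http.my_ip.body)}/32"]
  security_group_id = aws_security_group.example.id
}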

consul

[to be continued]

Output

An output is used to show specific information upon apply or for later inspection via terraform output NAME.

output "ip" {
  value = aws_eip.ip.public_ip
}

To access output values from a module, they first need to be exposed by the module and can then be accessed (or re-exposed) by the root module:

output "dns" {
  value = module.module_name.output_variable
}

# Inspect the value after execution
$ terraform output VAR_NAME

# Get the raw value of an output value, can be used to pass to another command
$ terraform output -raw VAR_NAME
$ ssh terraform@$(terraform output -raw public_ip)

Expressions

Expressions are used to refer to or compute values within configuration.

splat

A shorthand for a for expression; f.e. [for item in var.list : item.id] can be written as var.list[*].id. Works only with lists, sets, and tuples; resources created with for_each are represented as a map, so splat expressions can't be used with them. When applied to an incompatible type, a splat returns either an empty tuple (if the value is null) or a single-element tuple (all other cases).

dynamic block

Generates repeating nested blocks inside resource, data source, provider, or provisioner blocks. It iterates over a complex value and creates one block for each element. The label of the dynamic block indicates the type of block to generate. The optional iterator argument sets the name of the temporary variable that represents the current element of the complex value; if omitted, the label of the dynamic block is used. The iterator has key (map key or list index; for a set it is identical to value and shouldn't be used) and value attributes. The nested content block defines the body of each generated block.

resource "aws_security_group" "prod" {
  name = "Dynamic security group"

  dynamic "ingress" {
    for_each = ["80", "443", "8080"]

    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

version constraint

version constraint string

  • modules
  • provider requirements
  • required_version setting in terraform block

A version constraint is a string literal with one or more conditions, separated by commas. Each condition consists of an operator and a version number. ~> (the pessimistic constraint operator) allows only the rightmost number component to be incremented. Other operators are =, !=, >, >=, <, <=. A version number is a series of numbers separated by periods, optionally with a suffix; a version with a suffix is a prerelease version and can only be matched exactly (with = or no operator).

version = ">= 1.2.0, < 2.0.0"

Functions

Docs

Terraform only supports built-in functions (user-defined functions are not available). Common categories:

  • numeric
  • string
  • collection
  • filesystem
  • IP network
  • date and time

Functions can be chained, meaning a function call can be an argument to another function call.
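For example, in terraform console:

> upper(trimspace("  web-1  "))
"WEB-1"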

examples

  • merge function is used to merge objects of the same type into one (for example, two maps - common tags from variables.tf and individual tag of the resource):

     resource "aws_instance" "webserver" {
       ami           = "ami-830c94e3"
       instance_type = "t2.micro"
    
       vpc_security_group_ids = [aws_security_group.wb_sg.id] # dependency
    
       tags = merge(var.common_tags, { Name = "my instance" })
     }
  • cidrsubnet function helps to break a range into multiple subnets; the second argument adds that number of bits to the mask, the third specifies which subnet to take; cidrhost returns the numbered host IP in the given range (the second argument specifies the host number, with zero being the network address):

     variable "network_info" {
       default = "10.1.0.0/16"
     }
    
     # Returns 10.1.0.0/24
     cidr_block = cidrsubnet(var.network_info, 8, 0)
    
     # Returns 10.1.0.5
     host_ip = cidrhost(var.network_info, 5)
  • lookup function is used to get a value from a map; a default value (3rd argument) is returned if the key doesn't exist:

     variable "amis" {
       type = "map"
    
       default = {
         "us-east-1" = "ami-1234"
         "us-west-1" = "ami-5678"
       }
     }
    
     ami = lookup(var.amis, "us-east-1", "default-value")

Tricks

Import

Importing brings existing resources under Terraform management. Add the corresponding resource blocks and run terraform plan to check the configuration against the imported state. Not all providers and resources support importing. You may have to set up some local variables, as the import command runs locally and does not have access to the backend. Terraformer can be used to automate some steps of the importing process.

# ADDR - configuration resource identifier
# f.e. module.vpc.aws_subnet.public[2]
# ID - provider specific resource identifier
# f.e. subnet-adj594df
$ terraform import [options] ADDR ID

Workflow

  • identify the existing infrastructure to be imported
  • import the infrastructure into the Terraform state
  • write configuration that matches that infrastructure
  • review the Terraform plan
  • apply the configuration to update the state
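A sketch of that workflow (resource name and instance ID are hypothetical):

# 1. Add a matching (initially minimal) resource block
resource "aws_instance" "legacy" {
  # fill in attributes after import, until terraform plan shows no changes
}

# 2. Import the existing instance into state
$ terraform import aws_instance.legacy i-0abcd1234efgh56789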

Templates

Template data source (the in-line syntax requires a double $ to escape variables meant for the template engine). To get the rendered value of the template, reference its rendered attribute - data.template_file.example.rendered. The template argument can also be read from an external template file.

data "template_file" "example" {
  count    = 2
  template = "$${var1}-$${current_count}"

  var = {
    var1 = var.some_string
    current_count = count.index
  }
}

The templatefile function works as a shortcut for the template data source with an external file. It takes a path to the template and a map of variables to pass as parameters.

resource "aws_instance" "webserver" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  vpc_security_group_ids = [aws_security_group.wb_sg.id] # dependency

  # tpl is common convention for this type of files
  # second argument is used for providing variables of various types
  user_data = templatefile("init.sh.tpl", {
    name     = "learner"
    chapters = ["terraform", "aws", "that's it really"]
  })
}

External file:

#!/bin/bash
yum -y update
yum -y install httpd
myip=`curl http://169.254.169.254/latest/meta-data/local-ipv4`
echo "<h2>My webserver at $myip</h2>" > /var/www/html/index.html
cat <<EOF > /var/www/html/index.html
<h2>My name is ${name}</h2>
Here is the list of chapters already covered:<br>
%{ for chapter in chapters ~}
- ${chapter}
%{ endfor ~}
EOF
sudo service httpd start
chkconfig httpd on

Taint

$ terraform taint [options] address

# Example, also can specify list of resources or even a module
# Use state list command to find exact address
$ terraform state list
$ terraform taint aws_instance.example

$ terraform untaint [options] address

Marks a resource for recreation. Dependent resources are not automatically tainted.


Environment variables

  • TF_IN_AUTOMATION - lets Terraform know it runs inside an automation framework, so it does not produce output that could break it; set to any value (conventionally TRUE)
  • TF_LOG - sets the log verbosity; one of TRACE, DEBUG, INFO, WARN or ERROR; TRACE is the most verbose and is used as the fallback if an unrecognized value is set; start with INFO or WARN
  • TF_LOG_PATH - path of a file to write logs to instead of the console
  • TF_INPUT - when set to FALSE (or 0), Terraform errors out instead of prompting when it encounters a situation requiring user input
  • TF_VAR_name = "value"
  • TF_CLI_ARGS = "-input=false"

Introduced in 0.15 (separate logging control):

  • TF_LOG_CORE
  • TF_LOG_PROVIDER

  • random_integer resource (it lives in a separate provider plugin)

  • retrieve a list of resource attributes (f.e. when count was used) and pass it to a parameter that expects a list of items

     resource "aws_elb" "main" {
       # configs
    
       instances = aws_instance.web[*].id
     }
  • refer to a resource that has more than one instance

     resource "aws_eip_association" "prod_web" {
       instance_id   = aws_instance.prod_web[0].id
       # instance_id   = aws_instance.prod_web.0.id # also available dot syntax
       allocation_id = aws_eip.prod_web.id
     }
    
     resource "aws_default_subnet" "default_az1" {
       availability_zone = "us-east-1a"
     }
    
     resource "aws_elb" "prod_web" {
       name            = "prod-web"
       instances       = aws_instance.prod_web[*].id
       subnets         = [aws_default_subnet.default_az1.id]
       security_groups = [aws_security_group.prod_web.id]
    
       listener {
         instance_port     = 80
         instance_protocol = "http"
         lb_port           = 80
         lb_protocol       = "http"
       }
     }
    
  • dependency example (webserver); from_port and to_port define a range, so if a single port is needed, both attributes get the same value:

     # user_data section is for bootstrapping
     # don't leave whitespace in front of the script lines
     resource "aws_instance" "webserver" {
       ami                    = "ami-830c94e3"
       instance_type          = "t2.micro"
    
       vpc_security_group_ids = [aws_security_group.wb_sg.id] # dependency
    
       user_data              = <<EOF
     #!/bin/bash
     yum -y update
     yum -y install httpd
     myip=`curl http://169.254.169.254/latest/meta-data/local-ipv4`
     echo "<h2>My webserver at $myip</h2>" > /var/www/html/index.html
     sudo service httpd start
     chkconfig httpd on
     EOF
    
     }
    
     resource "aws_security_group" "wb_sg" {
       name          = "WebServer Security Group"
    
       ingress {
         from_port   = 80
         to_port     = 80
         protocol    = "tcp"
         cidr_blocks = ["0.0.0.0/0"]
       }
    
       ingress {
         from_port   = 443
         to_port     = 443
         protocol    = "tcp"
         cidr_blocks = ["0.0.0.0/0"]
       }
    
       egress {
         from_port   = 0
         to_port     = 0
         protocol    = "-1" # any
         cidr_blocks = ["0.0.0.0/0"]
       }
     }
  • using external static files (provide shell script from previous example):

     resource "aws_instance" "webserver" {
       ami           = "ami-830c94e3"
       instance_type = "t2.micro"
    
       vpc_security_group_ids = [aws_security_group.wb_sg.id] # dependency
    
       user_data = file("user_data.sh") # relative to this configuration file
     }
  • decouple aws_eip into aws_eip and aws_eip_association, so that the EIP becomes independent of the instance:

     resource "aws_eip_association" "prod_web" {
       instance_id   = aws_instance.prod_web.id
       allocation_id = aws_eip.prod_web.id
     }
    
     resource "aws_eip" "prod_web" {
       tags = {
         Terraform = "true"
       }
     }
  • best practice - add a tag Terraform = "true" to any resource to easily distinguish Terraform-managed resources in the AWS console

  • to get information from the provider about various resources (not necessarily created by you), use data sources:

     data "aws_availability_zones" "available" {}
    
     output "aws_availability_zones" {
       value = data.aws_availability_zones.available.names
     }

    Other good aws data sources:

    • aws_caller_identity

    • aws_vpc

    • aws_vpcs

    • aws_ami - for automated instance creation, without looking up the exact AMI in a particular zone; to find the owner id go to the Amazon AMIs page, public images, and open the AMI's details; for the filter value take the unchanged part of the name followed by a wildcard

       data "aws_ami" "latest_ubuntu" {
         owners = ["read-text-above-for-value"]
         most_recent = true
      
         filter {
           name = "name" # choose type of filter
           values = ["read-text-above-for-value-*"]
         }
       }

console

$ terraform console

# Exit console with `exit`

A great way to test functions and expressions. terraform init must be run first. F.e. to check the contents of a generated dynamic file (before actually applying it to the infrastructure), simply run templatefile with the same arguments and inspect the result.
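A short example session (the cidrsubnet arguments follow the earlier function example):

$ terraform console
> upper("terraform")
"TERRAFORM"
> cidrsubnet("10.1.0.0/16", 8, 1)
"10.1.1.0/24"
> exit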

Info

Terraform Cloud and Enterprise are the same product, Cloud being hosted at https://app.terraform.io/ and Enterprise being a self-hosted version.
