terraform - ghdrako/doc_snipets GitHub Wiki
- https://opentofu.org/ - open-source fork of Terraform
- https://www.runatlantis.io/guide/testing-locally.html - gitops for terraform
- https://developer.hashicorp.com/terraform/tutorials
- https://discuss.hashicorp.com/
Terraform core is open-source code hosted at https://github.com/hashicorp/terraform. It is compiled into a binary written in the Go programming language; this compiled binary is the Terraform CLI. The CLI uses RPCs to communicate with Terraform plugins and offers multiple ways to discover and load plugins for use. The following are the key functions of Terraform core:
- It implements infrastructure as code (IaC). It enables the reading and interpolation of configuration files and Terraform modules.
- It manages the state of Terraform managed resources.
- It builds a dependency graph from the Terraform configuration and walks this graph to generate Terraform plans, refresh state, and more.
- It executes the Terraform plan.
- It communicates with plugins via RPCs.
The RPCs through which Terraform core interacts with providers include Diff(), Apply(), Refresh(), etc.
Terraform plugins contain key components known as providers and provisioners. Providers are code written by developers in the Go programming language; they are executable binaries that Terraform core invokes over RPC. Terraform provisioners allow the remote execution of custom code directly on the supported platform. Each plugin exposes an implementation of a specific service, such as Azure, GCP, or VMware.
Terraform plugins - All providers and provisioners are plugins that are defined in the Terraform configuration file. Both are executed as separate processes and communicate with the main Terraform binary via an RPC interface. Terraform has many built-in provisioners, while providers are added dynamically as and when required.
Naming schema
- terraform-provider-&lt;NAME&gt;_vX.Y.Z (e.g. terraform-provider-azurerm_v2.0.0)
- terraform-provisioner-&lt;NAME&gt;_vX.Y.Z
https://www.terraform.io/docs/extend/how-terraform-works.html#plugin-locations
terraform init -plugin-dir=&lt;PATH&gt; # override the default plugin locations and search only the specified path
terraform init -upgrade # check releases.hashicorp.com for new versions and upgrade only plugins within .terraform/plugins/&lt;OS&gt;_&lt;ARCH&gt; (the automatic-downloads directory)
terraform init -get-plugins=false # skip plugin download
Terraform providers are the components that make all the calls to the HTTP APIs of specific cloud services, such as AzureRM, GCP, or AWS.
Terraform Registry is the main directory of publicly available Terraform providers and hosts providers for most major infrastructure platforms. https://registry.terraform.io/
Providers can be defined in any file ending with .tf or .tf.json, but as a best practice, declare them in a file named providers.tf or required_providers.tf.
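For instance, a minimal providers.tf might look like the following sketch (the azurerm version constraint shown is illustrative):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}
```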
List of the Terraform community providers at https://www.terraform.io/docs/providers/type/community-index.html.
terraform fmt -recursive # format all Terraform configuration files in the directory and its subdirectories
If something were deleted in the infrastructure, Terraform would not recreate it for us from tfstate alone; at best, you could manually rebuild that infrastructure based on what you read from tfstate. Similarly, there is no undo for the terraform destroy operation.
Terraform state files are just JSON files, used to hold a one-to-one binding between configured resources and remote objects. Terraform creates each object and records its identity in the state file.
It is not recommended to change the state file by hand. If you created a resource manually and want to manage that resource with Terraform, you must write a configuration matching the resource and then import it into the state file using the terraform import command. Conversely, if you want Terraform to forget one of the objects, you can use the terraform state rm command, which removes that object from the state file.
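A sketch of both operations (the resource address and the Azure resource ID are hypothetical placeholders):

```shell
# Adopt an existing resource into state
# (a matching configuration block must already exist):
terraform import azurerm_resource_group.example \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg

# Make Terraform forget a resource without destroying it:
terraform state rm azurerm_resource_group.example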
Purpose state:
- Mapping to the real world
- Metadata
- Performance
It also stores a cache of all resource attributes in the state file; this is an optional feature of Terraform used only for performance improvements.
For small infrastructures, Terraform can easily fetch the latest attributes from all your resources; the default behavior of Terraform is to sync all resources in the state before performing operations such as plan and apply.
For large enterprise infrastructures, querying all resource attributes can take many hours, depending entirely on the size of the infrastructure, and the total time may surprise you. Many cloud providers do not offer an API that can query multiple resources at once. In this scenario, many users set -refresh=false and use the -target flag, so the cached state is treated as the record of truth, which can cut the plan phase time almost in half.
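For example (the resource address here is a hypothetical placeholder):

```shell
# Skip the refresh step and plan only one targeted resource,
# trusting the cached state as the record of truth:
terraform plan -refresh=false -target=azurerm_virtual_machine.example
```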
- Syncing:
By default, Terraform stores the state file in the current working directory alongside your configuration files. If multiple team members work on the same Terraform configuration, you should keep the Terraform state file in a remote backend. Terraform uses remote locking so that only one person can run Terraform at a time; if other users try to run Terraform concurrently, it throws an error saying that the state file is locked. This feature ensures that every Terraform run refers to the latest state file.
https://www.terraform.io/docs/language/settings/backends/index.html
- Local Backend
The terraform.tfstate file holds the state of the configuration. You can change the location of the local state file using the -state=statefile command-line flag with terraform plan and terraform apply:
terraform apply -state=statefile
terraform workspace new development
terraform workspace list
terraform apply
tree
After creating a new workspace and running terraform plan and terraform apply, we can see that Terraform creates a terraform.tfstate.d directory with a subdirectory for each workspace. In each workspace directory, a new terraform.tfstate file is created.
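The resulting layout looks roughly like this (the development workspace name comes from the commands above; the main.tf filename is illustrative):

```
.
├── main.tf
├── terraform.tfstate          # state of the default workspace
└── terraform.tfstate.d
    └── development
        └── terraform.tfstate  # state of the development workspace
```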
- Remote backend https://www.terraform.io/docs/language/settings/backends/index.html HashiCorp defines two classes of backend:
- Standard: Includes state management and possibly locking
- Enhanced: Includes remote operations on top of standard features
terraform {
  backend "azurerm" {
    storage_account_name = "terraform-stg"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    access_key           = "tyutyutyutryuvsd68564568..."
  }
}
https://www.terraform.io/docs/language/settings/backends/consul.html
The prefix -/+ means that Terraform will destroy and recreate the resource, rather than updating it in-place. While some attributes can be updated in-place (which are shown with the ~ prefix), changing the boot disk image for an instance requires recreating it. Terraform and the Google Cloud provider handle these details for you, and the execution plan makes it clear what Terraform will do.
By studying the resource attributes used in interpolation expressions, Terraform can automatically infer when one resource depends on another. In the example above, the reference to google_compute_address.vm_static_ip.address creates an implicit dependency on the google_compute_address named vm_static_ip.
Terraform uses this dependency information to determine the correct order in which to create and update different resources. In the example above, Terraform knows that the vm_static_ip must be created before the vm_instance is updated to use it.
Implicit dependencies via interpolation expressions are the primary way to inform Terraform about these relationships, and should be used whenever possible.
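The implicit dependency described above can be sketched as follows (the resource names follow the example in the text; machine type and other attribute values are illustrative, and required arguments such as boot_disk are omitted):

```hcl
resource "google_compute_address" "vm_static_ip" {
  name = "terraform-static-ip"
}

resource "google_compute_instance" "vm_instance" {
  name         = "terraform-instance"
  machine_type = "e2-micro"

  # ... boot_disk and other required arguments omitted ...

  network_interface {
    network = "default"
    access_config {
      # This reference creates the implicit dependency: Terraform knows
      # the address must exist before the instance can use it.
      nat_ip = google_compute_address.vm_static_ip.address
    }
  }
}
```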
Sometimes there are dependencies between resources that are not visible to Terraform. The depends_on argument can be added to any resource and accepts a list of resources to create explicit dependencies for.
For example, perhaps an application you will run on your instance expects to use a specific Cloud Storage bucket, but that dependency is configured inside the application code and thus not visible to Terraform. In that case, you can use depends_on to explicitly declare the dependency.
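A sketch of such an explicit dependency (the bucket and instance names are illustrative, and required arguments are omitted):

```hcl
resource "google_storage_bucket" "app_data" {
  name     = "my-app-data-bucket"
  location = "US"
}

resource "google_compute_instance" "app_server" {
  name         = "app-server"
  machine_type = "e2-micro"

  # ... boot_disk and network_interface omitted ...

  # The application reads the bucket at runtime, so the dependency is
  # invisible to Terraform and must be declared explicitly:
  depends_on = [google_storage_bucket.app_data]
}
```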