Deploy EC2 using Terraform
In this guide, you will learn how to deploy an EC2 instance in AWS using Terraform, leveraging S3 as a backend to store the deployment state, enabling state locking, and preventing conflicts in environments with multiple collaborators.
First, install Terraform from the official HashiCorp apt repository.
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install terraform
terraform --version
# Terraform v1.10.5
# on linux_amd64
Next, install the AWS CLI and verify the installation.
sudo apt-get install awscli
aws --version
# aws-cli/1.22.34 Python/3.10.12 Linux/6.8.0-52-generic botocore/1.23.34
With Terraform and the AWS CLI installed, let's create an IAM user with limited permissions. Navigate to IAM > Users > Create User. In Set permissions, select Attach policies directly, then search for AmazonEC2FullAccess and AmazonS3FullAccess and attach them. In Retrieve access keys, store the Access key and Secret access key securely.
NOTE: You must store these keys securely. Once you close the window where they are displayed, you cannot retrieve the secret access key again.
Now, authenticate using the AWS CLI.
aws configure --profile user-test
# AWS Access Key ID [None]: <YOUR_ACCESS_KEY>
# AWS Secret Access Key [None]: <YOUR_SECRET_ACCESS_KEY>
# Default region name [None]: <YOUR_REGION>
# Default output format [None]: json
Alternatively, you can configure the credentials file at ~/.aws/credentials manually, as follows (use the profile name, e.g. [user-test], instead of [default] if you want to keep using the --profile flag).
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
region = AWS_REGION
Verify authentication.
aws sts get-caller-identity --profile user-test
# {
# "UserId": "...",
# "Account": "...",
# "Arn": "arn:aws:iam::<YOUR_ACCOUNT_ID>:user/user-test"
# }
NOTE: If you want to avoid passing the --profile <user> flag every time, you can set it as the default profile, as follows.
echo 'export AWS_PROFILE=user-test' >> ~/.bashrc
source ~/.bashrc
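With the profile exported, AWS CLI commands work without the flag, for example:
aws sts get-caller-identity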
We need to create an S3 bucket with versioning enabled to store the Terraform state file. We will use our account ID to ensure the bucket name is unique.
aws s3 mb s3://terraform-backend-$(aws sts get-caller-identity --query "Account" --output text)
# make_bucket: terraform-backend-<YOUR_AWS_ID>
aws s3api put-bucket-versioning --bucket terraform-backend-<YOUR_AWS_ID> --versioning-configuration Status=Enabled
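Since this bucket will hold the state files, it is also worth blocking all public access. This hardening step is not part of the original flow, but a typical configuration would be:
aws s3api put-public-access-block \
  --bucket terraform-backend-<YOUR_AWS_ID> \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true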
For our project, we will manage the following folder structure.
├── backend.tf
├── variables.tf
├── main.tf
└── modules
    └── ec2
        ├── main.tf
        ├── outputs.tf
        └── variables.tf

2 directories, 6 files
Define the backend to manage our infrastructure state in backend.tf, as follows.
terraform {
  backend "s3" {
    bucket       = "terraform-backend-<YOUR_AWS_ID>"
    key          = "terraform.tfstate"
    region       = "<YOUR_AWS_REGION>"
    encrypt      = true
    use_lockfile = true
  }
}
The use_lockfile option enables S3-native state locking, available in recent Terraform releases (v1.10 and later); older versions required a DynamoDB table for locking. Let's check the connection with the S3 backend by initializing Terraform.
terraform init
# Initializing the backend...
# Successfully configured the backend "s3"! Terraform will automatically
# use this backend unless the backend configuration changes.
# Initializing provider plugins...
# Terraform has been successfully initialized!
# You may now begin working with Terraform. Try running "terraform plan" to see
# any changes that are required for your infrastructure. All Terraform commands
# should now work.
# If you ever set or change modules or backend configuration for Terraform,
# rerun this command to reinitialize your working directory. If you forget, other
# commands will detect it and remind you to do so if necessary.
Define main.tf, as follows.
provider "aws" {
region = var.aws_region
profile = var.aws_profile
}
module "ec2" {
source = "./modules/ec2"
ami_id = var.ami_id
instance_type = var.instance_type
instance_name = var.instance_name
}
And variables.tf, as follows.
variable "aws_region" {
description = "AWS region"
type = string
default = "<YOUR_AWS_REGION>"
}
variable "aws_profile" {
description = "AWS profile"
type = string
default = "user-test"
}
variable "ami_id" {
description = "ID AMI for EC2 instance"
type = string
default = "ami-0e1bed4f06a3b463d"
}
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t2.micro"
}
variable "instance_name" {
description = "EC2 instance name"
type = string
default = "Instance-ec2"
}
Now, let's create the files for our EC2 module, as follows.
For modules/ec2/main.tf:
resource "aws_instance" "example" {
ami = var.ami_id
instance_type = var.instance_type
tags = {
Name = var.instance_name
}
}
For modules/ec2/variables.tf:
variable "ami_id" {
description = "AMI ID for EC2 instance"
type = string
}
variable "instance_type" {
description = "EC2 instance type"
type = string
}
variable "instance_name" {
description = "EC2 name"
type = string
}
For modules/ec2/outputs.tf:
output "ec2_instance_id" {
value = aws_instance.example.id
}
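Note that the module output is not exposed at the root level by default. If you want terraform output to print the instance ID after applying, you could add a root-level outputs.tf (not part of the folder structure above, so treat it as an optional addition) along these lines:
output "ec2_instance_id" {
  description = "ID of the EC2 instance created by the ec2 module"
  value       = module.ec2.ec2_instance_id
}
After terraform apply, running terraform output ec2_instance_id would then print the instance ID.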
Run the init and plan commands.
terraform init
terraform plan
# Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
# + create
# Terraform will perform the following actions:
# # aws_instance.example will be created
# + resource "aws_instance" "example" {
# .
# .
# .
# + root_block_device (known after apply)
# }
# Plan: 1 to add, 0 to change, 0 to destroy.
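Optionally, you can save the reviewed plan to a file and apply exactly that artifact instead of re-planning; this is a variation on the flow below, and tfplan is just an example file name.
terraform plan -out=tfplan
terraform apply tfplan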
Apply the plan.
terraform apply -auto-approve
Let's check that the EC2 instance was created.
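Besides the AWS console, you can verify it from the CLI as well, for example by filtering on the Name tag we set (the query below is just one way to do it):
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=Instance-ec2" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,State.Name]" \
  --output table --profile user-test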
And check the state file in the S3 bucket.
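For example, listing the bucket from the CLI should show the terraform.tfstate key defined in backend.tf.
aws s3 ls s3://terraform-backend-<YOUR_AWS_ID>/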
Finally, we can destroy our infrastructure, as follows.
terraform destroy -auto-approve
# .
# .
# .
# aws_instance.example: Destroying... [id=i-07ce81e1c6b670041]
# aws_instance.example: Still destroying... [id=i-07ce81e1c6b670041, 10s elapsed]
# aws_instance.example: Still destroying... [id=i-07ce81e1c6b670041, 20s elapsed]
# aws_instance.example: Still destroying... [id=i-07ce81e1c6b670041, 30s elapsed]
# aws_instance.example: Still destroying... [id=i-07ce81e1c6b670041, 40s elapsed]
# aws_instance.example: Destruction complete after 45s
# Destroy complete! Resources: 1 destroyed.
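If you also want to remove the state bucket once you are done, keep in mind that versioning is enabled, so all object versions may need to be deleted before the bucket can be removed; a forced removal attempt looks like this.
aws s3 rb s3://terraform-backend-<YOUR_AWS_ID> --force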
Using S3 as a backend for Terraform provides a reliable and scalable way to manage infrastructure state, ensuring consistency across deployments. Additionally, by structuring the Terraform configuration into modules, we improve maintainability and reusability, making it easier to manage resources in a clean and modular way.