Terraform Introduction
Just as Ansible wraps ssh/scp with Python to perform operations on target hosts,
Terraform wraps cloud REST APIs to create virtual machine objects step by step.
A Terraform module is like a function definition: it can call other modules or a set of resources.
Input variables are like function arguments,
output values are like function return values,
and locals are like a function's temporary local variables.
1. How to call a module
module "example" {
  source = "./modules/my_module"
  ami    = "ami-12345678"
  name   = "MyInstance"
}
2. How to define a module
1) Input variables
variable "ami" {
  description = "The AMI to use for the instance"
  type        = string
}
variable "name" {
  description = "Name tag for the instance"
  type        = string
}
2) Output values
output "instance_id" {
  description = "The ID of the created instance"
  value       = aws_instance.example.id
}
3) Implementation (resources or nested modules)
resource "aws_instance" "example" {
  ami = var.ami
  tags = {
    Name = var.name
  }
}
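To close the loop, the caller reads the module's return value with module.NAME.OUTPUT. A minimal sketch, assuming the "example" module call above (the locals block and names here are illustrative, not part of the module):

```hcl
# Read the module's "function return value".
output "created_id" {
  value = module.example.instance_id
}

# locals behave like a function's temporary variables.
locals {
  instance_label = "web-${module.example.instance_id}"
}
```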
Terraform Installation
Terraform is an excellent tool for managing AWS; however,
different versions affect the syntax of the code. So first,
we can install a version switcher to keep multiple versions, as follows:
brew install warrensbox/tap/tfswitch
(curl -L https://raw.githubusercontent.com/warrensbox/terraform-switcher/release/install.sh | bash)
then run
tfswitch to choose a version
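Once a version is chosen, it is worth pinning it in the configuration itself so everyone runs compatible syntax. A minimal sketch (the version range is an illustrative assumption):

```hcl
# Pin the Terraform version so the code's syntax is reproducible.
terraform {
  required_version = ">= 0.12, < 2.0.0"
}
```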
Terraform Tutorial on AWS
You can find the tutorials I made on GitHub.
1. EKS # Kubernetes
https://github.com/hqzhang/awsekstest.git
2. Elastic IP # floating IP = public IP
https://github.com/hqzhang/eipwebtest.git
3. ELB with web # load balancing
https://github.com/hqzhang/elbwebtest.git
4. Lambda # Lambda function
https://github.com/hqzhang/lambdatest.git
5. Auto Scaling # scaling up/down
https://github.com/hqzhang/asgwebtest.git
6. ECS # Docker containers
https://github.com/hqzhang/ecsalbtest.git
7. RDS # database with web
https://github.com/hqzhang/webrdstest.git
EKS Installation
(https://learn.hashicorp.com/terraform/kubernetes/provision-eks-cluster)
0) Get the code
git clone https://github.com/terraform-providers/terraform-provider-aws.git
cd terraform-provider-aws/examples/eks-getting-started
1) Execute the code
terraform plan
terraform apply
2) Verify by listing EKS clusters
aws eks list-clusters
"clusters": [ "terraform-eks-demo"]
3) Create the kubeconfig and verify nodes
aws eks update-kubeconfig --name terraform-eks-demo
# writes /Users/hongqizhang/.kube/config
Verify Nodes
kubectl get nodes
4) Install the metrics server
wget -O v0.3.6.tar.gz https://codeload.github.com/kubernetes-sigs/metrics-server/tar.gz/v0.3.6
tar -xzf v0.3.6.tar.gz
kubectl apply -f metrics-server-0.3.6/deploy/1.8+/
5) Install dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
6) Set kube proxy
kubectl proxy
7) Dashboard UI
http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
8) Get an auth token for login
kubectl apply -f https://raw.githubusercontent.com/hashicorp/learn-terraform-provision-eks-cluster/master/kubernetes-dashboard-admin.rbac.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep service-controller-token | awk '{print $1}')
eyJhbGciOiJSUzI1NiIsImtpZCI6InJsZFQzQ3FscDM4NXVoVl9VOWZNYlplUTRXdG5yN2g2NzViVHBMMGIzOHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJzZXJ2aWNlLWNvbnRyb2xsZXItdG9rZW4tbXpzNzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoic2VydmljZS1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTM3ODM3OTEtYjRkNi00MTUxLWExYTMtOWE1YTBiODllZWEwIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOnNlcnZpY2UtY29udHJvbGxlciJ9.rKXpr3MHqeg2DJkD4C9O4IgktLf5sLm0XKDuGM1oJXa1qoB9yK6Svs5YYDAixsqzzul-BP98gL6K8qRxHi9ck8CzQwm41BxDZwu0JTitAEYD_jqruR1UUfu-bUyvVNcRSDFdz-Dwo00yfjFJc3bcQ-PDA0Y4N2zAecfBJqWTw6TV7ctKJgfAdpUD79rDL9NlKJlLAE58779FBHdPFOX4HXG_2-Zz822L9DDmNhNW9t9xPg9t2v4nShypSoAHHuB1pbgTtTgbAwW7FI-JnULIOWT0tD2IH_qs5ynd93_bL6tBiFKhN3XjgJ_ZXDfme_OH-ciKg_umls6dO5iBjNF6-w
9) Log in to the dashboard with the token
ECS Deployment
0) Concepts
Task Definition — a blueprint that describes how a Docker container should launch
Task — a running container using the settings from a Task Definition
Service — defines long-running tasks of the same Task Definition; this can be one running container or multiple running containers
Cluster — a logical group of EC2 instances
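These concepts map directly onto Terraform resources. A minimal sketch (the names, image, and sizing below are illustrative assumptions, not taken from the example repo):

```hcl
resource "aws_ecs_cluster" "demo" {
  name = "demo-cluster" # logical group of instances
}

# Task Definition: the blueprint for the container.
resource "aws_ecs_task_definition" "web" {
  family = "demo-web"
  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "ghost:latest"
      cpu       = 256
      memory    = 512
      essential = true
    }
  ])
}

# Service: keeps desired_count tasks of this Task Definition running.
resource "aws_ecs_service" "web" {
  name            = "demo-web"
  cluster         = aws_ecs_cluster.demo.id
  task_definition = aws_ecs_task_definition.web.arn
  desired_count   = 1
}
```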
1) Get the code
git clone https://github.com/terraform-providers/terraform-provider-aws.git
cd terraform-provider-aws/examples/ecs-alb/
2) Execute the code
terraform plan \
-var admin_cidr_ingress='"10.0.0.0/32"' \
-var key_name=mykeypair
terraform apply \
-var admin_cidr_ingress='"10.0.0.0/32"' \
-var key_name=mykeypair
Outputs:
asg_name = tf-test-asg
elb_hostname = tf-example-alb-ecs-1134876372.us-west-2.elb.amazonaws.com
instance_security_group = sg-009e7deceabd4f311
launch_configuration = terraform-20200519024351237700000001
3) Verification
aws ecs list-clusters
aws ecs list-services --cluster terraform_example_ecs_cluster
aws ecs list-tasks --cluster terraform_example_ecs_cluster
4) Access the app
curl http://tf-example-alb-ecs-1134876372.us-west-2.elb.amazonaws.com/
# serves the Ghost app
Terraform Resources (Objects)
## ASG
resource "aws_autoscaling_group"-->resource "aws_launch_configuration"
-->resource "aws_elb"
## RDS
resource "aws_db_instance"-->resource "aws_db_subnet_group"
## ELB
aws_lb_cookie_stickiness_policy-->resource "aws_elb"-->resource "aws_instance"
## VPC
resource "aws_route_table_association"-->resource "aws_route_table"-->resource "aws_internet_gateway"
-->resource "aws_subnet"-->resource "aws_vpc"
## ECS
resource "aws_ecs_service"-->resource "aws_ecs_task_definition"-->data "template_file"
-->resource "aws_ecs_cluster"
## IAM
resource "aws_iam_instance_profile"-->resource "aws_iam_role"
resource "aws_iam_role_policy"-->resource "aws_iam_role"
## Security
resource "aws_security_group"
## ALB
resource "aws_alb_listener" -->resource "aws_alb_target_group"-->resource "aws_vpc"
-->resource "aws_alb"-->resource "aws_security_group"
## CloudWatch Logs
resource "aws_cloudwatch_log_group"
## STORAGE
resource "aws_volume_attachment"-->resource "aws_ebs_volume"
-->resource "aws_instance"
resource "aws_s3_bucket_object"-->resource "aws_s3_bucket"
## API gateway
resource "aws_api_gateway_deployment"-->resource "aws_api_gateway_rest_api"
-->resource "aws_api_gateway_integration"
## Lambda
resource "aws_lambda_function"-->data "archive_file"
resource "aws_api_gateway_integration"
resource "aws_api_gateway_method"
resource "aws_api_gateway_resource"
Alexa is a cloud-based voice service.
AWS Cognito is a user account management service.
AWS ALB is the application load balancer.
RDS is the relational database service.
resource "aws_vpc"
resource "aws_subnet"-->resource "aws_vpc"
resource "aws_internet_gateway"-->resource "aws_vpc"
resource "aws_route_table"-->resource "aws_internet_gateway"
resource "aws_route"-->resource "aws_internet_gateway"
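In code, each "-->" arrow is simply an attribute reference from one resource to another, which is how Terraform infers creation order. A minimal sketch with S3 (the bucket name and content are illustrative assumptions):

```hcl
resource "aws_s3_bucket" "site" {
  bucket = "my-demo-site-bucket"
}

# Referencing aws_s3_bucket.site.id creates the dependency
# aws_s3_bucket_object --> aws_s3_bucket automatically.
resource "aws_s3_bucket_object" "index" {
  bucket  = aws_s3_bucket.site.id
  key     = "index.html"
  content = "<h1>hello</h1>"
}
```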
Terraform Keywords
#1. provider: defines a provider
provider "aws" {
region = "${var.region}"
}
Usage: (none; configured implicitly)
#2. variable: defines an input variable
variable "control_count" { default = 1 }
Usage: "${var.control_count}"
#3. resource: declares and defines an object
resource "aws_route53_zone" "primary" {
name = "wavecloud.com"
}
Usage:"${aws_route53_zone.primary.zone_id}"
#4. output: defines a value to print/export
output "aws_hosted_zone" {
value = "${aws_route53_zone.primary.zone_id}"
}
Usage: "${module.NAME.aws_hosted_zone}" from a calling module, or terraform output aws_hosted_zone
#5. data: defines a data source (read-only)
data "template_file" "cloud_config" {
  template = "${file("${path.module}/cloud-config.yml")}"
}
Usage: "${data.template_file.cloud_config.rendered}"
#6. module: defines reusable code
module "ssh-key" {
  source     = "./terraform/aws/ssh"
  short_name = "${var.short_name}"
}
In the module subdirectory:
resource "aws_key_pair" "deployer" {
  key_name   = "key-${var.short_name}"
  public_key = "${file(var.ssh_key)}"
}
Usage: "${module.ssh-key.ssh_key_name}" (via an output declared in the module)
Terraform Item List
├── hostedzone
│ └── main.tf
├── iam
│ └── main.tf
├── instance
│ └── main.tf
├── route53
│ └── dns
│ └── main.tf
├── security_groups
│ └── main.tf
├── ssh
│ ├── main.tf
│ ├── terraform.tfstate
│ └── terraform.tfstate.backup
├── terraform.tfstate
├── terraform.tfstate.backup
└── vpc
└── main.tf
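A root main.tf can wire these per-directory modules together. A sketch, assuming the directory names above are the module sources; the outputs referenced (vpc_id, subnet_ids, ssh_key_name) are the ones defined in the sections below, and "demo" is an illustrative value:

```hcl
module "vpc" {
  source = "./vpc"
}

module "ssh" {
  source     = "./ssh"
  short_name = "demo"
}

# Module outputs feed the next module's input variables.
module "instance" {
  source         = "./instance"
  ssh_key_pair   = "${module.ssh.ssh_key_name}"
  vpc_subnet_ids = "${module.vpc.subnet_ids}"
}
```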
Terraform Instance Items
#1. route53_zone
resource "aws_route53_zone" "primary" {
name = "${var.dns_domain}"
}
#2. ssh-key
variable "ssh_key" {default = "~/.ssh/id_rsa.pub"}
resource "aws_key_pair" "deployer" {
  key_name   = "key-${var.short_name}"
  public_key = "${file(var.ssh_key)}"
}
output "ssh_key_name" {
value = "${aws_key_pair.deployer.key_name}"
}
#3. route53
resource "aws_route53_record" "dns-control" {
count = "${var.control_count}"
zone_id = "${var.hosted_zone_id}"
records = ["${element(split(",", var.control_ips), count.index)}"]
name = "${var.short_name}-control-${format("%02d", count.index+1)}.node.${var.domain}"
type = "A"
ttl = 60
}
# group records
resource "aws_route53_record" "dns-control-group" {
count = "${var.control_count}"
zone_id = "${var.hosted_zone_id}"
name = "${var.control_subdomain}${var.subdomain}.${var.domain}"
records = ["${split(",", var.control_ips)}"]
type = "A"
ttl = 60
}
output "control_fqdn" {
value = "${join(",", aws_route53_record.dns-control.*.fqdn)}"
}
#4. network vpc
resource "aws_vpc" "main" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true
tags = {
Name = "${var.long_name}"
KubernetesCluster = "${var.short_name}"
}
}
resource "aws_subnet" "main" {
vpc_id = "${aws_vpc.main.id}"
count = "${length(split(",", var.availability_zones))}"
cidr_block = "${lookup(var.cidr_blocks, "az${count.index}")}"
availability_zone = "${var.region}${element(split(",", var.availability_zones), count.index)}"
tags = {
Name = "${var.long_name}"
KubernetesCluster = "${var.short_name}"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = "${aws_vpc.main.id}"
tags = {
Name = "${var.long_name}"
KubernetesCluster = "${var.short_name}"
}
}
resource "aws_route_table" "main" {
vpc_id = "${aws_vpc.main.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.main.id}"
}
tags = {
Name = "${var.long_name}"
KubernetesCluster = "${var.short_name}"
}
}
resource "aws_main_route_table_association" "main" {
vpc_id = "${aws_vpc.main.id}"
route_table_id = "${aws_route_table.main.id}"
}
resource "aws_route_table_association" "main" {
count = "${length(split(",", var.availability_zones))}"
subnet_id = "${element(aws_subnet.main.*.id, count.index)}"
route_table_id = "${aws_route_table.main.id}"
}
output "availability_zones" {
value = "${join(",",aws_subnet.main.*.availability_zone)}"
}
output "subnet_ids" {
value = "${join(",",aws_subnet.main.*.id)}"
}
output "default_security_group" {
value = "${aws_vpc.main.default_security_group_id}"
}
output "vpc_id" {
value = "${aws_vpc.main.id}"
}
#5. iam profile
resource "aws_iam_instance_profile" "control_profile" {
name = "${var.short_name}-control-profile"
role = "${aws_iam_role.control_role.name}"
}
resource "aws_iam_role_policy" "control_policy" {
name = "${var.short_name}-control-policy"
role = "${aws_iam_role.control_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["ec2:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["elasticloadbalancing:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["route53:*"],
"Resource": ["*"]
}
]
}
EOF
}
resource "aws_iam_role" "control_role" {
name = "${var.short_name}-control-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
output "control_iam_instance_profile" {
value = "${aws_iam_instance_profile.control_profile.name}"
}
#6. security group
resource "aws_security_group" "control" {
name = "${var.short_name}-control"
description = "Allow inbound traffic for control nodes"
vpc_id = "${var.vpc_id}"
tags = {
KubernetesCluster = "${var.short_name}"
}
ingress { # SSH
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress { # Mesos
from_port = 5050
to_port = 5050
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress { # Marathon
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress { # Chronos
from_port = 4400
to_port = 4400
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress { # Consul
from_port = 8500
to_port = 8500
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress { # ICMP
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "control_security_group" {
value = "${aws_security_group.control.id}"
}
#7. instance
resource "aws_ebs_volume" "ebs" {
availability_zone = "${element(split(",", var.availability_zones), count.index)}"
count = "${var.count}"
size = "${var.data_ebs_volume_size}"
type = "${var.data_ebs_volume_type}"
tags = {
Name = "${var.short_name}-${var.role}-lvm-${format(var.count_format, count.index+1)}"
KubernetesCluster = "${var.short_name}"
}
}
resource "aws_instance" "instance" {
ami = "${var.source_ami}"
instance_type = "${var.ec2_type}"
count = "${var.count}"
vpc_security_group_ids = [ "${split(",", var.security_group_ids)}"]
key_name = "${var.ssh_key_pair}"
associate_public_ip_address = true
subnet_id = "${element(split(",", var.vpc_subnet_ids), count.index)}"
iam_instance_profile = "${var.iam_profile}"
root_block_device {
delete_on_termination = true
volume_size = "${var.ebs_volume_size}"
volume_type = "${var.ebs_volume_type}"
}
tags = {
Name = "${var.short_name}-${var.role}-${format(var.count_format, count.index+1)}"
sshUser = "${var.ssh_username}"
role = "${var.role}"
dc = "${var.datacenter}"
KubernetesCluster = "${var.short_name}"
}
}
resource "aws_volume_attachment" "instance-lvm-attachment" {
count = "${var.count}"
device_name = "xvdh"
instance_id = "${element(aws_instance.instance.*.id, count.index)}"
volume_id = "${element(aws_ebs_volume.ebs.*.id, count.index)}"
force_detach = true
}
output "hostname_list" {
value = "${join(",", aws_instance.instance.*.tags.Name)}"
}
output "ec2_ids" {
value = "${join(",", aws_instance.instance.*.id)}"
}
output "ec2_ips" {
value = "${join(",", aws_instance.instance.*.public_ip)}"
}