azure devops setup eks provisioning pipeline - devonfw/hangar GitHub Wiki

Setting up an AWS EKS provisioning pipeline on Azure DevOps

In this section we will create a pipeline that provisions an AWS EKS cluster. This pipeline is configured to be triggered manually by the user. As part of the EKS cluster provisioning, an NGINX Ingress controller is deployed and a variable group named eks-variables is created. It contains, among other values, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix to retrieve the DNS name of the Ingress controller independently.
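
As a purely illustrative sketch, this is how you could verify such a CNAME record once you have created it in your DNS provider. The domain app.example.com and the load balancer hostname in the expected output are placeholders, not values produced by the pipeline:

# Replace app.example.com with your application domain and compare the result
# with the Ingress controller DNS name stored in the eks-variables variable group.
dig +short CNAME app.example.com
# Expected output (placeholder): a1b2c3d4e5f6-1234567890.eu-west-1.elb.amazonaws.com.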

The creation of the pipeline will follow the project workflow: a new branch named feature/eks-provisioning will be created, and the YAML file for the pipeline together with the Terraform files for creating the cluster will be pushed to it.

Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided with the -b flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, the PR URL will be shown as output, or the PR will be opened in your web browser if the -w flag is used.

The script located at /scripts/pipelines/azure-devops/pipeline_generator.sh will automatically create this new branch, create the EKS provisioning pipeline based on the YAML template, create the Pull Request and, if it is possible, merge this new branch into the specified branch.

Prerequisites

  • Install the Terraform extension for Azure DevOps.

  • Create a service connection for connecting to an AWS account (as explained in the above Terraform extension link) and name it AWS-Terraform-Connection. If you already have a service connection available or you need a specific connection name, please update eks-pipeline.cfg accordingly.

  • An S3 bucket. You can use an existing one or create a new one with the following command:

    aws s3 mb s3://<bucket name>
    # Example: aws s3 mb s3://terraform-state-bucket
  • An AWS IAM user with the required permissions to provision the EKS cluster.

  • This script will commit and push the corresponding YAML template into your repository, so please make sure your local repository is up to date (i.e., you have pulled the latest changes with git pull).
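
For example, assuming your working branch is develop (a placeholder; use the branch of your project), you can update your local repository before running the script:

git checkout develop   # assumed branch name, adjust to your project
git pull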

Creating the pipeline using provided script

Before executing the pipeline generator, you will need to customize some input variables about the environment. Also, you may want to use an existing VPC and subnets instead of creating new ones. To do so, you can either edit the terraform.tfvars file or use the set-terraform-variables.sh script located at /scripts/environment-provisioning/aws/eks, which allows you to create or update the values of the required variables by passing them as flags.

Example: creating a new VPC when creating the cluster:

./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --vpc_name <vpc name> --vpc_cidr_block <vpc cidr block>

Example: reusing an existing VPC and subnets:

./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --existing_vpc_id <vpc id> --existing_vpc_private_subnets <array of subnet ids>
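
As a concrete illustration of the first case, with placeholder values chosen for this example (the region, instance type, VPC name and CIDR block are assumptions, not defaults of the script):

./set-terraform-variables.sh --region eu-west-1 --instance_type t3.medium --vpc_name hangar-eks-vpc --vpc_cidr_block 10.0.0.0/16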

Usage

pipeline_generator.sh \
  -c <config file path> \
  -n <pipeline name> \
  -d <project local path> \
  --cluster-name <cluster name> \
  --s3-bucket <s3 bucket name> \
  --s3-key-path <s3 key path> \
  [--aws-access-key <aws access key>] \
  [--aws-secret-access-key <aws secret access key>] \
  [--aws-region <aws region>] \
  [--rancher] \
  [-b <branch>] \
  [-w]
Note
The config file for the EKS provisioning pipeline is located at /scripts/pipelines/azure-devops/templates/eks/eks-pipeline.cfg.

Flags

-c, --config-file               [Required] Configuration file containing pipeline definition.
-n, --pipeline-name             [Required] Name that will be set to the pipeline.
-d, --local-directory           [Required] Local directory of your project (the path should always be using '/' and not '\').
    --cluster-name              [Required] Name for the cluster.
    --s3-bucket                 [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored.
    --s3-key-path               [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored.
    --aws-access-key            [Required, on first run] AWS account access key ID.
    --aws-secret-access-key     [Required, on first run] AWS account secret access key.
    --aws-region                [Required, on first run] AWS region for provisioning resources.
    --rancher                              Install Rancher to manage the cluster.
    --setup-monitoring                     Install logging and monitoring stack (Prometheus, Grafana, Alertmanager, Loki). Default: true.
-b, --target-branch                        Name of the branch that the Pull Request will target. The PR is not created if this flag is not provided.
-w                                         Open the Pull Request in the web browser if it cannot be automatically merged. Requires the -b flag.

Example

./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -n eks-provisioning -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --s3-bucket terraform-state-bucket --s3-key-path eks/state --rancher -b develop -w
Note
Rancher is installed on the cluster after provisioning when using the above command.

Appendix: Interacting with the cluster

First, generate a kubeconfig file for accessing the AWS EKS cluster:

aws eks update-kubeconfig --name <cluster name> --region <aws region>

Now you can use the kubectl tool to communicate with the cluster.
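
For example, using the cluster name from the example above and an assumed region (eu-west-1 is a placeholder), followed by a quick sanity check:

aws eks update-kubeconfig --name hangar-eks-cluster --region eu-west-1
# List the worker nodes registered in the cluster to confirm access
kubectl get nodes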

To enable an IAM user to connect to the EKS cluster, please refer here.

To get the DNS name of the NGINX Ingress controller on the EKS cluster, run the following command:

kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Rancher, if installed, will be available at https://<ingress controller domain>/dashboard. You will be asked for an initial password, which can be retrieved with:

kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'

Appendix: Monitoring

If you chose to install the logging and monitoring stack, refer to the corresponding documentation.

Appendix: Destroying the cluster

To destroy the provisioned resources, set the operation pipeline variable to destroy and run the pipeline.
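
As a minimal sketch, assuming the Azure DevOps CLI with the azure-devops extension is installed and configured for your organization and project, and that the operation variable is settable at queue time, the run could also be queued from the command line (the pipeline name matches the -n value used when generating it):

# Assumes the pipeline was created with -n eks-provisioning and that the
# operation variable can be set at queue time.
az pipelines run --name eks-provisioning --variables operation=destroy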

Appendix: Rancher resources
