EKS Learning Document - ashish2706/EKS GitHub Wiki

This repository is for the EKS learning path.
Step-1: Introduction
Understand the EKS core objects:
- Control Plane
- Worker Nodes & Node Groups
- Fargate Profiles
- VPC
In this step we will:
- Create an EKS cluster
- Associate the EKS cluster with an IAM OIDC provider
- Create EKS node groups
- Verify the cluster, node groups, EC2 instances, and IAM policies
Step-01: Create EKS Cluster using eksctl
Create EKS Cluster & Node Groups:

```shell
eksctl create cluster --name=cluster-semaphore \
  --region=us-east-1 \
  --zones=us-east-1a,us-east-1b \
  --without-nodegroup \
  --full-ecr-access \
  --asg-access \
  --external-dns-access
```
Reference: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html#_step_2_create_cluster

Run eksctl create cluster --help to explore other options for the create call.
Get List of clusters
eksctl get cluster
Step-02: Create & Associate IAM OIDC Provider for our EKS Cluster
To use AWS Identity and Access Management (IAM) roles for service accounts, an IAM OIDC provider must exist for your cluster's OIDC issuer URL. Create the OIDC provider using eksctl.
Template:

```shell
eksctl utils associate-iam-oidc-provider \
  --region <region-code> \
  --cluster <cluster-name> \
  --approve
```

Replace the placeholders with your region and cluster name:

```shell
eksctl utils associate-iam-oidc-provider \
  --region us-east-1 \
  --cluster cluster-semaphore \
  --approve
```
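Once associated, you can confirm the provider exists. A minimal check, assuming the AWS CLI is configured for the same account (both commands are standard AWS CLI calls):

```shell
# List IAM OIDC providers in the account; the cluster's issuer host
# should appear among the provider ARNs.
aws iam list-open-id-connect-providers

# Cross-check against the cluster's OIDC issuer URL reported by EKS.
aws eks describe-cluster --name cluster-semaphore \
  --query "cluster.identity.oidc.issuer" --output text
```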
Step-03: Create EC2 Keypair and use this in creating a node group
Create a new EC2 keypair named kube-demo. We will use this keypair when creating the EKS node group; it lets us log in to the EKS worker nodes over SSH.
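Instead of the EC2 console, the keypair can also be created from the command line; a sketch assuming a configured AWS CLI:

```shell
# Create the keypair and save the private key locally.
aws ec2 create-key-pair \
  --key-name kube-demo \
  --query 'KeyMaterial' \
  --output text > kube-demo.pem

# Restrict permissions so ssh will accept the key file.
chmod 400 kube-demo.pem
```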
Create a node group

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With managed node groups, you don't need to separately provision or register the EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, automatically update, or terminate nodes for your cluster with a single operation. Node updates and terminations automatically drain nodes to ensure that your applications stay available.

Run the command below to see all options available for creating a node group:

eksctl create nodegroup --help
Create Public Node Group
```shell
eksctl create nodegroup --cluster=cluster-semaphore \
  --region=us-east-1 \
  --name=semaphore-nodegroup \
  --node-type=t3.medium \
  --nodes=2 \
  --nodes-min=2 \
  --nodes-max=4 \
  --node-volume-size=20 \
  --ssh-access \
  --ssh-public-key=kube-demo \
  --managed \
  --asg-access \
  --external-dns-access \
  --full-ecr-access \
  --appmesh-access \
  --alb-ingress-access
```
List EKS clusters
eksctl get cluster
List NodeGroups in a cluster
eksctl get nodegroup --cluster=cluster-semaphore
A node group is backed by an Auto Scaling group in the EKS service, and its capacity may change depending on usage in the cluster. Hence you may need to update the node group capacity the next time you return to this exercise.
Template:

eksctl scale nodegroup --name=<nodegroup-name> --nodes=<desired-count> --nodes-max=4 --nodes-min=2 --cluster=<cluster-name>

Example:

eksctl scale nodegroup --name=semaphore-nodegroup --nodes=2 --nodes-max=4 --nodes-min=2 --cluster=cluster-semaphore
List Nodes in current kubernetes cluster
kubectl get nodes -o wide
Your kubectl context should automatically switch to the new cluster:
kubectl config view --minify
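If the context is missing or points at another cluster, it can be regenerated from the cluster itself; a minimal sketch, assuming the AWS CLI is configured for the same account and region:

```shell
# Write/refresh the kubeconfig entry for the cluster and switch to it
# (standard AWS CLI command).
aws eks update-kubeconfig --name cluster-semaphore --region us-east-1

# Confirm the active context now points at the new cluster.
kubectl config current-context
```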
Verify the worker node IAM role and its list of policies:
- Go to Services -> EC2 -> Worker Nodes
- Click on the IAM role associated with the EC2 worker nodes

Verify the security group associated with the worker nodes:
- Go to Services -> EC2 -> Worker Nodes
- Click on the security group associated with the EC2 instance that contains "remote" in its name
Verify the CloudFormation stacks:
- Verify the control plane stack & events
- Verify the node group stack & events
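The same check can be done from the command line. A sketch assuming a configured AWS CLI; the stack name below follows eksctl's usual eksctl-&lt;cluster&gt;-nodegroup-&lt;name&gt; naming convention and should be confirmed against the actual stack list:

```shell
# List completed stacks whose names mention the cluster.
aws cloudformation list-stacks \
  --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
  --query "StackSummaries[?contains(StackName, 'cluster-semaphore')].StackName"

# Show recent events for the node group stack (name assumed from eksctl's convention).
aws cloudformation describe-stack-events \
  --stack-name eksctl-cluster-semaphore-nodegroup-semaphore-nodegroup \
  --query "StackEvents[0:5].[ResourceStatus,LogicalResourceId]" --output table
```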
Login to Worker Node using Keypair kube-demo
Login to the worker node (replace <worker-node-public-ip> with the instance's public IP):

ssh -i kube-demo.pem ec2-user@<worker-node-public-ip>