Creating a Kubernetes cluster with Istio on AWS using Kops
This guide will go through the process of creating a Kubernetes cluster with Istio on AWS using Kops. For this, you should have an AWS account.
The environment used for this tutorial is Ubuntu 16.04; some commands may vary on other operating systems. You will also need Python (v3.5 or later), which is required for the AWS CLI installation.
During this tutorial we will use several tools on the OS. Here is the list of them and why each is needed:
- AWS CLI (v1.16.180): the AWS client used to interact with the AWS account
- kubectl: the Kubernetes client used to communicate with the Kubernetes cluster API
- kops (v1.12.1): automates the setup of the Kubernetes cluster
- jq (v1.5): a JSON processor used to extract information from console output
pip install awscli --upgrade --user
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
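After installing, you can verify that each tool is on your PATH and check its version (the exact output will vary):
aws --version
kubectl version --client
kops version
jq --version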
Configure the AWS CLI with your credentials, then export them as environment variables:
aws configure
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
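To verify the credentials are working, you can ask AWS which identity you are authenticated as:
aws sts get-caller-identity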
aws iam create-group --group-name kops-poc
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops-poc
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops-poc
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops-poc
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops-poc
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops-poc
aws iam create-user --user-name kops-poc
aws iam add-user-to-group --user-name kops-poc --group-name kops-poc
aws iam create-access-key --user-name kops-poc
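The create-access-key output is JSON containing the new AccessKeyId and SecretAccessKey. Instead of running the command above and copying the values by hand, a sketch using jq (installed later in this guide) could capture and export them in one step; the KOPS_CREDS variable name here is just illustrative:
KOPS_CREDS=$(aws iam create-access-key --user-name kops-poc)
export AWS_ACCESS_KEY_ID=$(echo "$KOPS_CREDS" | jq -r '.AccessKey.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$KOPS_CREDS" | jq -r '.AccessKey.SecretAccessKey')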
aws s3api create-bucket --bucket kops-poc.k8s.local --create-bucket-configuration LocationConstraint=us-west-2
export KOPS_STATE_STORE=s3://kops-poc.k8s.local
As long as the cluster name ends with .k8s.local (e.g. kops-poc.k8s.local), Kops will use gossip-based DNS instead of public DNS and will not create a Route53 domain.
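Kops also recommends enabling versioning on the state store bucket, so previous cluster states can be recovered if needed:
aws s3api put-bucket-versioning --bucket kops-poc.k8s.local --versioning-configuration Status=Enabled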
aws ec2 create-key-pair --key-name kp_devpoc_k8s | jq -r '.KeyMaterial' > kp_devpoc_k8s.pem
mv kp_devpoc_k8s.pem ~/.ssh/
chmod 400 ~/.ssh/kp_devpoc_k8s.pem
ssh-keygen -y -f ~/.ssh/kp_devpoc_k8s.pem > ~/.ssh/kp_devpoc_k8s.pub
Note:
Make sure you have jq installed before you run the create-key-pair command above. If you don't, you can install it with:
sudo apt-get install jq
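To double-check the key material, you can print the fingerprint of the generated public key:
ssh-keygen -l -f ~/.ssh/kp_devpoc_k8s.pub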
export AWS_REGION=us-west-2
export NAME=kops-poc.k8s.local
export KOPS_STATE_STORE=s3://$NAME
kops create cluster \
--cloud aws \
--networking kubenet \
--name $NAME \
--master-size t2.medium \
--node-size t2.medium \
--zones us-west-2a \
--ssh-public-key ~/.ssh/kp_devpoc_k8s.pub \
--yes
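kops create cluster also writes the new cluster's credentials into your local kubeconfig. If kubectl ever stops pointing at the cluster, the kubeconfig can be regenerated with:
kops export kubecfg $NAME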
We use us-west-2 (Oregon) as the region to prevent confusion with other region configurations that may coexist in the same AWS environment.
The cluster creation may take a few minutes. After that, the cluster can be validated using:
kops validate cluster
kubectl get nodes
Once the cluster is running, we can access the default Kubernetes service through the deployed load balancer. This service will prompt for a username and password; to get them, we can run the following command:
kubectl config view --minify
And we should see some information about the cluster, including something similar to this:
users:
- name: kops-poc.k8s.local
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
password: TtWL8Asszbvao4slTOnEignKMNWHA45V
username: admin
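Instead of scanning the full output, the username and password can also be extracted directly with jsonpath, for example:
kubectl config view --minify -o jsonpath='{.users[0].user.username}'
kubectl config view --minify -o jsonpath='{.users[0].user.password}'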
With that, we can go to the load balancers list in the AWS console: EC2 -> Load Balancers. There should be a load balancer with a name that starts with "api-kops-poc-k8s-local", for example:
api-kops-poc-k8s-local-6gg26c
We click on it and get the DNS Name, which should look something like:
api-kops-poc-k8s-local-6gg26c-61521813.us-west-2.elb.amazonaws.com
Opening that URL in the browser and entering our username and password, we should see a list of the cluster's API endpoints.
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.0 sh -
cd istio-1.2.0
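The download also includes the istioctl client in its bin directory; adding it to the PATH makes it available for the rest of the session:
export PATH=$PWD/bin:$PATH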
We need to edit our cluster configuration:
kops edit cluster $NAME
We need to add some properties under the spec property:
spec:
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - PersistentVolumeLabel
    - DefaultStorageClass
    - DefaultTolerationSeconds
    - MutatingAdmissionWebhook
    - ValidatingAdmissionWebhook
    - ResourceQuota
    - NodeRestriction
    - Priority
Update the cluster:
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
This update will take some time, usually between 5 and 10 minutes; kops validate cluster can be used to check when all nodes are back up.
Next, install Helm. On Ubuntu:
sudo snap install helm --classic
Follow the tutorial from "Install with Helm and Tiller via helm install" in the Istio documentation:
https://istio.io/docs/setup/kubernetes/install/helm/
Note: In step 5, we selected demo as our configuration profile, since it comes with most of the components already installed.
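For reference, the main commands from that tutorial (for Istio 1.2, using Tiller) looked roughly like this; the linked documentation should be treated as authoritative:
kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
# Wait until all Istio CRDs have been created before installing the main chart
kubectl get crds | grep 'istio.io' | wc -l
helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml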
We will need to add Istio injection for our default namespace:
kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection
From inside the Istio installation directory, deploy the Bookinfo sample application:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Confirm all pods are running:
kubectl get pods
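With sidecar injection enabled, each Bookinfo pod should report 2/2 ready containers (the application plus the Envoy proxy). One way to list the containers per pod:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'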
Define the ingress gateway for the application:
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
Confirm the gateway has been created:
kubectl get gateway
Set the gateway URL:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
Confirm the app is running:
curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
Or by opening the URL in the browser, something like:
http://ae3498a4e969f11e9b53a02f515ffc3f-309381476.us-west-2.elb.amazonaws.com/productpage
To remove the routing rules and delete the application pods, run:
samples/bookinfo/platform/kube/cleanup.sh
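To confirm the cleanup completed, the following should no longer list any Bookinfo resources:
kubectl get virtualservices
kubectl get gateway
kubectl get pods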
To test Prometheus, we will need to enable port forwarding:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090
And then test the URL in the browser:
http://localhost:9090/graph?g0.range_input=1h&g0.expr=istio_request_bytes_count&g0.tab=0
The same can be done for Grafana:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
Open in browser:
http://localhost:3000/d/1/istio-mesh-dashboard
To tear everything down, delete the cluster:
kops delete cluster kops-poc.k8s.local --yes
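If the kops state store is no longer needed either, the S3 bucket can be emptied and removed as well:
aws s3 rm s3://kops-poc.k8s.local --recursive
aws s3api delete-bucket --bucket kops-poc.k8s.local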
Running Istio on AWS with Kops:
https://medium.com/@diego_pacheco/running-istio-on-aws-with-kops-43218829d45b
Create a High-Availability Kubernetes cluster on AWS with Kops:
https://www.poeticoding.com/create-a-high-availability-kubernetes-cluster-on-aws-with-kops/
Istio Installation Guide:
https://istio.io/docs/setup/kubernetes/install/helm/
Kops:
https://github.com/kubernetes/kops
Istio Sample App:
https://istio.io/docs/examples/bookinfo/