# Multi-Cloud GKE/EKS Quick Start
This walkthrough shows you how to install a multi-cloud topology of Apigee hybrid: the required infrastructure is provisioned, the clusters are created, and the hybrid runtime is installed into both.
All required configuration files and scripts are located in the examples-mc-gke-eks directory of the ahr repo: https://github.com/apigee/ahr/tree/main/examples-mc-gke-eks.
We use Terraform modules to:

- provision the GKE and AWS VPCs, as well as configure a VPN peering connection between them;
- overlay private networks for our hybrid clusters, plus the firewall rules, routes, and security groups that open Cassandra's gossip ports (7000 and 7001) for internode communication.
We then use curl to send a request to the GCP cluster-creation API with a JSON datagram that describes a GKE private cluster.
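For orientation, such a datagram might look like the sketch below. The field values are hypothetical placeholders; the actual payload is generated by the install scripts in the repo.

```json
{
  "cluster": {
    "name": "cluster-gke",
    "initialNodeCount": 3,
    "network": "projects/my-project/global/networks/my-vpc",
    "privateClusterConfig": {
      "enablePrivateNodes": true,
      "masterIpv4CidrBlock": "172.16.0.0/28"
    }
  }
}
```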
We use a ClusterConfig manifest with the eksctl command to create an EKS cluster.
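A minimal ClusterConfig might look like the sketch below; the names, region, and node-group settings are hypothetical, and the actual manifest ships in the examples-mc-gke-eks directory.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-eks          # hypothetical cluster name
  region: us-east-1
nodeGroups:
  - name: apigee-ng          # hypothetical node group
    instanceType: m5.xlarge
    desiredCapacity: 3
    privateNetworking: true  # keep nodes on the private subnets
```

You would pass such a manifest to eksctl with `eksctl create cluster -f cluster-config.yaml`.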
We then install Apigee hybrid into the GKE cluster and deploy a sample ping proxy into our hybrid org. Finally, we install an Apigee hybrid runtime into the EKS cluster, extending the Cassandra ring across the clouds using a seed node.
## Install Steps
1. Have a GCP project and an AWS account ready.
2. Open Cloud Shell in your GCP project.
3. Populate the GCP project and OS Login username variables. This example is for Qwiklabs; change the values as appropriate for your project.

```sh
export PROJECT=$(gcloud projects list | grep qwiklabs-gcp | awk '{print $1}')
export GCP_OS_USERNAME=$(gcloud config get-value account | awk -F@ '{print $1}')
```
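As a standalone illustration of the username extraction, `awk -F@` splits the account email at the `@` sign and keeps the local part. The account value below is a made-up example in place of the live `gcloud config get-value account` output.

```sh
# Hypothetical account value standing in for `gcloud config get-value account`.
account='student-01-xyz@qwiklabs.net'
# Split at '@' and keep the first field: the OS Login username.
GCP_OS_USERNAME=$(printf '%s' "$account" | awk -F@ '{print $1}')
echo "$GCP_OS_USERNAME"   # student-01-xyz
```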
4. Populate the AWS credentials variables.

```sh
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_REGION=us-east-1
```
5. Clone the ahr repo and define the AHR_HOME variable.

```sh
export AHR_HOME=~/ahr
cd ~
git clone https://github.com/apigee/ahr.git
```
6. Define HYBRID_HOME and copy the example files into it.

```sh
export HYBRID_HOME=~/apigee-hybrid-multicloud
mkdir -p $HYBRID_HOME
cp -R $AHR_HOME/examples-mc-gke-eks/. $HYBRID_HOME
```
7. Run the install.

WARNING: The install takes around 40 minutes. If you are using Cloud Shell (which by design is meant for interactive work only), make sure you keep your install session alive, as Cloud Shell has an inactivity timeout. For details, see: https://cloud.google.com/shell/docs/limitations#usage_limits

```sh
cd $HYBRID_HOME
./install.sh |& tee mc-install-`date -u +"%Y-%m-%dT%H:%M:%SZ"`.log
```
## Next Steps
1. Attach your EKS cluster to Anthos by getting a login token.

```sh
CLUSTER_SECRET=$(kubectl --context cluster-eks get serviceaccount anthos-user -o jsonpath='{$.secrets[0].name}')
kubectl --context cluster-eks get secret ${CLUSTER_SECRET} -o jsonpath='{$.data.token}' | base64 --decode
```
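Kubernetes stores secret data base64-encoded, which is why the token must be piped through `base64 --decode` before you can paste it into the Anthos login dialog. The snippet below illustrates just that decoding step with a made-up token value instead of a live cluster.

```sh
# Encode a hypothetical token the way Kubernetes stores secret data...
encoded=$(printf '%s' 'sample-anthos-bearer-token' | base64)
# ...then recover the plain value, as in the kubectl pipeline above.
token=$(printf '%s' "$encoded" | base64 --decode)
echo "$token"   # sample-anthos-bearer-token
```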
2. Look around the $HYBRID_HOME directory.
3. To initialize a session with your project variables:

```sh
source $HYBRID_HOME/source.env
```
4. To find the IP addresses of your jumpboxes:

```sh
cd $HYBRID_HOME/gcp-aws-vpc-infra-tf
terraform output
```
5. You can also define environment variables with the IP addresses:

```sh
source <(terraform output | awk '{printf("export %s=%s\n", toupper($1), $3)}')
echo $AWS_JUMPBOX_IP
echo $GCP_JUMPBOX_IP
```
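The awk one-liner works because `terraform output` prints lines of the form `name = value`: awk sees the name as field 1 and the value as field 3, upper-cases the name, and emits `export NAME=value` lines for the shell to source. Here is a standalone illustration with hypothetical sample output in place of a real Terraform state.

```sh
# Hypothetical `terraform output` lines (name = value).
sample='aws_jumpbox_ip = 3.91.12.34
gcp_jumpbox_ip = 34.75.1.2'
# Upper-case field 1 (the name) and pair it with field 3 (the value).
exports=$(printf '%s\n' "$sample" | awk '{printf("export %s=%s\n", toupper($1), $3)}')
eval "$exports"
echo "$AWS_JUMPBOX_IP"   # 3.91.12.34
echo "$GCP_JUMPBOX_IP"   # 34.75.1.2
```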
6. To log in to the GCP jumpbox:

```sh
gcloud compute ssh vm-gcp --ssh-key-file ~/.ssh/id_gcp --zone $ZONE
```
7. Alternatively, to log in to the GCP jumpbox:

```sh
ssh $GCP_OS_USERNAME@$GCP_JUMPBOX_IP -i ~/.ssh/id_gcp
```
8. To log in to the AWS jumpbox:

```sh
ssh ec2-user@$AWS_JUMPBOX_IP -i ~/.ssh/id_aws
```
9. Review how your configuration files relate to each other.
## Test Requests
Right now there is no global load balancer in front of the two cluster services. That is left as homework for the reader.
1. For the GKE cluster, an external load balancer is configured. To execute a request against the ping proxy deployed into the GKE hybrid instance:

```sh
curl --cacert $RUNTIME_SSL_CERT https://$RUNTIME_HOST_ALIAS/ping -v --resolve "$RUNTIME_HOST_ALIAS:443:$RUNTIME_IP" --http1.1
```
2. For the AWS cluster, an internal network load balancer is configured. Make a note of the EXTERNAL-IP field value.

```sh
kubectl --context cluster-eks get services -n istio-system istio-ingressgateway
```
3. Log in to your AWS jumpbox, then find a RUNTIME_IP value by pinging the FQDN of the internal NLB. By pinging it repeatedly, you will discover all three IP addresses.

```sh
ping <fqdn-of-the-service-EXTERNAL-IP-field-for-ingress-gateway-service>
```
4. To execute a request against the ping proxy, define your helper environment variables and execute a curl command:

```sh
export RUNTIME_HOST_ALIAS=
export RUNTIME_IP=
curl -k https://$RUNTIME_HOST_ALIAS/ping -v --resolve "$RUNTIME_HOST_ALIAS:443:$RUNTIME_IP" --http1.1
```
## Multi-cloud connectivity check between Kubernetes containers
1. In a 'gke' terminal session:

```sh
kubectl --context $R1_CLUSTER run -i --tty busybox --image=busybox --restart=Never -- sh
```
2. In an 'eks' terminal session:

```sh
kubectl --context $R2_CLUSTER run -i --tty busybox --image=busybox --restart=Never -- sh
```
3. In each box, find the pod's IP address:

```sh
hostname -i
```
4. In one box, start an nc-based server. You can always probe a port directly with `nc -v 10.4.0.111 7000` (substituting a target pod IP), but this one is more satisfying:

```sh
while true ; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l -p 7001 ; done
```
5. In the opposite box, connect to the server using the IP address that `hostname -i` reported there (10.4.0.76 is an example):

```sh
nc -v 10.4.0.76 7001
```
6. You can delete the busybox containers now:

```sh
kubectl --context $R1_CLUSTER delete pod busybox
kubectl --context $R2_CLUSTER delete pod busybox
```