Running on a native Google (GCP) Kubernetes Cluster - phnmnl/phenomenal-h2020 GitHub Wiki
To run on a GCP-native Kubernetes cluster:

- Create the Kubernetes cluster from the Google Cloud console or through the `gcloud` CLI.
- Go to the "Connect" section of the GCP console; it provides a `gcloud` CLI command that configures `kubectl` to connect to the cluster.
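The first two steps can also be done entirely from the CLI. A minimal sketch, where the cluster name, zone, and node count are illustrative placeholders to adjust for your project:

```shell
# Create a GKE cluster (name, zone and size are illustrative placeholders)
gcloud container clusters create phenomenal-cluster \
  --zone europe-west1-b \
  --num-nodes 3

# Fetch credentials and configure kubectl to talk to the new cluster
# (this is the command the "Connect" section of the console shows)
gcloud container clusters get-credentials phenomenal-cluster \
  --zone europe-west1-b

# Verify the connection
kubectl get nodes
```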
- Create the following ServiceAccount and ClusterRoleBinding for Helm's Tiller (`kubectl create -f <file>`, where the file has the following content):
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
- Run `helm init` on a machine where you have previously added the `galaxy-helm-repo` Helm repository.
- You might need to point the Tiller deployment at the new service account:

  ```shell
  kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
  ```
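Assuming the RBAC manifest above was saved as `tiller-rbac.yaml` (a hypothetical filename), the Tiller setup might look like the sketch below. Passing `--service-account` to `helm init` (Helm v2) binds Tiller to the service account at install time, which avoids having to patch the deployment afterwards:

```shell
# Create the service account and cluster role binding for Tiller
kubectl create -f tiller-rbac.yaml

# Install Tiller bound to that service account (Helm v2)
helm init --service-account tiller

# Wait until the Tiller pod is ready
kubectl rollout status deployment tiller-deploy --namespace kube-system
```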
- Create a single-node file system on GCP: https://console.cloud.google.com/launcher/details/click-to-deploy-images/singlefs?project=phenomenal-gcp-testing&folder&organizationId checking that NFS serving is activated.
- More documentation here: https://cloud.google.com/launcher/docs/single-node-fileserver?hl=en_GB&_ga=2.9020275.-756831591.1515154233
- Once deployed, add the `no_root_squash` option to `/etc/exports` (you will need to SSH into the machine through the Google console or CLI, and `sudo su`).
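After editing, the `/data` entry in `/etc/exports` might look like the line below; the client specification `*` and the other options are assumptions, so keep whatever your deployment generated and only add `no_root_squash`. Run `sudo exportfs -ra` afterwards so the NFS server re-exports the shares without a reboot:

```
/data *(rw,no_root_squash)
```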
- Create a PV pointing to the NFS server just created:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  storageClassName: standard
  capacity:
    storage: 20Gi
  # volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data
    server: singlefs-1-vm
```
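The PV can then be created and checked with `kubectl`, assuming the manifest above was saved as `pv-nfs.yaml` (a hypothetical filename):

```shell
# Create the PersistentVolume from the manifest above
kubectl create -f pv-nfs.yaml

# Confirm it registered and shows as Available for binding
kubectl get pv pv-nfs
```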
- Run our helm install process.
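If the `galaxy-helm-repo` repository has not been added on this machine yet, the final step might look like the sketch below; the repository URL, chart name, and release name are all placeholders, as this page does not specify them:

```shell
# Add the repository if not done already (replace <repo-url> with the
# actual galaxy-helm-repo URL, which is not shown on this page)
helm repo add galaxy-helm-repo <repo-url>
helm repo update

# Install the chart (chart and release names are placeholders)
helm install galaxy-helm-repo/<chart-name> --name phenomenal
```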