Install Needed Plugins for OSM CNF Integration
caprivm ([email protected])
This page shows how to install the plugins needed by a Kubernetes cluster added to OSM. The installation and configuration tasks were done on a server that serves as the deployment machine, with the following characteristics:
| Feature | Value |
|---|---|
| OS Used | Ubuntu 18.04 LTS |
| vCPU | 2 |
| RAM (GB) | 4 |
| Disk (GB) | 50 |
| Home user | ubuntu |
Before executing the step-by-step instructions in this guide, make sure the deployment machine from which you manage the cluster has the cluster management tools installed (the `kubectl` and `helm` clients are the ones used throughout this guide).
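As a quick check, you can verify both clients from the deployment machine. This is a minimal sketch, assuming `kubectl` and the Helm v2 client are the tools in use (Tiller itself is set up at the end of this guide):

```bash
# Verify the management tools are available on the deployment machine
kubectl version --client
helm version --client   # Helm v2 client; the server side (Tiller) comes later
```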
In order for the Kubernetes cluster to be integrated with OSM, some plugins need to be available.
The cluster needs to have the `kube-flannel` CNI plugin for its operations:
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
After applying the configuration, check the status of the Flannel pods:
```bash
kubectl get pods -A | grep flan
# kube-system   kube-flannel-ds-8gzbg   1/1   Running   0   2d7h
# kube-system   kube-flannel-ds-jnfz7   1/1   Running   0   2d7h
# kube-system   kube-flannel-ds-tvdtn   1/1   Running   0   2d7h
```
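Optionally, you can also confirm that Flannel brought the pod network up by checking the node status; without a working CNI, nodes stay `NotReady`:

```bash
# All nodes should report STATUS "Ready" once the CNI is running
kubectl get nodes
```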
MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type `LoadBalancer` in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load balancers. When kube-proxy runs in IPVS mode, MetalLB requires strict ARP to be enabled, so first edit the kube-proxy ConfigMap:
```bash
kubectl edit configmap -n kube-system kube-proxy
```
And set:
```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
```
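If you prefer not to edit the ConfigMap interactively, the same change can be made non-interactively. This is a sketch of the `sed`-based approach suggested in the MetalLB documentation:

```bash
# Enable strictARP in the kube-proxy ConfigMap without opening an editor
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
```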
Now install MetalLB itself:
```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```
Next, define the pool of IP addresses that MetalLB can allocate to `LoadBalancer` services:
```bash
mkdir ~/metallb && cd ~/metallb
sudo vi metallb_config.yaml
```
The `metallb_config.yaml` file must contain:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.100.230-10.0.100.240  # Configure your IP range
```
Next, apply the configuration:
```bash
kubectl apply -f ~/metallb/metallb_config.yaml
```
You can verify the status of the MetalLB service with:
```bash
kubectl get pods -n metallb-system
# NAME                          READY   STATUS    RESTARTS   AGE
# controller-64f86798cc-8bxxs   1/1     Running   0          2d7h
# speaker-lgsdv                 1/1     Running   0          2d7h
# speaker-th2sh                 1/1     Running   0          2d7h
# speaker-v7jf2                 1/1     Running   0          2d7h
```
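To confirm that MetalLB actually hands out addresses from the pool, you can run a quick smoke test. This is a sketch with a throwaway nginx deployment; the `lb-test` names are arbitrary and not part of the OSM procedure:

```bash
# Create a throwaway deployment and expose it as a LoadBalancer service
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer

# The EXTERNAL-IP should come from the configured pool (10.0.100.230-240)
kubectl get service lb-test

# Clean up the test resources
kubectl delete service lb-test
kubectl delete deployment lb-test
```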
Another configuration your Kubernetes cluster needs is a default `storageClass`. A Kubernetes persistent-volume provisioner, OpenEBS in this case, can be installed on your cluster by applying the following manifest:
```bash
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v2.9.0/k8s/openebs-operator.yaml
```
Wait until the OpenEBS pods are running (`kubectl get pods -n openebs`). Next, tag OpenEBS as the default `storageClass`. First, list the storage classes that were created:
```bash
kubectl get storageclass
# NAME                        PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
# openebs-device              openebs.io/local                                            Delete          WaitForFirstConsumer   false                  2d7h
# openebs-hostpath            openebs.io/local                                            Delete          WaitForFirstConsumer   false                  2d7h
# openebs-jiva-default        openebs.io/provisioner-iscsi                                Delete          Immediate              false                  2d7h
# openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  2d7h
```
So far, no default storageClass is defined. The command below marks `openebs-hostpath` as the default `storageClass`:
```bash
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
To check that the `storageClass` definition was applied correctly, use the following command:
```bash
kubectl get storageclass
# NAME                         PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
# openebs-device               openebs.io/local                                            Delete          WaitForFirstConsumer   false                  2d7h
# openebs-hostpath (default)   openebs.io/local                                            Delete          WaitForFirstConsumer   false                  2d7h
# openebs-jiva-default         openebs.io/provisioner-iscsi                                Delete          Immediate              false                  2d7h
# openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  2d7h
```
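To confirm the default `storageClass` works end to end, you can create a test claim. This is a sketch with hypothetical resource names (`default-sc-test`), not part of the OSM procedure; note that `openebs-hostpath` uses `WaitForFirstConsumer` binding, so the PVC is only bound once a pod actually mounts it:

```bash
# Hypothetical PVC + pod to confirm the default storageClass provisions volumes
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-sc-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: default-sc-test-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: default-sc-test
EOF

# The PVC binds once the pod is scheduled (volumeBindingMode: WaitForFirstConsumer)
kubectl get pvc default-sc-test

# Clean up the test resources
kubectl delete pod default-sc-test-pod
kubectl delete pvc default-sc-test
```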
Finally, for Kubernetes clusters newer than 1.15, Tiller (the in-cluster component of Helm v2) needs special permissions, which can be granted with the following command:
```bash
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
```
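With that binding in place, Tiller can run under the default service account in `kube-system`. A minimal sketch of deploying and checking it, assuming the Helm v2 client is installed on the deployment machine:

```bash
# Deploy Tiller into kube-system (Helm v2 only; Helm v3 has no Tiller)
helm init

# Wait for the tiller pod, then confirm the client and server can talk
kubectl get pods -n kube-system | grep tiller
helm version
```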