Deploy Magma Orchestrator using Helm

caprivm ([email protected])

Description

This page explains all the steps required to deploy the Orchestrator on Kubernetes using Helm. Before following it, you must have built the Orchestrator container images with Docker, since those images are used later during the deployment.

All steps in this section have been tested on a deployment machine that can reach the Kubernetes cluster, with the following requirements.

| Feature              | Value            |
|----------------------|------------------|
| OS Used              | Ubuntu 18.04 LTS |
| vCPU                 | 2                |
| RAM (GB)             | 4                |
| Disk (GB)            | 50               |
| Home user            | ubuntu           |
| Magma version        | v1.6             |
| Kubernetes namespace | magma            |


Prerequisites

Before starting this guide, you should have the required tools installed (at minimum Docker, Helm, and kubectl, which are used throughout this page).

Install the Helm push plugin:

helm plugin install https://github.com/chartmuseum/helm-push.git
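
To confirm the plugin was installed correctly, you can list the installed Helm plugins; this is just a quick sanity check:

helm plugin list
# The chartmuseum push plugin (named push or cm-push depending on the plugin version) should appear in the list.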

NOTE: If you do not have a Kubernetes cluster available, you can create one by following the referenced documentation.

Environment Variables

In this case, set the following environment variables before continuing with the procedure. Note that the GITHUB_* names are kept because the package.sh script expects them, even though in this guide they point to a GitLab repository:

export MAGMA_ROOT=~/magma_v1.6
export MAGMA_TAG=v1.6
export GITHUB_REPO=magma-charts
export GITHUB_REPO_URL=<your_gitlab_repo_url>
export GITHUB_USERNAME=<your_gitlab_username>
export GITHUB_ACCESS_TOKEN=<your_gitlab_access_token>
export GITHUB_PROJECT_ID=<your_gitlab_project_id>
export REGISTRY=registry.gitlab.com/everis_factory/fbc/magma
export REGISTRY_USERNAME=<your_registry_username>
export REGISTRY_ACCESS_TOKEN=<your_registry_access_token>

Build the orchestrator Helm Charts

Reference: Magma documentation section.

In this step, the Orchestrator Helm charts are built and published to a GitLab packages repository.

sudo apt update && sudo apt -y upgrade && sudo apt -y install git
cd && git clone https://github.com/magma/magma.git $MAGMA_ROOT
cd $MAGMA_ROOT
git checkout $MAGMA_TAG
# Verify the branch:
git branch
# master
# * v1.6

Open the $MAGMA_ROOT/orc8r/tools/helm/package.sh script and modify the following lines:

@@ -69,7 +69,7 @@
# Set up repo for charts
mkdir -p ~/magma-charts && cd ~/magma-charts
-git init
+# git init

@@ -106,14 +106,16 @@
# Push charts
-git add . && git commit -m "orc8r charts commit for version $ORC8R_VERSION"
-git config remote.origin.url >&- || git remote add origin $GITHUB_REPO_URL
-git push -u origin master
+# git add . && git commit -m "orc8r charts commit for version $ORC8R_VERSION"
+# git config remote.origin.url >&- || git remote add origin $GITHUB_REPO_URL
+# git push -u origin master

# Ensure push was successful
-helm repo add $GITHUB_REPO --username $GITHUB_USERNAME --password $GITHUB_ACCESS_TOKEN \
-      "https://raw.githubusercontent.com/$GITHUB_USERNAME/$GITHUB_REPO/master/"
+# helm repo add $GITHUB_REPO --username $GITHUB_USERNAME --password $GITHUB_ACCESS_TOKEN \
+#       "https://raw.githubusercontent.com/$GITHUB_USERNAME/$GITHUB_REPO/master/"
+helm repo add --username $GITHUB_USERNAME --password $GITHUB_ACCESS_TOKEN $GITHUB_REPO https://gitlab.com/api/v4/projects/$GITHUB_PROJECT_ID/packages/helm/stable
helm repo update
+for chart in `ls -1 | grep tgz`; do helm push $chart $GITHUB_REPO; done

# The helm command returns 0 even when no results are found. Search for err str instead

These changes are sufficient to generate the Helm packages. Run:

$MAGMA_ROOT/orc8r/tools/helm/package.sh -d all
# Uploaded orc8r charts successfully.

You can verify the upload of your packages to the GitLab Package Registry as shown in the following image:

Uploaded packages
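
You can also check from the command line that the charts were packaged and published. This is a quick sanity check; it assumes the default ~/magma-charts output directory that package.sh creates:

ls -1 ~/magma-charts | grep tgz
# Lists the generated chart packages (orc8r, lte-orc8r, feg-orc8r, ...).
helm repo update
helm search repo $GITHUB_REPO
# The published orc8r, lte-orc8r, and feg-orc8r charts should be listed.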

The package.sh script generates index.yaml, orc8r-x.tgz, lte-orc8r-x.tgz, feg-orc8r-x.tgz, and other packages in the same folder to facilitate the installation process.

Generate the Secrets

The first and perhaps most important step is to generate the certificates correctly. Verify the search path and name server used by one of the on-premises Kubernetes nodes:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
# search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
# nameserver 10.0.0.10
# options ndots:5

Based on this output, the local domain in which the applications will be created is <namespace_name>.svc.cluster.local. The namespace used in this guide is magma.

NOTE: If you use an external DNS, you must configure Kubernetes to use it, and the domain will most likely change; keep that domain in mind for the next steps.
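
Once you have confirmed the search domain, the dnsutils test pod is no longer needed and can be removed:

kubectl delete pod dnsutils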

Assemble the Certificates

First, create a local directory to hold the certificates you will use for your Orchestrator deployment.

mkdir -p $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs
cd $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs
# Generate the controller certificates.
${MAGMA_ROOT}/orc8r/cloud/deploy/scripts/self_sign_certs.sh magma.svc.cluster.local
# Generate the application certificates.
${MAGMA_ROOT}/orc8r/cloud/deploy/scripts/create_application_certs.sh magma.svc.cluster.local
# Generate the NMS certificate.
sudo openssl req -nodes -new -x509 -batch -keyout nms_nginx.key -out nms_nginx.pem -subj "/CN=*.magma.svc.cluster.local"
# create the admin_operator.pfx file, protected with a password of your choosing.
openssl pkcs12 -export -inkey admin_operator.key.pem -in admin_operator.pem -out admin_operator.pfx
# Enter Export Password:
# Verifying - Enter Export Password:

The certs directory should now look like this:

ls -1 $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs
# admin_operator.key.pem
# admin_operator.pem
# admin_operator.pfx
# bootstrapper.key
# certifier.key
# certifier.pem
# controller.crt
# controller.key
# nms_nginx.key
# nms_nginx.pem
# fluentd.key
# fluentd.pem
# rootCA.pem
# rootCA.key

Verify that the certificates are located in $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs; they are used to create a template that is later applied to the Kubernetes cluster.
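
As an optional sanity check, you can confirm that the controller certificate was issued for the expected cluster domain before building the template:

cd $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs
openssl x509 -in controller.crt -noout -subject -issuer -dates
# The subject should reference the magma.svc.cluster.local domain used above.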

Assemble the Configuration Files

To enable the logging functions of the Orchestrator, you need to build some configuration files that point to the EFK stack. These configuration files live in $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/configs/orc8r. In this folder, generate the following files:

analytics.yml
analysisSchedule: 3
exportMetrics: false
metricsPrefix: ""
appSecret: ""
appID: ""
metricExportURL: ""

elastic.yml
elasticHost: elasticsearch-master
elasticPort: 9200

metricsd.yml
prometheusQueryAddress: "http://orc8r-prometheus:9090"
alertmanagerApiURL: "http://orc8r-alertmanager:9093/api/v2"
prometheusConfigServiceURL: "http://orc8r-prometheus-configurer:9100/v1"
alertmanagerConfigServiceURL: "http://orc8r-alertmanager-configurer:9101/v1"
useSeriesCache: true
profile: prometheus

orchestrator.yml
# useGRPCExporter: true
prometheusGRPCPushAddress: "orc8r-prometheus-cache:9092"
prometheusPushAddresses:
  - "http://orc8r-prometheus-cache:9091/metrics"

Validate that you created the files correctly:

ls -1 $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/configs/orc8r
# analytics.yml
# elastic.yml
# metricsd.yml
# orchestrator.yml

Changes in the Secrets Sub-chart Files

Several changes are required in the secrets sub-chart before creating the template. Edit the $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/values.yaml file with:

@@ -48,13 +48,14 @@ 
  # base directory holding config secrets.
  configs:
-    enabled: true
-    orc8r:
-      elastic.yml: |-
-        elasticHost: "elasticsearch-master"
-        elasticPort: 9200
-      analytics.yml: |-
-        analysisSchedule: 3
+    enabled: false
+    # orc8r:
+    # elastic.yml: |-
+    #   elasticHost: "elasticsearch-master"
+    #   elasticPort: 9200
+    # analytics.yml: |-
+    #   exportMetrics: false
+    #    analytics_app_id: magma

Edit the $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/templates/configs-orc8r.secret.yaml with:

@@ -23,15 +23,15 @@
{{- if .Values.secret.configs.enabled }}
# Template the defaults (metrics, orchestrator) first, will be overridden if
# passed in through values file
-{{ $orchestratorTemplate := include "orchestrator-config-template" .}}
-  orchestrator.yml: {{ $orchestratorTemplate | b64enc | quote }}
-{{- if .Values.thanos_enabled }}
-{{ $metricsdTemplate := include "metricsd-thanos-config-template" .}}
-  metricsd.yml: {{ $metricsdTemplate | b64enc | quote }}
-{{- else}}
-{{ $metricsdTemplate := include "metricsd-config-template" .}}
-  metricsd.yml: {{ $metricsdTemplate | b64enc | quote }}
-{{- end}}
+# {{ $orchestratorTemplate := include "orchestrator-config-template" .}}
+#   orchestrator.yml: {{ $orchestratorTemplate | b64enc | quote }}
+# {{- if .Values.thanos_enabled }}
+# {{ $metricsdTemplate := include "metricsd-thanos-config-template" .}}
+#   metricsd.yml: {{ $metricsdTemplate | b64enc | quote }}
+# {{- else}}
+# {{ $metricsdTemplate := include "metricsd-config-template" .}}
+#   metricsd.yml: {{ $metricsdTemplate | b64enc | quote }}
+# {{- end}}

Create the Secrets and Configuration Files Template

Go to the Orchestrator Helm package folder and render the secrets sub-chart template to load the certificates and configuration files:

cd $MAGMA_ROOT/orc8r/cloud/helm/orc8r/
helm template orc8r charts/secrets \
    --namespace magma \
    --set-string secret.certs.enabled=true \
    --set-string secret.configs.enabled=true \
    --set-file secret.certs.files."rootCA\.pem"=charts/secrets/.secrets/certs/rootCA.pem \
    --set-file secret.certs.files."bootstrapper\.key"=charts/secrets/.secrets/certs/bootstrapper.key \
    --set-file secret.certs.files."controller\.crt"=charts/secrets/.secrets/certs/controller.crt \
    --set-file secret.certs.files."controller\.key"=charts/secrets/.secrets/certs/controller.key \
    --set-file secret.certs.files."controller\.csr"=charts/secrets/.secrets/certs/controller.csr \
    --set-file secret.certs.files."rootCA\.key"=charts/secrets/.secrets/certs/rootCA.key \
    --set-file secret.certs.files."rootCA\.srl"=charts/secrets/.secrets/certs/rootCA.srl \
    --set-file secret.certs.files."admin_operator\.pem"=charts/secrets/.secrets/certs/admin_operator.pem \
    --set-file secret.certs.files."admin_operator\.pfx"=charts/secrets/.secrets/certs/admin_operator.pfx \
    --set-file secret.certs.files."admin_operator\.key\.pem"=charts/secrets/.secrets/certs/admin_operator.key.pem \
    --set-file secret.certs.files."certifier\.pem"=charts/secrets/.secrets/certs/certifier.pem \
    --set-file secret.certs.files."certifier\.key"=charts/secrets/.secrets/certs/certifier.key \
    --set-file secret.certs.files."nms_nginx\.pem"=charts/secrets/.secrets/certs/nms_nginx.pem \
    --set-file secret.certs.files."nms_nginx\.key\.pem"=charts/secrets/.secrets/certs/nms_nginx.key \
    --set-file secret.certs.files."fluentd\.pem"=charts/secrets/.secrets/certs/fluentd.pem \
    --set-file secret.certs.files."fluentd\.key"=charts/secrets/.secrets/certs/fluentd.key \
    --set-file secret.configs.orc8r."analytics\.yml"=charts/secrets/.secrets/configs/orc8r/analytics.yml \
    --set-file secret.configs.orc8r."elastic\.yml"=charts/secrets/.secrets/configs/orc8r/elastic.yml \
    --set-file secret.configs.orc8r."metricsd\.yml"=charts/secrets/.secrets/configs/orc8r/metricsd.yml \
    --set-file secret.configs.orc8r."orchestrator\.yml"=charts/secrets/.secrets/configs/orc8r/orchestrator.yml \
    --set=docker.registry=$REGISTRY \
    --set=docker.username=$REGISTRY_USERNAME \
    --set=docker.password=$REGISTRY_ACCESS_TOKEN > ~/secrets_apply_x.yaml
# Apply the template to the cluster.
kubectl apply -f ~/secrets_apply_x.yaml
kubectl get secrets -n magma
# ...
# orc8r-controller                              Opaque                                1      13d
# orc8r-secrets-certs                           Opaque                                16     13d
# orc8r-secrets-configs-cwf                     Opaque                                0      13d
# orc8r-secrets-configs-orc8r                   Opaque                                4      13d
# orc8r-secrets-envdir                          Opaque                                0      13d
# orc8r-secrets-registry                        kubernetes.io/dockerconfigjson        1      13d
# orc8r-service-reader-token-kx9tb              kubernetes.io/service-account-token   3      13d
# ...
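
To double-check that the certificates and configuration files were loaded with the expected keys, you can describe the generated secrets:

kubectl -n magma describe secret orc8r-secrets-certs
# The Data section should list the certificate files passed with --set-file above.
kubectl -n magma describe secret orc8r-secrets-configs-orc8r
# The Data section should list analytics.yml, elastic.yml, metricsd.yml, and orchestrator.yml.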

Install PostgreSQL and Create the PVCs

In version v1.6, the Magma Orchestrator requires PostgreSQL for its operation.

PostgreSQL

Consider the following procedure to install PostgreSQL.

NOTE: Your cluster must have a default storageClass defined (longhorn in this case); otherwise, the PostgreSQL installation will fail.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm repo list
# NAME            URL
# bitnami         https://charts.bitnami.com/bitnami
helm -n magma upgrade --install postgresql --set postgresqlPassword=postgres,postgresqlDatabase=magma,fullnameOverride=postgresql,global.storageClass=longhorn bitnami/postgresql
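
Before continuing, make sure the PostgreSQL pod is ready and the magma database is reachable. This is a quick check; it assumes the statefulset and pod names (postgresql, postgresql-0) created by the Bitnami chart with fullnameOverride=postgresql:

kubectl -n magma rollout status statefulset/postgresql
kubectl -n magma exec -it postgresql-0 -- env PGPASSWORD=postgres psql -U postgres -d magma -c '\conninfo'
# Should report a successful connection to the magma database as user postgres.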

Create the PVCs

The Magma Orchestrator needs the following PVCs to operate:

  • grafanadashboards
  • grafanadata
  • grafanadatasources
  • grafanaproviders
  • promcfg
  • promdata

You can list the storage classes available for PVCs by running:

kubectl get sc
# NAME                  PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
# glusterfs (default)   kubernetes.io/glusterfs   Delete          Immediate           false                  147d
# longhorn (default)    driver.longhorn.io        Delete          Immediate           true                   3d3h

In this case, longhorn is used as the default storage class. Use the following template, named pvc.yaml.template, to create each PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${pvcname}
  namespace: magma
spec:
  accessModes:
  - ${accessmode}
  resources:
    requests:
      storage: ${pvsize}
  storageClassName: ${storageclass}
  volumeMode: Filesystem

Then use the following script to create the PVCs:

#!/usr/bin/env bash

# Syntax: ./make_magma_pvcs.sh NAMESPACE STORAGE_CLASS_NAME ACCESS_MODE
# Example: ./make_magma_pvcs.sh magma longhorn ReadWriteMany

set -e
set +x
set +v
set -o pipefail

function usage() {
  echo
  echo "Syntax: $0 NAMESPACE STORAGE_CLASS_NAME ACCESS_MODE"
  echo "Example: $0 magma longhorn ReadWriteMany"
}

#PVC names and respective sizes
declare -A pvcs
pvcs[grafanadashboards]=2Gi
pvcs[grafanadata]=2Gi
pvcs[grafanadatasources]=2Gi
pvcs[grafanaproviders]=2Gi
pvcs[promcfg]=2Gi
pvcs[promdata]=64Gi

export namespace=$1
export storageclass=$2
export accessmode=$3

if [ -z "$1" ]; then
  echo "Error: Namespace required." 1>&2
  usage
  exit 1
fi

if [ -z "$2" ]; then
  echo "Error: storageClassName required." 1>&2
  usage
  exit 1
fi

if [ -z "$3" ]; then
  export accessmode="ReadWriteMany"
fi

for pvc in "${!pvcs[@]}"; do
  export pvcname=$pvc
  export pvsize="${pvcs[$pvc]}"
  echo "Creating pvc $pvcname size $pvsize in namespace $namespace..."
  envsubst < pvc.yaml.template | kubectl -n $namespace apply -f -
done

Create the PVCs using:

./make_magma_pvcs.sh magma longhorn ReadWriteMany
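
Confirm that all six PVCs were created and bound before deploying the Orchestrator:

kubectl -n magma get pvc
# grafanadashboards, grafanadata, grafanadatasources, grafanaproviders, promcfg, and promdata should all be in Bound status.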

Deploy the Magma orchestrator

Add the Helm package repository and install the Orchestrator with a custom values.yaml. If the repository is private, include the credentials when adding it:

helm repo add --username $GITHUB_USERNAME --password $GITHUB_ACCESS_TOKEN $GITHUB_REPO https://gitlab.com/api/v4/projects/$GITHUB_PROJECT_ID/packages/helm/stable
helm repo update magma-charts
# Install orchestrator:
helm -n magma upgrade --install orc8r magma-charts/orc8r --values facebook_values_x.yaml

facebook_values_x.yaml
# Copyright 2020 The Magma Authors.

# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

## Global values for NMS sub-chart
nms:
  enabled: true
  secret:
    certs: orc8r-secrets-certs
  imagePullSecrets:
  - name: orc8r-secrets-registry
  magmalte:
    create: true
    service:
      type: ClusterIP
    env:
      api_host: orc8r-nginx-proxy
      # host: 0.0.0.0
      # port: 8081
      mapbox_access_token: ""
      mysql_host: postgresql
      mysql_db: magma
      mysql_user: postgres
      mysql_pass: postgres
      mysql_dialect: postgres
      mysql_port: 5432
      grafana_address: orc8r-user-grafana:3000
    image:
      repository: registry.gitlab.com/everis_factory/fbc/magma/magmalte
      tag: latest
  nginx:
    create: true
    service:
      type: LoadBalancer
    replicas: 1
    deployment:
      spec:
        ssl_cert_name: controller.crt
        ssl_cert_key_name: controller.key

# Reference to one or more secrets to be used when pulling images
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets:
- name: orc8r-secrets-registry

## metrics sub-chart configuration.
metrics:
  enabled: true
  imagePullSecrets:
  - name: orc8r-secrets-registry
  metrics:
    volumes:
      prometheusConfig:
        volumeSpec:
          persistentVolumeClaim:
            claimName: promcfg
      prometheusData:
        volumeSpec:
          persistentVolumeClaim:
            claimName: promdata

  prometheus:
    create: true
    includeOrc8rAlerts: true
    prometheusCacheHostname: "orc8r-prometheus-cache"
    alertmanagerHostname: "orc8r-alertmanager"

  grafana:
    create: false
  prometheusCache:
    create: true
    image:
      repository: docker.io/facebookincubator/prometheus-edge-hub
      tag: 1.1.0
    limit: 500000

  alertmanager:
    create: true

  alertmanagerConfigurer:
    create: true
    image:
      repository: docker.io/facebookincubator/alertmanager-configurer
      tag: 1.0.4
    alertmanagerURL: "orc8r-alertmanager:9093"

  prometheusConfigurer:
    create: true
    image:
      repository: docker.io/facebookincubator/prometheus-configurer
      tag: 1.0.4
    prometheusURL: "orc8r-prometheus:9090"
  thanos:
    enabled: false

  userGrafana:
    create: true
    image:
      repository: grafana/grafana
      tag: 6.6.2
      pullPolicy: IfNotPresent
    volumes:
      dashboardproviders:
        persistentVolumeClaim:
          claimName: grafanaproviders
      dashboards:
        persistentVolumeClaim:
          claimName: grafanadashboards
      datasources:
        persistentVolumeClaim:
          claimName: grafanadatasources
      grafanaData:
        persistentVolumeClaim:
          claimName: grafanadata

# secrets sub-chart configuration.
secrets:
  create: false

# Define which secrets should be mounted by pods.
secret:
  configs:
    orc8r: orc8r-secrets-configs-orc8r
  envdir: orc8r-secrets-envdir
  certs: orc8r-secrets-certs

nginx:
  create: true

  # Configure pod disruption budgets for nginx
  # ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  podDisruptionBudget:
    enabled: true

  # Service configuration.
  service:
    enabled: true
    legacyEnabled: true
    name: orc8r-bootstrapper-nginx
    annotations: {}
    extraAnnotations:
      bootstrapLagacy: {}
      clientcertLegacy: {}
      proxy: {}
    labels: {}
    type: LoadBalancer
    port:
      clientcert:
        port: 8443
        targetPort: 8443
        nodePort: ""
      open:
        port: 8444
        targetPort: 8444
        nodePort: ""
      api:
        port: 443
        targetPort: 9443
        nodePort: ""
      health:
        port: 80
        targetPort: 80
        nodePort: ""
    loadBalancerIP: ""
    loadBalancerSourceRanges: []

  # nginx image
  image:
    repository: registry.gitlab.com/everis_factory/fbc/magma/nginx
    tag: latest
    pullPolicy: IfNotPresent

  # Settings affecting nginx application
  spec:
    # magma controller domain name
    # hostname: "orc8r-nginx-proxy.magma.svc.cluster.local"
    hostname: "controller.magma.svc.cluster.local"
    # when nginx sees a variable in a server_name it needs a resolver
    # by default we'll use kube-dns
    resolver: "coredns.kube-system.svc.cluster.local valid=10s"

  # Number of nginx replicas desired
  replicas: 1

  # Resource limits & requests
  resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

  # Define which Nodes the Pods are scheduled on.
  # ref: https://kubernetes.io/docs/user-guide/node-selection/
  nodeSelector: {}

  # Tolerations for use with node taints
  # ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []

  # Assign nginx to run on specific nodes
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  affinity: {}

controller:
  # Configure pod disruption budgets for controller
  # ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  podDisruptionBudget:
    enabled: true
    # minAvailable: 1
    # maxUnavailable: ""

  # orc8r_base image
  image:
    repository: registry.gitlab.com/everis_factory/fbc/magma/controller
    tag: latest
    pullPolicy: IfNotPresent

  spec:
    # Postgres/mysql configuration
    database:
      # driver: postgres
      # mysql_dialect: psql
      # mysql_db: magma
      db: magma
      # protocol: tcp
      # mysql_host: postgresql
      host: postgresql
      port: 5432
      # mysql_user: postgres
      # mysql_pass: postgres
      user: postgres
      pass: postgres
    service_registry:
      mode: "k8s"

  podAnnotations: {}

  # Number of controller replicas desired
  replicas: 1

  # Resource limits & requests
  resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

  # Define which Nodes the Pods are scheduled on.
  # ref: https://kubernetes.io/docs/user-guide/node-selection/
  nodeSelector: {}

  # Tolerations for use with node taints
  # ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []

  # Assign proxy to run on specific nodes
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  affinity: {}

accessd:
  service:
    labels: {}
    annotations: {}

analytics:
  service:
    labels: {}
    annotations: {}

bootstrapper:
  service:
    type: LoadBalancer
    labels: {}
    annotations: {}

certifier:
  service:
    labels: {}
    annotations: {}

configurator:
  service:
    labels: {}
    annotations: {}

ctraced:
  service:
    labels:
      orc8r.io/obsidian_handlers: "true"
      orc8r.io/swagger_spec: "true"
    annotations:
      orc8r.io/obsidian_handlers_path_prefixes: >
        /magma/v1/networks/:network_id/tracing,

device:
  service:
    labels: {}
    annotations: {}

directoryd:
  service:
    labels: {}
    annotations: {}

dispatcher:
  service:
    labels: {}
    annotations: {}

eventd:
  service:
    labels:
      orc8r.io/obsidian_handlers: "true"
      orc8r.io/swagger_spec: "true"
    annotations:
      orc8r.io/obsidian_handlers_path_prefixes: >
        /magma/v1/networks/:network_id/logs,
        /magma/v1/events,
        /magma/v1/events/:network_id,
        /magma/v1/events/:network_id/about,
        /magma/v1/events/:network_id/about/count,

metricsd:
  service:
    labels:
      orc8r.io/obsidian_handlers: "true"
      orc8r.io/swagger_spec: "true"
    annotations:
      orc8r.io/obsidian_handlers_path_prefixes: >
        /magma/v1/networks/:network_id/alerts,
        /magma/v1/networks/:network_id/metrics,
        /magma/v1/networks/:network_id/prometheus,
        /magma/v1/tenants/:tenant_id/metrics,
        /magma/v1/tenants/targets_metadata,

obsidian:
  service:
    labels: {}
    annotations: {}

orchestrator:
  service:
    labels:
      orc8r.io/analytics_collector: "true"
      orc8r.io/mconfig_builder: "true"
      orc8r.io/metrics_exporter: "true"
      orc8r.io/obsidian_handlers: "true"
      orc8r.io/state_indexer: "true"
      orc8r.io/stream_provider: "true"
      orc8r.io/swagger_spec: "true"
    annotations:
      orc8r.io/state_indexer_types: "directory_record"
      orc8r.io/state_indexer_version: "1"
      orc8r.io/stream_provider_streams: "configs"
      orc8r.io/obsidian_handlers_path_prefixes: >
        /,
        /magma/v1/channels,
        /magma/v1/networks,
        /magma/v1/events,
        /magma/v1/networks/:network_id,
        /magma/v1/about,

service_registry:
  service:
    labels: {}
    annotations: {}

state:
  service:
    labels: {}
    annotations: {}

streamer:
  service:
    labels: {}
    annotations: {}

tenants:
  service:
    labels:
      orc8r.io/obsidian_handlers: "true"
      orc8r.io/swagger_spec: "true"
    annotations:
      orc8r.io/obsidian_handlers_path_prefixes: >
        /magma/v1/tenants,
        /magma/v1/tenants/:tenants_id,

# Set True to create a CloudWatch agent to monitor metrics
cloudwatch:
  create: false

# logging sub-chart configuration.
logging:
  enabled: false

Deploy the Magma orchestrator LTE and FeG sub-charts

As mentioned before, the package.sh script also generates packages such as lte-orc8r-x.tgz and feg-orc8r-x.tgz. These charts must be installed in the same namespace as the Orchestrator. Use the following commands:

helm -n magma install feg-orc8r magma-charts/feg-orc8r --values ~/everis_docs/magma-charts/facebook_values_feg_orc8r_x.yaml
helm -n magma install lte-orc8r magma-charts/lte-orc8r --values ~/everis_docs/magma-charts/facebook_values_lte_orc8r_x.yaml

The files facebook_values_feg_orc8r_x.yaml and facebook_values_lte_orc8r_x.yaml depend on the version of the Orchestrator and can be found in the referenced links.

In both files, change the line repository: <registry>/orc8r_base to repository: registry.gitlab.com/everis_factory/fbc/magma/controller, and use the $MAGMA_TAG variable to select the container version.
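
If you prefer, those edits can be applied with sed. This is only a sketch: it assumes the values files still contain the literal <registry>/orc8r_base placeholder and a tag: latest line for the controller image:

sed -i 's|<registry>/orc8r_base|registry.gitlab.com/everis_factory/fbc/magma/controller|' \
    facebook_values_lte_orc8r_x.yaml facebook_values_feg_orc8r_x.yaml
sed -i "s|tag: latest|tag: $MAGMA_TAG|" \
    facebook_values_lte_orc8r_x.yaml facebook_values_feg_orc8r_x.yaml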

Verify installation

Wait for the deployment of orc8r, lte-orc8r and feg-orc8r charts to take effect and check the status of the pods:

kubectl get pods -n magma
# NAME                                             READY   STATUS      RESTARTS   AGE
# nms-magmalte-685f5dfcc9-9bznh                    1/1     Running     0          24h
# nms-nginx-proxy-5f847bcf84-p86mj                 1/1     Running     0          27h
# orc8r-accessd-774df599d8-9zb62                   1/1     Running     0          24h
# orc8r-alertmanager-5f786b9fdb-8sxjw              1/1     Running     0          13d
# orc8r-alertmanager-configurer-7564f99f67-d2vwh   1/1     Running     0          13d
# orc8r-analytics-68dc78c88b-8k96v                 1/1     Running     0          24h
# orc8r-base-acct-5c5d57d69c-n2wjg                 1/1     Running     0          24h
# orc8r-bootstrapper-5c8b97696c-mnvcx              1/1     Running     0          24h
# orc8r-certifier-6f74754f6b-qt9p7                 1/1     Running     0          24h
# orc8r-configurator-7795d89f46-gdbt4              1/1     Running     0          24h
# orc8r-ctraced-f64c56d5b-zrws2                    1/1     Running     0          24h
# orc8r-device-5b846bc4f-5j6jj                     1/1     Running     0          24h
# orc8r-directoryd-764db666b-4c65r                 1/1     Running     0          24h
# orc8r-dispatcher-7845d79d7f-vbj2r                1/1     Running     0          24h
# orc8r-eventd-774846d46d-rngz9                    1/1     Running     0          22h
# orc8r-feg-57469d85f7-zkh2d                       1/1     Running     0          24h
# orc8r-feg-relay-7bc8895974-bq5sx                 1/1     Running     0          24h
# orc8r-ha-86bf884f5b-z8h6x                        1/1     Running     0          24h
# orc8r-health-76ffd66df5-d6hfp                    1/1     Running     0          24h
# orc8r-lte-d8c499489-x6nlx                        1/1     Running     0          24h
# orc8r-metricsd-5c796df555-67zpd                  1/1     Running     0          24h
# orc8r-nginx-6bd49fd84c-mbtfb                     1/1     Running     0          24h
# orc8r-nprobe-74b58fb756-lnvkv                    1/1     Running     0          24h
# orc8r-obsidian-8544766cff-6btlt                  1/1     Running     0          21h
# orc8r-orchestrator-849cf994dd-pqdxp              1/1     Running     0          24h
# orc8r-policydb-c89f46d58-mq5kq                   1/1     Running     0          24h
# orc8r-prometheus-57654765d9-2jtjv                1/1     Running     0          27h
# orc8r-prometheus-cache-9d8d9dcfc-h5hfz           1/1     Running     0          13d
# orc8r-prometheus-configurer-65fb9c996-68r2s      1/1     Running     0          13d
# orc8r-service-registry-59b9667954-2lqgw          1/1     Running     0          24h
# orc8r-smsd-55b654f7c6-8zghk                      1/1     Running     0          24h
# orc8r-state-6695b788bf-qmvvl                     1/1     Running     0          24h
# orc8r-streamer-5676667885-5tpd5                  1/1     Running     0          24h
# orc8r-subscriberdb-5bfc4bb76b-kcckx              1/1     Running     0          24h
# orc8r-subscriberdb-cache-9474f5dd9-7xcb8         1/1     Running     0          24h
# orc8r-tenants-684f5d96df-mkwlr                   1/1     Running     0          24h
# orc8r-user-grafana-7c7dfb7dd8-t8fbj              1/1     Running     0          13d
# postgresql-0                                     1/1     Running     0          13d

Post-install configuration

Based on the official documentation, it is necessary to create an Orchestrator admin user:

# Create Controller Certificate
export ORC_POD=$(kubectl --namespace magma get pod -l app.kubernetes.io/component=orchestrator -o jsonpath='{.items[0].metadata.name}')
kubectl -n magma exec -it ${ORC_POD} -- /var/opt/magma/bin/accessc add-existing -admin -cert /var/opt/magma/certs/admin_operator.pem admin_operator
kubectl -n magma exec -it ${ORC_POD} -- /var/opt/magma/bin/accessc list-certs     # <-- Verify the admin user was successfully created. 
# Serial Number: 83550F07322CEDCD; Identity: Id_Operator_admin_operator; Not Before: 2020-06-26 22:39:55 +0000 UTC; Not After: 2030-06-24 22:39:55 +0000 UTC

Next, create an NMS admin user, for both the master and magma-test organizations:

# Create Admin NMS User
export NMS_POD=$(kubectl -n magma get pod -l  app.kubernetes.io/component=magmalte -o jsonpath='{.items[0].metadata.name}')
kubectl -n magma exec -it ${NMS_POD} -- yarn setAdminPassword master <USER_EMAIL> <PASSWORD>
kubectl -n magma exec -it ${NMS_POD} -- yarn setAdminPassword magma-test <USER_EMAIL> <PASSWORD>

Access to NMS and API Controller

Once the Orchestrator is deployed, you can access the NMS and the API. List the services running in the cluster using kubectl -n magma get svc:

kubectl get svc -n magma
# NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                    AGE
# magmalte                                                 ClusterIP      10.233.61.51    <none>          8081/TCP                                                   75d
# nginx-proxy                                              LoadBalancer   10.233.39.105   192.168.1.245   443:31421/TCP                                              75d
# orc8r-accessd                                            ClusterIP      10.233.55.25    <none>          9180/TCP                                                   75d
# orc8r-alertmanager                                       ClusterIP      10.233.61.14    <none>          9093/TCP                                                   75d
# orc8r-alertmanager-configurer                            ClusterIP      10.233.47.7     <none>          9101/TCP                                                   75d
# orc8r-analytics                                          ClusterIP      10.233.48.19    <none>          9180/TCP                                                   75d
# orc8r-base-acct                                          ClusterIP      10.233.26.221   <none>          9180/TCP                                                   64d
# orc8r-bootstrapper                                       ClusterIP      10.233.48.212   <none>          9180/TCP                                                   75d
# orc8r-bootstrapper-nginx                                 LoadBalancer   10.233.38.69    192.168.1.249   80:31458/TCP,443:32498/TCP,8444:31913/TCP                  19d
# orc8r-certifier                                          ClusterIP      10.233.28.240   <none>          9180/TCP                                                   75d
# orc8r-clientcert-nginx                                   LoadBalancer   10.233.22.151   192.168.1.246   80:30318/TCP,443:30960/TCP,8443:32686/TCP                  75d
# orc8r-configurator                                       ClusterIP      10.233.22.102   <none>          9180/TCP                                                   75d
# orc8r-csfb                                               ClusterIP      10.233.2.68     <none>          9180/TCP                                                   75d
# orc8r-ctraced                                            ClusterIP      10.233.55.216   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-device                                             ClusterIP      10.233.42.114   <none>          9180/TCP                                                   75d
# orc8r-directoryd                                         ClusterIP      10.233.40.192   <none>          9180/TCP                                                   75d
# orc8r-dispatcher                                         ClusterIP      10.233.33.254   <none>          9180/TCP                                                   75d
# orc8r-eventd                                             ClusterIP      10.233.36.39    <none>          9180/TCP,8080/TCP                                          75d
# orc8r-feg                                                ClusterIP      10.233.28.6     <none>          9180/TCP,8080/TCP                                          75d
# orc8r-feg-hello                                          ClusterIP      10.233.39.26    <none>          9180/TCP                                                   75d
# orc8r-feg-relay                                          ClusterIP      10.233.11.197   <none>          9180/TCP                                                   75d
# orc8r-ha                                                 ClusterIP      10.233.9.71     <none>          9180/TCP                                                   75d
# orc8r-health                                             ClusterIP      10.233.17.31    <none>          9180/TCP                                                   75d
# orc8r-lte                                                ClusterIP      10.233.58.145   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-metricsd                                           ClusterIP      10.233.2.149    <none>          9180/TCP,8080/TCP                                          75d
# orc8r-nginx-proxy                                        LoadBalancer   10.233.10.80    192.168.1.248   80:30307/TCP,8443:30464/TCP,8444:32279/TCP,443:32745/TCP   75d
# orc8r-nprobe                                             ClusterIP      10.233.3.100    <none>          9180/TCP,8080/TCP                                          14d
# orc8r-obsidian                                           ClusterIP      10.233.54.248   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-ocs                                                ClusterIP      10.233.28.119   <none>          9180/TCP                                                   75d
# orc8r-orchestrator                                       ClusterIP      10.233.21.250   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-pcrf                                               ClusterIP      10.233.31.133   <none>          9180/TCP                                                   75d
# orc8r-policydb                                           ClusterIP      10.233.42.210   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-prometheus                                         ClusterIP      10.233.15.146   <none>          9090/TCP                                                   75d
# orc8r-prometheus-cache                                   ClusterIP      10.233.15.26    <none>          9091/TCP,9092/TCP                                          75d
# orc8r-prometheus-configurer                              ClusterIP      10.233.28.196   <none>          9100/TCP                                                   75d
# orc8r-s6a-proxy                                          ClusterIP      10.233.51.84    <none>          9180/TCP                                                   75d
# orc8r-s8-proxy                                           ClusterIP      10.233.36.102   <none>          9180/TCP                                                   75d
# orc8r-service-registry                                   ClusterIP      10.233.3.182    <none>          9180/TCP                                                   75d
# orc8r-session-proxy                                      ClusterIP      10.233.34.133   <none>          9180/TCP                                                   75d
# orc8r-smsd                                               ClusterIP      10.233.55.223   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-state                                              ClusterIP      10.233.13.33    <none>          9180/TCP                                                   75d
# orc8r-streamer                                           ClusterIP      10.233.13.172   <none>          9180/TCP                                                   75d
# orc8r-subscriberdb                                       ClusterIP      10.233.35.229   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-subscriberdb-cache                                 ClusterIP      10.233.40.20    <none>          9180/TCP,8080/TCP                                          14d
# orc8r-swx-proxy                                          ClusterIP      10.233.41.242   <none>          9180/TCP                                                   75d
# orc8r-tenants                                            ClusterIP      10.233.45.165   <none>          9180/TCP,8080/TCP                                          75d
# orc8r-user-grafana                                       ClusterIP      10.233.28.245   <none>          3000/TCP                                                   75d
# postgresql                                               ClusterIP      10.233.38.145   <none>          5432/TCP                                                   75d
# postgresql-headless                                      ClusterIP      None            <none>          5432/TCP                                                   75d

Access to NMS

Now, the IP address that redirects you to the UI is the CLUSTER-IP or EXTERNAL-IP of the nginx-proxy service. To access it, use a tunnel or a load balancer service. Here is an example using an SSH tunnel with a cluster node as the jump host. Consider that:

  • Jump host IP: 10.30.23.6
  • Jump host user: ubuntu
  • nginx-proxy CLUSTER-IP: 10.233.10.159
  • Port to redirect: 443

In your host, use:

ssh ubuntu@10.30.23.6 -L 443:10.233.10.159:443

With the tunnel established, add the following lines to your /etc/hosts file:

127.0.0.1   master.magma.svc.cluster.local
127.0.0.1   magma-test.magma.svc.cluster.local
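
Before opening the browser, you can confirm that the tunnel and the /etc/hosts entries work with a quick request (an optional check):

curl -k -I https://magma-test.magma.svc.cluster.local
# Any HTTP response (for example 200 or a redirect) indicates the NMS is reachable through the tunnel.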

In your browser (tested in Firefox), go to https://magma-test.magma.svc.cluster.local. You should see the following UI:

NMS UI

Now log in to the NMS UI with the <USER_EMAIL> and <PASSWORD> that you specified in the Post-install configuration section. If the URL is https://master.magma.svc.cluster.local, you should see the following:

Master NMS_UI

You can access the organizations UI by going to https://magma-test.magma.svc.cluster.local, using the same credentials. On the screen you should see the following:

Master NMS_UI

Access to API Controller

To access the API, use a tunnel or a load balancer service. Here is an example using an SSH tunnel with a cluster node as the jump host. Consider that:

  • Jump host IP: 10.30.23.6
  • Jump host user: ubuntu
  • orc8r-nginx-proxy CLUSTER-IP: 10.233.62.93
  • Port to redirect: 443

In your host, use:

ssh ubuntu@10.30.23.6 -L 443:10.233.62.93:443

Please note that you must import the admin_operator.pfx certificate into your browser. The certificate is in the $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs folder on the deployment machine where the Orchestrator images were built, and its password is the export password you chose when creating admin_operator.pfx (magma in this guide). When you access https://magma-test.magma.svc.cluster.local/apidocs/v1/ you should see the following interface:

magma API interface

For an idea of how the API can be used, check the linked Magma documentation; although it focuses on the Federation Gateway, it illustrates typical API usage.
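
As a minimal example, once the tunnel and /etc/hosts entries described above are in place, you can call the API with curl using the client certificate generated earlier. This is a sketch; the host and endpoint assume the same setup shown in this section:

cd $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/secrets/.secrets/certs
curl -k --cert admin_operator.pem --key admin_operator.key.pem \
    https://magma-test.magma.svc.cluster.local/magma/v1/networks
# Returns the list of configured networks (an empty list, [], on a fresh deployment).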

Upgrade process

If you change facebook_values_x.yaml or want to use a different values.yaml, upgrade your deployment using:

helm -n magma upgrade --install orc8r magma-charts/orc8r --values <path_to_new_values>
helm -n magma upgrade --install lte-orc8r magma-charts/lte-orc8r --values <path_to_new_values>
helm -n magma upgrade --install feg-orc8r magma-charts/feg-orc8r --values <path_to_new_values>

Cleaning process

One advantage of Helm is that you can quickly clean up a deployment using the following command:

helm -n magma delete <deployment>

Troubleshooting

This section documents the known issues when deploying the Orchestrator on an on-premises Kubernetes cluster.

storeconfig-job.yaml issue in metrics pod deployment

On an initial Orchestrator deployment, you might see a pod named orc8r-metrics-storeconfig-<id> failing with the following error:

kubectl logs orc8r-metrics-storeconfig-g7p2l -n magma
# fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
# ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.14/main: temporary error (try again later)
# WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/main: No such file or directory
# fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
# ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.14/community: temporary error (try again later)
# WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/community: No such file or directory
# 2 errors; 14 distinct packages available
# cp: unrecognized option: n
# BusyBox v1.33.1 () multi-call binary.

The issue is that, when the Helm package is built using the $MAGMA_ROOT/orc8r/tools/helm/package.sh -d all script, the template $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/metrics/templates/storeconfig-job.yaml uses the cp -n option, which the BusyBox cp in the Alpine image does not support. Change:

@@ -36,9 +36,9 @@
             - -c
             - |
               apk update && apk add --no-cache coreutils
-              cp -n /mnt/defaults/alertmanager.yml /mnt/configs/
+              cp -u /mnt/defaults/alertmanager.yml /mnt/configs/
               mkdir -p /mnt/configs/alert_rules && chmod +x /mnt/configs/alert_rules
-              cp -n /mnt/defaults/*rules.yml /mnt/configs/alert_rules/
+              cp -u /mnt/defaults/*rules.yml /mnt/configs/alert_rules/
           volumeMounts:
             - name: defaults
               mountPath: /mnt/defaults
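
If you prefer, the same change can be applied with a one-line sed before re-running the packaging script (a sketch, assuming the default repository layout):

sed -i 's/cp -n /cp -u /g' $MAGMA_ROOT/orc8r/cloud/helm/orc8r/charts/metrics/templates/storeconfig-job.yaml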

After making this change, rerun the $MAGMA_ROOT/orc8r/tools/helm/package.sh -d all script and reinstall the Orchestrator.
