How to configure Prometheus and MinIO in k8s using Prometheus Operator - minio/wiki GitHub Wiki

Objective

This document provides the steps for setting up MinIO and Prometheus in k8s, deploying Prometheus via the Prometheus Operator.

Steps

1. Create a k8s cluster using KIND

$ kind create cluster --config kind-config.yaml

where the content of kind-config.yaml is

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
  - role: worker
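
The config above can be written and sanity-checked from the shell; a small sketch (the grep checks are only illustrative):

```shell
# Write the KIND config shown above: one control-plane node and four workers.
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
  - role: worker
EOF

# Sanity check: exactly one control-plane and four worker entries.
grep -c 'role: control-plane' kind-config.yaml   # prints 1
grep -c 'role: worker' kind-config.yaml          # prints 4
```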

2. Deploy MinIO operator using kubectl MinIO plugin

$ kubectl minio init

make sure the kubectl minio plugin is installed and available locally before running this command

3. Deploy a MinIO tenant

$ kubectl create ns tenant-ns
$ kubectl minio tenant create tenant1 --servers 4 --volumes 16 --capacity 16Gi --namespace tenant-ns
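
The flag values are related: --volumes must be a multiple of --servers, and --capacity is (assuming it is treated as the total raw capacity, as the operator does) spread across all volumes. A quick sketch of the arithmetic for the values above:

```shell
# Breakdown of the tenant sizing flags used above (illustrative only).
SERVERS=4; VOLUMES=16; CAPACITY_GI=16
echo "volumes per server: $((VOLUMES / SERVERS))"        # 4 PVCs per pod
echo "capacity per volume: $((CAPACITY_GI / VOLUMES))Gi" # each PV is 1Gi
```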

wait for MinIO tenant pods to come online

$ kubectl get pods -n tenant-ns 
NAME             READY   STATUS    RESTARTS   AGE
tenant1-ss-0-0   2/2     Running   0          96m
tenant1-ss-0-1   2/2     Running   0          96m
tenant1-ss-0-2   2/2     Running   0          96m
tenant1-ss-0-3   2/2     Running   0          96m

4. Deploy a debug pod for running mc and communicating with MinIO server

$ kubectl apply -f debug-pod.yaml

where the content of debug-pod.yaml could be

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-pod
  labels:
    app: ubuntu
spec:
  containers:
  - image: ubuntu
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: ubuntu
  restartPolicy: Always

5. Set up the debug pod with mc and create an alias

$ kubectl exec -it ubuntu-pod -- bash

root@ubuntu-pod:/# apt update && apt install -y wget curl jq
root@ubuntu-pod:/# wget https://dl.min.io/client/mc/release/linux-amd64/mc
root@ubuntu-pod:/# chmod +x mc
root@ubuntu-pod:/# mv mc /usr/local/bin/
root@ubuntu-pod:/# mc -v
mc version RELEASE.2024-03-25T16-41-14Z (commit-id=7bac47fe04a4a26faa0e8515036f7ff2dfc48c75)
Runtime: go1.21.8 linux/amd64
Copyright (c) 2015-2024 MinIO, Inc.
License GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>

Get the MinIO tenant credentials and create an alias inside the debug pod

$ kubectl get secrets/tenant1-env-configuration -n tenant-ns -oyaml | yq '.data."config.env"' | base64 -d
export MINIO_ROOT_USER="081A8V5ANDHJVYBQGUQN"
export MINIO_ROOT_PASSWORD="Qfdq1xXoA3UjaHgxhvDmXqSMVm9Cu5hDr2EkbtUr"
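
If yq is not available, the same key can be pulled with kubectl's jsonpath and decoded with base64 alone; a sketch (the encoded value is simulated here from the example credentials above so the decoding step can be shown end to end):

```shell
# On a real cluster the encoded value would come from:
#   kubectl get secret tenant1-env-configuration -n tenant-ns \
#     -o jsonpath='{.data.config\.env}'
# Simulate that base64-encoded secret data for this sketch.
SECRET_DATA=$(printf 'export MINIO_ROOT_USER="081A8V5ANDHJVYBQGUQN"\nexport MINIO_ROOT_PASSWORD="Qfdq1xXoA3UjaHgxhvDmXqSMVm9Cu5hDr2EkbtUr"\n' | base64 | tr -d '\n')

# Decode it back into the two export lines.
echo "$SECRET_DATA" | base64 -d
```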

$ kubectl exec -it ubuntu-pod -- bash

root@ubuntu-pod:/# mc alias set myminio https://minio.tenant-ns.svc.cluster.local 081A8V5ANDHJVYBQGUQN Qfdq1xXoA3UjaHgxhvDmXqSMVm9Cu5hDr2EkbtUr

root@ubuntu-pod:/# mc admin info myminio
●  tenant1-ss-0-0.tenant1-hl.tenant-ns.svc.cluster.local:9000
   Uptime: 1 hour 
   Version: 2024-03-15T01:07:19Z
   Network: 4/4 OK 
   Drives: 4/4 OK 
   Pool: 1

●  tenant1-ss-0-1.tenant1-hl.tenant-ns.svc.cluster.local:9000
   Uptime: 1 hour 
   Version: 2024-03-15T01:07:19Z
   Network: 4/4 OK 
   Drives: 4/4 OK 
   Pool: 1

●  tenant1-ss-0-2.tenant1-hl.tenant-ns.svc.cluster.local:9000
   Uptime: 1 hour 
   Version: 2024-03-15T01:07:19Z
   Network: 4/4 OK 
   Drives: 4/4 OK 
   Pool: 1

●  tenant1-ss-0-3.tenant1-hl.tenant-ns.svc.cluster.local:9000
   Uptime: 1 hour 
   Version: 2024-03-15T01:07:19Z
   Network: 4/4 OK 
   Drives: 4/4 OK 
   Pool: 1

Pools:
   1st, Erasure sets: 1, Drives per erasure set: 16

100 MiB Used, 1 Bucket, 100 Objects
16 drives online, 0 drives offline

6. Deploy the Prometheus Operator using kube-prometheus

$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ kubectl create -f manifests/setup
$ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
$ kubectl create -f manifests/

wait for all the pods in the monitoring namespace to come up

$ kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          89m
alertmanager-main-1                    2/2     Running   0          89m
alertmanager-main-2                    2/2     Running   0          89m
blackbox-exporter-66988b997b-f28jp     3/3     Running   0          91m
grafana-5cf7c9c975-lsqzv               1/1     Running   0          91m
kube-state-metrics-694f78fd74-hldpx    3/3     Running   0          90m
node-exporter-c8lqf                    2/2     Running   0          90m
node-exporter-hhwnz                    2/2     Running   0          90m
node-exporter-tvf7p                    2/2     Running   0          90m
node-exporter-wnvhp                    2/2     Running   0          90m
node-exporter-zrhf8                    2/2     Running   0          90m
prometheus-adapter-8597b9c4fc-gb5b8    1/1     Running   0          90m
prometheus-adapter-8597b9c4fc-q2d8d    1/1     Running   0          90m
prometheus-k8s-0                       2/2     Running   0          83m
prometheus-k8s-1                       2/2     Running   0          83m
prometheus-operator-5499b7f696-wxk6m   2/2     Running   0          83m

7. Create a custom additional scrape configuration for collecting metrics from MinIO

Create a Prometheus config file named prometheus-additional.yaml with the content below

- job_name: minio-job
  bearer_token: <token>
  metrics_path: /minio/v2/metrics/cluster
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: [minio.tenant-ns.svc.cluster.local]
- job_name: minio-job-node
  bearer_token: <token>
  metrics_path: /minio/v2/metrics/node
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: [tenant1-ss-0-0.tenant1-hl.tenant-ns.svc.cluster.local:9000,tenant1-ss-0-1.tenant1-hl.tenant-ns.svc.cluster.local:9000,tenant1-ss-0-2.tenant1-hl.tenant-ns.svc.cluster.local:9000,tenant1-ss-0-3.tenant1-hl.tenant-ns.svc.cluster.local:9000]

The bearer token can be obtained by running mc admin prometheus generate myminio inside the debug pod.
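
As a convenience, the file can be written with the token substituted in one step; a sketch assuming the token is in $TOKEN (a placeholder is used so the snippet runs standalone):

```shell
# $TOKEN is assumed to hold the bearer token produced by
# `mc admin prometheus generate myminio`; fall back to a placeholder here.
TOKEN="${TOKEN:-PLACEHOLDER_TOKEN}"

cat > prometheus-additional.yaml <<EOF
- job_name: minio-job
  bearer_token: ${TOKEN}
  metrics_path: /minio/v2/metrics/cluster
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: [minio.tenant-ns.svc.cluster.local]
- job_name: minio-job-node
  bearer_token: ${TOKEN}
  metrics_path: /minio/v2/metrics/node
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: [tenant1-ss-0-0.tenant1-hl.tenant-ns.svc.cluster.local:9000,tenant1-ss-0-1.tenant1-hl.tenant-ns.svc.cluster.local:9000,tenant1-ss-0-2.tenant1-hl.tenant-ns.svc.cluster.local:9000,tenant1-ss-0-3.tenant1-hl.tenant-ns.svc.cluster.local:9000]
EOF

grep -c 'job_name' prometheus-additional.yaml   # prints 2
```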

Now generate a secret manifest from the above Prometheus configuration and apply it in the monitoring namespace

$ kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run=client -oyaml > additional-scrape-configs.yaml

$ kubectl apply -f additional-scrape-configs.yaml -n monitoring

Finally, edit the Prometheus custom resource to use this additional scrape configuration

$ kubectl edit prometheus -n monitoring

The required changes are shown below

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  creationTimestamp: "2024-03-27T10:38:09Z"
  generation: 2
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.50.1
  name: k8s
  namespace: monitoring
  resourceVersion: "4813"
  uid: 92870ef5-54ff-4587-b789-1284e4fed0e4
spec:
  additionalScrapeConfigs:
    key: prometheus-additional.yaml
    name: additional-scrape-configs
  serviceMonitorSelector:   # needed: an empty value errors out
    matchLabels:
      foo: bar

8. Access the Prometheus console

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090

you can now access the Prometheus console at http://localhost:9090/
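
Once the port-forward is running, the scraped MinIO metrics can also be queried over Prometheus's HTTP API; a sketch (curl and jq are assumed available, and minio_cluster_nodes_online_total is used as an example cluster metric):

```shell
# Build the query URL for an example MinIO cluster metric; the actual
# request (commented out) needs the port-forward from the previous step.
QUERY='minio_cluster_nodes_online_total'
echo "http://localhost:9090/api/v1/query?query=${QUERY}"
# curl -s "http://localhost:9090/api/v1/query?query=${QUERY}" | jq '.data.result'
```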
