Prometheus Stack

Install the Prometheus stack

Installs the core components of kube-prometheus-stack: a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, that provides easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

Before you start

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm pull prometheus-community/kube-prometheus-stack --untar

Change the config in kube-prometheus-stack/values.yaml

Change the node-exporter service type

type: LoadBalancer

Change the Grafana service type

service:
    portName: http-web
    ipFamilies: []
    ipFamilyPolicy: ""
    type: "LoadBalancer"

Change the Prometheus service type

    ## Service type
    ##
    type: LoadBalancer

Install

helm install kube-prometheus-stack --namespace monitoring --create-namespace --wait -f kube-prometheus-stack/values.yaml ./kube-prometheus-stack
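
A quick sanity check after the install is to confirm that all pods come up and that the Grafana, Prometheus, and node-exporter services received external IPs from the LoadBalancer:

kubectl get pods -n monitoring
kubectl get svc -n monitoring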

Get the Grafana admin password

grep adminPassword: kube-prometheus-stack/values.yaml
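
Alternatively, if the password was not changed in values.yaml, it can be read from the secret created by the Grafana sub-chart (the secret name below assumes the release name kube-prometheus-stack):

kubectl get secret -n monitoring kube-prometheus-stack-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo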

Get the matchLabels that Prometheus uses to select ServiceMonitors

kubectl get prometheus -n monitoring kube-prometheus-stack-prometheus -o json | jq .spec.serviceMonitorSelector
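
With the default chart values this selector typically resolves to the Helm release label, which is why the ServiceMonitor in the example below carries release: kube-prometheus-stack:

{
  "matchLabels": {
    "release": "kube-prometheus-stack"
  }
}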

Install the Prometheus adapter

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm pull prometheus-community/prometheus-adapter
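
Optionally, dump the chart's default values to see what can be overridden (the output file name here is arbitrary):

helm show values prometheus-community/prometheus-adapter > prometheus-adapter-values.yaml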

Install the adapter, setting prometheus.url to the in-cluster Prometheus service

helm -n monitoring upgrade --install prometheus-adapter prometheus-community/prometheus-adapter --set prometheus.url=kube-prometheus-stack-prometheus.monitoring.svc
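
Once the adapter pod is ready, it registers the custom metrics API; listing the API root is a quick sanity check (requires jq). The HPA example below relies on a pods/http_requests metric, which the adapter is expected to derive from the application's http_requests_total counter via its rules configuration:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .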

Example: HPA autoscaling setup for the whoami app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - image: bee42/whoami:2.2.0
        name: whoami
        ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: whoami
  name: whoami
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: whoami
  type: LoadBalancer
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: whoami
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      service: whoami
  endpoints:
  - port: http
---      
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: hpa-whoami
spec:
  scaleTargetRef:
    # point the HPA at the sample application
    # you created above
    apiVersion: apps/v1
    kind: Deployment
    name: whoami
  # autoscale between 1 and 10 replicas
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # use a "Pods" metric, which takes the average of the
  # given metric across all pods controlled by the autoscaling target
  - type: Pods
    pods:
      # use the pods/http_requests metric served by the Prometheus adapter
      metric:
        name: http_requests
      # target an average of 1000 milli-requests per second per pod,
      # i.e. 1 request per second
      target:
        type: AverageValue
        averageValue: 1000m
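
A rough way to verify the pipeline end to end, assuming the manifests above were applied to the default namespace and the adapter exposes http_requests for the whoami pods:

# check that the custom metric is visible through the adapter API
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .

# watch the HPA react to the metric and scale the deployment
kubectl get hpa hpa-whoami --watch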