prometheus metrics exporters - ghdrako/doc_snipets GitHub Wiki
- download https://prometheus.io/download/#prometheus
- https://www.digitalocean.com/community/tutorials/how-to-use-alertmanager-and-blackbox-exporter-to-monitor-your-web-server-on-ubuntu-16-04
- https://linuxczar.net/blog/2017/06/15/prometheus-histogram-2/
- https://grafana.com/grafana/dashboards/8670-cluster-cost-utilization-metrics/
The term “exporter” in Prometheus refers to any application that runs independently to expose metrics from some other data source that is not exposing Prometheus metrics natively.
Node Exporter is only useful for systems with *NIX kernels (e.g., Linux, FreeBSD, macOS). For Windows systems, a similar but separate exporter exists, called the Windows Exporter (https://github.com/prometheus-community/windows_exporter).
In Linux systems, a /proc directory exists that contains a plethora of information about the state of the machine.
The Node Exporter primarily retrieves data through the /proc pseudo-filesystem. There are a few exceptions – such as the hwmon collector, which collects data from /sys/class/hwmon – but most collectors leverage the Prometheus procfs library (https://github.com/prometheus/procfs) to pull data from /proc and convert it to Prometheus metrics.
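For illustration only, here is a toy sketch (not how the Node Exporter is implemented; it is written in Go and uses the procfs library) that reads /proc/loadavg and prints it in the Prometheus text exposition format:

```python
# Toy sketch: read load averages from /proc/loadavg and emit them in
# Prometheus text exposition format (metric names match node_exporter's).
def loadavg_metrics() -> str:
    with open("/proc/loadavg") as f:
        load1, load5, load15 = f.read().split()[:3]
    return (
        f"node_load1 {load1}\n"
        f"node_load5 {load5}\n"
        f"node_load15 {load15}\n"
    )

if __name__ == "__main__":
    print(loadavg_metrics(), end="")
```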
Most monitoring systems use a push mechanism, where clients (applications/servers) are responsible for pushing their metric data to a centralized collection platform (server). In contrast, Prometheus relies on targets (applications/servers) exposing simple HTTP endpoints that its data-retrieval workers can pull/scrape from.
- It works by pulling (scraping) real-time metrics from applications on a regular cadence, sending HTTP requests to their metrics endpoints.
- It provides client libraries for instrumenting custom applications, including Go, Python, Ruby, Node.js, Java, .NET, Haskell, Erlang, and Rust.
- It collects data from application services and hosts, then compresses and stores it in a time-series database.
- For situations where pulling metrics is not feasible (e.g., short-lived jobs), Prometheus provides a Pushgateway that lets applications push metric data instead.
Prometheus uses this pull approach to metric collection. Any system that produces metrics runs its own Prometheus client (instrumentation library or exporter) that keeps track of them, and the Prometheus server periodically pulls the metrics from all the configured applications. Prometheus calls these elements targets.
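As a sketch of what such a target looks like, the official Python client (`pip install prometheus_client`) can expose a /metrics endpoint. The metric names here are made up for the example, and port 8000 matches the target used in the configuration below:

```python
# app.py -- minimal instrumented application using the official Python client.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Total simulated requests handled")
LATENCY = Histogram("demo_request_latency_seconds", "Simulated request latency")

def handle_request():
    with LATENCY.time():   # observe how long the simulated request took
        time.sleep(random.random() / 10)
    REQUESTS.inc()         # count the request

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://<host>:8000/metrics
    while True:
        handle_request()
```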
The easiest way to start a Prometheus server is to run the official Docker image.
- Set up the configuration in prometheus.yml:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      # The target needs to point to your local IP address
      # 192.168.1.196 IS AN EXAMPLE THAT WON'T WORK IN YOUR SYSTEM
      - targets: ["192.168.1.196:8000"]
The config file has two main sections.
- The first, global, indicates how often to scrape (that is, read information from the targets) and other general configuration values.
- The second, scrape_configs, describes what to scrape, and its main parameter is targets. Here, we need to configure all our targets. This particular target must be described by its external IP (the IP of your computer), because when Prometheus runs in a Docker container, localhost refers to the container itself.
- Start the container:
docker run -p 9090:9090 -v /full/path/to/file/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
- Querying Prometheus: Prometheus has its own query language, called PromQL.
rate(django_http_requests_latency_seconds_by_view_method_count[1m]) # per-second request rate, averaged over the last minute
sum(rate(django_http_requests_latency_seconds_by_view_method_count[1m])) by (method) # the same rate, aggregated by HTTP method
histogram_quantile(0.95, rate(django_http_requests_latency_seconds_by_view_method_bucket[5m])) # 95th-percentile latency estimated from histogram buckets over 5-minute windows
https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/
Basic auth: https://prometheus.io/docs/guides/basic-auth/
# gen-pass.py -- prompt for a password and print its bcrypt hash for web.yml
import getpass
import bcrypt

password = getpass.getpass("password: ")
hashed_password = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
print(hashed_password.decode())
$ python3 gen-pass.py
web.yml:

basic_auth_users:
  admin: $2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay
$ promtool check web-config web.yml
web.yml SUCCESS
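Once Prometheus is started with the web config file (prometheus --web.config.file=web.yml), every HTTP request, including API queries, must carry the credentials. A small sketch using the requests library; the URL, user, and password are placeholders:

```python
# Query a basic-auth-protected Prometheus server (placeholder credentials).
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "up"},
    auth=("admin", "your-plaintext-password"),  # the password you hashed into web.yml
    timeout=5,
)
resp.raise_for_status()
print(resp.json()["data"]["result"])
```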
The default Prometheus configuration file (prometheus.yml):
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
Once you have configured one or more endpoints, you must scrape them from the Prometheus server:
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['endpoint-host-or-ip:9090']
Enable scraping in Kubernetes: scraping is enabled either globally or explicitly per workload. For explicit scraping, add the following annotations to Pods or Services. Note that these prometheus.io/* annotations are a convention honoured by the relabeling rules of common Kubernetes scrape configurations rather than a built-in Prometheus feature; see the relabeling sketch after the manifests below.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics" # optional
    prometheus.io/port: "9102"
Note that for DaemonSets you have to put the annotations in the Pod template's metadata:
---
apiVersion: apps/v1
kind: DaemonSet
spec:
  [...]
  template:
    metadata:
      [...]
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
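Prometheus acts on these annotations through kubernetes_sd_configs combined with relabel_configs. Below is a sketch of such a job, adapted from the widely used kubernetes-pods example configuration; the label names are the standard Kubernetes service-discovery meta labels:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # honour prometheus.io/path (defaults to /metrics)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # honour prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```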
A Pushgateway serves as an intermediary between short-lived jobs and Prometheus: jobs push their metrics to the Pushgateway, and the Pushgateway exposes them on an endpoint for the Prometheus server to scrape. https://prometheus.io/docs/practices/pushing/
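A minimal sketch of a short-lived batch job pushing a metric with the official Python client; the Pushgateway address, job name, and metric name are placeholders:

```python
# Push a completion timestamp to a Pushgateway assumed to run at localhost:9091.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
last_success = Gauge(
    "demo_batch_last_success_unixtime",
    "Unix time the demo batch job last finished successfully",
    registry=registry,
)
last_success.set_to_current_time()

# Prometheus later scrapes these metrics from the Pushgateway's own /metrics endpoint.
push_to_gateway("localhost:9091", job="demo_batch", registry=registry)
```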
You can use client libraries to expose metrics on an endpoint that Prometheus can scrape over HTTP. There are several language options; you can find the complete list at https://prometheus.io/docs/instrumenting/clientlibs/. The idea behind the client libraries is to implement an endpoint that gathers all the metrics you need from the host and exposes them over HTTP, so that Prometheus can scrape them by sending HTTP requests to the endpoint.
You can expose metrics from endpoints using exporters. These are pieces of software developed either by the external community or by the Prometheus GitHub organization; the latter are called official exporters.
You can find lists of all the available exporters at
- https://prometheus.io/docs/instrumenting/exporters/ and
- https://github.com/prometheus/prometheus/wiki/Default-port-allocations.
To export messages from MQTT brokers, use either https://github.com/inovex/mqtt_blackbox_exporter or https://github.com/hikhvar/mqtt2prometheus. These exporters let you subscribe to MQTT topics and expose their data so that it can be scraped by Prometheus.
For RabbitMQ, you can find the official exporter at https://www.rabbitmq.com/prometheus.html. This exporter doesn't expose messages from devices; instead, it reports the state of the RabbitMQ service through metrics such as queues, consumers, connections, and so on.
Node Exporter: the Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.
The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP.
- https://github.com/prometheus/blackbox_exporter
- https://www.opsramp.com/guides/prometheus-monitoring/prometheus-blackbox-exporter/
- Download: https://prometheus.io/download/#blackbox_exporter
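Because the blackbox exporter probes other endpoints, the scrape config has to rewrite each target into the probe's target parameter. Below is a sketch adapted from the blackbox_exporter README, assuming the exporter runs on 127.0.0.1:9115 and uses the default http_2xx module from blackbox.yml:

```yaml
scrape_configs:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]   # probe module defined in blackbox.yml
    static_configs:
      - targets:
          - https://example.com   # the endpoint to probe
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115   # the blackbox exporter's address
```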
- Oracle DB Exporter
- PgBouncer exporter
- PostgreSQL exporter
- MySQL router exporter
- MySQL server exporter (official)
- SQL exporter
https://prometheus.io/docs/instrumenting/exporters/#third-party-exporters
| link | description |
|---|---|
| https://github.com/jonnenauha/prometheus_varnish_exporter | Varnish exporter for Prometheus |
| https://github.com/infinityworks/github-exporter | Prometheus GitHub Exporter |
| https://github.com/nlamirault/speedtest_exporter | Prometheus exporter for Speedtest metrics |
| https://github.com/V3ckt0r/fluentd_exporter | Prometheus exporter for Fluentd |
| https://github.com/pjhampton/kibana-prometheus-exporter | Prometheus metrics for Kibana |