prometheus
Client libraries
- https://prometheus.io/docs/instrumenting/clientlibs/
- https://prometheus.github.io/client_java/
- https://github.com/prometheus/client_python
Exporters
- MySQL Exporter
- Node Exporter
- HAProxy Exporter
- Consul exporter
- Memcached exporter
- AWS CloudWatch exporter
- Collectd exporter
- Graphite exporter
- InfluxDB exporter
- JMX exporter
- SNMP exporter
- StatsD exporter
- Blackbox exporter
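Exporters expose metrics from third-party systems over HTTP in the Prometheus text format, ready to be scraped. A minimal sketch, assuming the node_exporter binary has already been downloaded to the current directory (9100 is its default port):
$ ./node_exporter & # start the exporter; it listens on :9100 by default
$ curl -s http://localhost:9100/metrics | head # metrics exposed in the Prometheus text format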
Storage
Prometheus's storage design combines an index implementation, which uses posting lists for all currently stored labels and their values, with its own time series data format.
The way Prometheus stores collected data locally can be seen as a three-part process.
- Memory - The freshest batch of data is kept in memory for up to two hours. This includes one or more chunks of data gathered during the two-hour window. Keeping this data in memory dramatically reduces disk I/O in two ways: the most recent data is available in memory, making it blazingly fast to query, and the chunks are created in memory, avoiding constant disk writes.
- Write-ahead log - A write-ahead log (WAL) on disk preserves the state of the in-memory data so that it can be replayed if Prometheus crashes or restarts for any reason.
- Disk - After the two-hour window, the chunks are written to disk. These chunks are immutable and, even though data can be deleted, deletion is not an atomic operation. Instead, tombstone files are created with information about the data that is no longer required.
- https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704
Prometheus stores up to 2 hours of recently scraped metrics in the write-ahead log (WAL). Every two hours it runs a compaction job, which converts the data from the WAL into a persistent block. Prometheus also converts WAL data into a persistent block on graceful shutdown (e.g. when SIGINT or SIGTERM is sent to it and the controlling process waits until the Prometheus process finishes successfully). If Prometheus is shut down ungracefully (SIGKILL, out-of-memory kill), the WAL data with recently collected samples (up to 2 hours) is not stored in a persistent block and may be lost. Prometheus tries to replay the WAL (i.e. recover the data from it) on the next start, but this may fail if the WAL was corrupted by the previous unclean shutdown.
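How the process is stopped therefore matters; a sketch of the two cases (pidof and the running binary name are assumptions):
$ kill -TERM $(pidof prometheus) # graceful shutdown: WAL data is converted into a persistent block before exit
$ kill -KILL $(pidof prometheus) # unclean shutdown: recent samples stay only in the WAL and must be replayed on the next start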
Storage Layout
Data is organized into a series of directories (blocks) containing the data chunks, the index for that data, a meta.json file with human-readable information about the block, and tombstones for data that is no longer required. Each of these blocks represents a database.
WAL
The current block for incoming samples is kept in memory and is not fully persisted. It is secured against crashes by a write-ahead log (WAL) that can be replayed when the Prometheus server restarts. Write-ahead log files are stored in the wal directory in 128MB segments. These files contain raw data that has not yet been compacted; thus they are significantly larger than regular block files. Prometheus will retain a minimum of three write-ahead log files. High-traffic servers may retain more than three WAL files in order to keep at least two hours of raw data.
./data
├── 01BKGV7JBM69T2G1BGBGM6KB12
│ └── meta.json
├── 01BKGTZQ1SYQJTR4PB43C8PD98
│ ├── chunks
│ │ └── 000001
│ ├── tombstones
│ ├── index
│ └── meta.json
├── 01BKGTZQ1HHWHV8FBJXW1Y3W0K
│ └── meta.json
├── 01BKGV7JC0RY8A6MACW02A2PJD
│ ├── chunks
│ │ └── 000001
│ ├── tombstones
│ ├── index
│ └── meta.json
├── chunks_head
│ └── 000001
└── wal
├── 000000002
└── checkpoint.00000001
└── 00000000
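The on-disk layout can be inspected directly; meta.json is plain JSON (the block ULID below is taken from the listing above, jq and a recent promtool are optional helpers):
$ cat data/01BKGV7JC0RY8A6MACW02A2PJD/meta.json | jq . # block time range, series/sample/chunk counts, compaction level
$ ls -lh data/wal/ # raw WAL segments (written in 128MB segments)
$ promtool tsdb analyze data/ # series and label cardinality summary (recent promtool versions)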
Important flags that configure local storage:
--storage.tsdb.path: Where Prometheus writes its database. Defaults to data/.
--storage.tsdb.retention.time: When to remove old data (e.g. 15d, 365d, 1y). Defaults to 15d. Overrides storage.tsdb.retention if this flag is set to anything other than the default.
--storage.tsdb.retention.size: The maximum number of bytes of storage blocks to retain. The oldest data will be removed first. Defaults to 0 (disabled).
--storage.tsdb.wal-compression: Enables compression of the write-ahead log (WAL).
$ cat /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.external-url=http://34.89.26.156:9090 \
--storage.tsdb.retention.time=1y
[Install]
WantedBy=multi-user.target
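After creating or editing the unit file, reload systemd and start the service:
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now prometheus
$ systemctl status prometheus # verify it is active and started with the expected flags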
Alertmanager
$ amtool alert # View all currently firing alerts
$ amtool -o extended alert # View all currently firing alerts with extended output
# In addition to viewing alerts, you can use the rich query syntax provided by Alertmanager:
$ amtool -o extended alert query alertname="Test_Alert"
$ amtool -o extended alert query instance=~".+1"
$ amtool -o extended alert query alertname=~"Test.*"
$ amtool silence add alertname=Test_Alert # Silence an alert
$ amtool silence query # View silences
$ amtool silence expire b3ede22e-ca14-4aa0-932c-ca2f3445f926 # Expire a silence
# Expire all silences matching a query:
$ amtool silence query instance=~".+0"
$ amtool silence expire $(amtool silence -q query instance=~".+0")
$ amtool silence query instance=~".+0" # Verify the silences have expired
$ amtool silence expire $(amtool silence query -q) # Expire all silences
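amtool needs to know where Alertmanager listens; either pass --alertmanager.url on each invocation or store it in an amtool config file (the URL below is an assumption, 9093 is Alertmanager's default port):
$ amtool --alertmanager.url=http://localhost:9093 alert
$ mkdir -p ~/.config/amtool && echo 'alertmanager.url: "http://localhost:9093"' > ~/.config/amtool/config.yml
$ amtool alert # subsequent calls pick up the URL from the config file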