# postgres prometheus monitoring
- https://nelsoncode.medium.com/how-to-monitor-posgresql-with-prometheus-and-grafana-docker-36d216532ea2
- https://medium.com/@murat.bilal/monitoring-postgresql-with-grafana-and-prometheus-in-docker-7fe6a36ef7b1
- https://github.com/Vonng/pg_exporter
- https://github.com/timescale/pg_prometheus
- https://github.com/free/sql_exporter
## pg_prometheus
Extension for PostgreSQL that defines a Prometheus metric samples data type and provides several storage formats for storing Prometheus data.
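A minimal sketch of the storage side, following the pg_prometheus README (the table/view names and columns here are recalled from that README, so treat them as assumptions): the extension parses the Prometheus text exposition format into its `prom_sample` data type, and `create_prometheus_table()` sets up the backing tables and a view.

```sql
-- Requires the extension (installed below with CREATE EXTENSION pg_prometheus).
-- create_prometheus_table() creates normalized values/labels tables plus a "metrics" view.
SELECT create_prometheus_table('metrics');

-- Samples can be inserted directly in the Prometheus text exposition format.
INSERT INTO metrics VALUES ('cpu_usage{service="nginx",host="machine1"} 34.6 1494595898000');

-- Query samples back through the view.
SELECT time, name, value, labels FROM metrics WHERE name = 'cpu_usage';
```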
## Exporting PostgreSQL metrics
- Install the pg_prometheus extension on the PostgreSQL instance:

```sql
CREATE EXTENSION pg_prometheus;
```
- Expose the metrics over HTTP. pg_prometheus itself only provides the storage side and does not serve an HTTP endpoint; the metrics endpoint on port 9187 is provided by a separate exporter such as [postgres_exporter](https://github.com/prometheus-community/postgres_exporter), which connects to the instance and listens on `:9187` by default (see the sketch below).
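A minimal sketch of running postgres_exporter with Docker. The connection string, credentials, and `host.docker.internal` host are placeholder assumptions; `DATA_SOURCE_NAME` is the exporter's standard connection-string environment variable.

```bash
# Run the prometheus-community postgres_exporter image; it listens on :9187.
# User, password, host, and database below are placeholders - adjust to your instance.
docker run -d \
  --name postgres-exporter \
  -p 9187:9187 \
  -e DATA_SOURCE_NAME="postgresql://postgres:password@host.docker.internal:5432/postgres?sslmode=disable" \
  quay.io/prometheuscommunity/postgres-exporter
```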
- Configure Prometheus to collect the PostgreSQL metrics:

```yaml
scrape_configs:
  - job_name: 'postgresql'
    scrape_interval: 10s
    static_configs:
      - targets: ['localhost:9187']
```
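A quick sanity check before wiring up alerting, assuming the exporter from the previous step is running locally: postgres_exporter exposes a `pg_up` gauge that is `1` when it can reach the database.

```bash
# The target must answer on /metrics before Prometheus can scrape it.
curl -s http://localhost:9187/metrics | grep '^pg_up'
```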
- Set up Prometheus alerts:

```yaml
groups:
  - name: PostgreSQL alerts
    rules:
      - alert: HighCPUUsage
        expr: sum(rate(postgresql_cpu_usage[5m])) by (instance) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High CPU usage on PostgreSQL {{ $labels.instance }}
          description: '{{ $labels.instance }} has high CPU usage'
```
In this example, we define an alert rule named HighCPUUsage that fires a warning when the per-instance sum of the rate of CPU usage for PostgreSQL exceeds 80% over a 5-minute window. The alert has a severity label of warning and includes annotations for the alert summary and description. To reload the Prometheus configuration (Prometheus must be started with the --web.enable-lifecycle flag for this endpoint to be available), run the following command:
```bash
curl -X POST http://localhost:9090/-/reload
```
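Before reloading, the rule file can be validated with promtool, which ships with Prometheus. The file name `alerts.yml` is an assumption; use whatever path your `rule_files` setting points at.

```bash
# "check rules" parses and validates alerting/recording rule files.
promtool check rules alerts.yml
```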
- Send alerts to a notification channel. We will use Alertmanager to deliver alerts to a channel such as email or Slack. Follow these steps to set it up:
- Install Alertmanager.
- Configure Alertmanager to send alerts to a notification channel. Here’s an example of how to configure Alertmanager to send alerts to an email address:
```yaml
route:
  # A default receiver at the route root is required by Alertmanager.
  receiver: 'email-alerts'
  group_by: ['alertname', 'severity']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  routes:
    - match:
        severity: warning
      receiver: 'email-alerts'

receivers:
  - name: 'email-alerts'
    email_configs:
      - to: '[email protected]'
        from: '[email protected]'
        smarthost: 'smtp.gmail.com:587'
        auth_username: '[email protected]'
        auth_password: 'yourpassword'
        require_tls: true
```
In this example, we configure Alertmanager to route alerts with a severity label of warning to an email receiver. We specify the recipient address, the sending address, and the SMTP smarthost and credentials to use for authentication; require_tls makes Alertmanager upgrade the SMTP connection with STARTTLS.
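The configuration can be validated before (re)starting Alertmanager with amtool, which ships with Alertmanager. The file name `alertmanager.yml` is an assumption.

```bash
# check-config validates the routing tree and receivers.
amtool check-config alertmanager.yml
```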