postgres extension pg_stat_statement - ghdrako/doc_snipets GitHub Wiki

pg_stat_statements is used for monitoring your queries. It extracts information about your historically best- to worst-performing queries, letting you see which queries should be optimized first. The extension can also be used in conjunction with load tests, allowing you to pinpoint queries that contribute to potential bottlenecks in your application.

pg_stat_statements is included in the contrib module, so it ships with standard Postgres, but might not be automatically enabled:

  1. If you're using a managed Postgres service such as Crunchy Bridge, pg_stat_statements may already be listed in postgresql.conf. (Tip: run SHOW shared_preload_libraries; to check.) If it isn't, add pg_stat_statements to shared_preload_libraries and restart the server.
  2. Log back in to Postgres. In the database in which you want to access pg_stat_statements, run:
CREATE EXTENSION pg_stat_statements;
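To confirm the extension is active, you can check the pg_extension catalog and query the view directly:

```sql
-- Verify the extension is installed and returning data
SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_stat_statements';
SELECT count(*) FROM pg_stat_statements;
```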

Setting up pg_stat_statements

The extension offers several options to customize how it behaves.

  • pg_stat_statements.max – sets the maximum number of statements PostgreSQL will keep track of. Think of this as the number of rows in the pg_stat_statements view. The default is 5000.

  • pg_stat_statements.track – Default is top.

    • all – Tracks all statements, including nested statements executed inside functions
    • top – Tracks only top-level statements issued directly by clients
    • none – Disables statement collection
  • pg_stat_statements.track_utility – On or off. Controls whether utility commands (statements other than SELECT, INSERT, UPDATE, and DELETE) are tracked. Default is on.

  • pg_stat_statements.save – On or off. Controls whether the statistics survive a server restart; setting it to off clears the pg_stat_statements view after each restart. Default is on.

$ cat postgresql.conf 
pg_stat_statements.max = 10000
pg_stat_statements.track = all
track_activity_query_size = 2048

shared_preload_libraries = 'pg_stat_statements'
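Instead of editing postgresql.conf by hand, the same settings can be applied with ALTER SYSTEM, as a sketch; note that shared_preload_libraries still requires a server restart to take effect:

```sql
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
ALTER SYSTEM SET pg_stat_statements.max = 10000;
ALTER SYSTEM SET pg_stat_statements.track = 'all';
-- Changes are written to postgresql.auto.conf;
-- shared_preload_libraries only takes effect after a restart.
```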

On Google Cloud SQL the step above is not required, as the library is already preloaded.

Add extension

CREATE EXTENSION IF NOT EXISTS "pg_stat_statements";

If you run on a major cloud provider there is a strong likelihood they have already installed and enabled it for you.

Drop extension

DROP EXTENSION pg_stat_statements;

Reset stats:

SELECT pg_stat_reset(); -- resets database-wide statistics (pg_stat_* views)
SELECT pg_stat_statements_reset(); -- resets pg_stat_statements data

Useful after a significant optimization effort to see if the new query performs better.
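On PostgreSQL 12 and later, pg_stat_statements_reset() also accepts optional userid, dbid, and queryid arguments, so you can reset a single query's statistics instead of everything. The queryid below is a hypothetical example value taken from the view:

```sql
-- Reset stats for one query form only; 0 means "any" for the other arguments
SELECT pg_stat_statements_reset(0, 0, 123456789);
```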

Identify Inefficient Queries through Monitoring

SELECT
  query,
  calls,
  total_exec_time,
  rows,
  100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM
  pg_stat_statements
ORDER BY
  total_exec_time DESC
LIMIT
  5;

This query identifies the most expensive queries by total execution time, and is thus a valuable first step in optimizing the database.

Usage

Once pg_stat_statements is installed, it silently goes to work under the covers. pg_stat_statements records queries that are run against your database, strips out a number of variables from them, and then saves data about the query, such as how long it took and what happened to the underlying reads and writes.

Note: It doesn’t save each individual query; rather, it parameterizes them and saves the aggregated result.
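For example, two calls that differ only in a literal value are folded into a single entry whose text contains a placeholder. The users table and the output below are illustrative, not from a real system:

```sql
-- After running:  SELECT * FROM users WHERE id = 1;
--           and:  SELECT * FROM users WHERE id = 2;
SELECT query, calls FROM pg_stat_statements
WHERE query LIKE '%FROM users%';
-- the view shows a single row for both calls:
--   SELECT * FROM users WHERE id = $1   |   2
```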

View pg_stat_statements

\d pg_stat_statements
SELECT * FROM pg_stat_statements;
SELECT
  (total_exec_time / 1000 / 60) as total_min,
  mean_exec_time as avg_ms,
  calls,
  query
FROM pg_stat_statements
ORDER BY 1 DESC
LIMIT 500;

The columns that will be most useful for spotting potential problems with your queries:

  • calls: the number of times this query has been called.
  • total_exec_time: the total time spent executing the query, in milliseconds.
  • rows: the total number of rows retrieved by this query.
  • shared_blks_hit: the number of blocks already cached when read for the query.
  • shared_blks_read: the number of blocks that had to be read from the disk to satisfy all calls for this query form.

Three quick reminders about the data columns above:

  • All values are cumulative since the last time the service was started or since a superuser last manually reset the values.
  • All values are for the same query form after parameterizing the query and based on the resulting hashed queryid.
  • The current configuration for Timescale Cloud services does not track query planning statistics because of the small added overhead. We may allow this through user configuration in the future.
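Since the view stores userid and dbid as OIDs, it can help to join against the system catalogs when reading the output; a minimal sketch:

```sql
SELECT d.datname, r.rolname, s.queryid, s.calls, s.total_exec_time
FROM pg_stat_statements s
JOIN pg_database d ON d.oid = s.dbid
JOIN pg_roles r ON r.oid = s.userid
ORDER BY s.total_exec_time DESC
LIMIT 10;
```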

Top 10 I/O-intensive queries

select userid::regrole, dbid, query
    from pg_stat_statements
    order by (blk_read_time+blk_write_time)/calls desc
    limit 10;

Top 10 time-consuming queries

select userid::regrole, dbid, query
    from pg_stat_statements
    order by mean_exec_time desc
    limit 10;

Top 10 response-time outliers

select userid::regrole, dbid, query
    from pg_stat_statements
    order by stddev_exec_time desc
    limit 10;

Top 10 queries by memory usage

select userid::regrole, dbid, query
    from pg_stat_statements
    order by (shared_blks_hit+shared_blks_dirtied) desc
    limit 10;

Top 10 consumers of temporary space

select userid::regrole, dbid, query
    from pg_stat_statements
    order by temp_blks_written desc
    limit 10;

Top 100 queries by total time consumed, along with their average cost per call:

SELECT 
  (total_exec_time / 1000 / 60) as total, 
  (total_exec_time / calls) as avg, 
  query 
FROM pg_stat_statements 
ORDER BY 1 DESC 
LIMIT 100;

Long-Running Queries

SELECT calls, 
	mean_exec_time, 
	query
FROM pg_stat_statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY mean_exec_time DESC
LIMIT 10;

Hit Cache Ratio

SELECT calls, 
	shared_blks_hit,
	shared_blks_read,
	shared_blks_hit/(shared_blks_hit+shared_blks_read)::NUMERIC*100 hit_cache_ratio,
	query
FROM pg_stat_statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY calls DESC, hit_cache_ratio ASC
LIMIT 10;

Queries With High Standard Deviation

SELECT calls, 
	min_exec_time,
	max_exec_time, 
	mean_exec_time,
	stddev_exec_time,
	(stddev_exec_time/mean_exec_time) AS coeff_of_variance,
	query
FROM pg_stat_statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY mean_exec_time DESC
LIMIT 10;

Note on I/O stats

A tip from my colleague Greg Smith: if you want to use the block read and write time (blk_read_time and blk_write_time) statistics, you also need to turn on the track_io_timing parameter in postgresql.conf. That's recommended if you want all the features pg_stat_statements can deliver. It's disabled by default because timing things is very slow on some systems.
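Assuming superuser access, track_io_timing can also be enabled without a full restart:

```sql
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();  -- apply the change without restarting
SHOW track_io_timing;     -- confirm the new setting
```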

To check how fast the timers in your system are, use the pg_test_timing utility. The overhead of collecting all this timing data is low for most hardware.

I hadn't enabled track_io_timing, so we've focused on simpler examples for now. Do keep an eye out for a future deep dive into this topic on our blog.

Logs

Query texts are "kept in an external disk file, and do not consume shared memory" (taken from the official docs). pg_stat_statements should leave only a relatively small footprint on your system, especially compared to logging all statements. That said, you can also set a lower threshold with pg_stat_statements.max, or track only certain types of statements using the pg_stat_statements.track parameter.
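For example, a lower-footprint configuration in postgresql.conf might look like this (values are illustrative; pg_stat_statements.max takes effect only after a restart):

```conf
pg_stat_statements.max = 1000        # fewer tracked statement forms
pg_stat_statements.track = top       # top-level statements only
pg_stat_statements.track_utility = off
```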