postgres WAL - ghdrako/doc_snipets GitHub Wiki

PostgreSQL stores transaction log information in the pg_wal (in PostgreSQL 9.6 and earlier: pg_xlog) directory. The pg_wal directory contains a sequence of log files, each of which is called a WAL segment. The transaction log records the changes to the database in chronological order: each WAL record describes a change made by a transaction, such as a data-page modification or a commit or rollback, together with the data that was changed.

In PostgreSQL 15, the WAL system uses a segmented architecture for improved performance and reliability. Each WAL segment is a fixed-size file on disk (16 MB by default), and the segments are written sequentially and then recycled or removed once they are no longer needed.
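You can check the segment size of a running cluster; it is fixed at initdb time (16 MB unless initdb was run with --wal-segsize):

SHOW wal_segment_size;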

WAL configuration

SHOW wal_level;      -- minimal, replica, or logical
SHOW max_wal_size;   -- soft limit on WAL size between checkpoints; exceeding it triggers a checkpoint
SHOW min_wal_size;   -- below this amount, old WAL segments are recycled rather than removed

Checkpoint configuration

SHOW checkpoint_timeout;            -- maximum time between automatic checkpoints
SHOW checkpoint_warning;            -- warn if WAL-triggered checkpoints happen closer together than this
SHOW checkpoint_completion_target;  -- fraction of the checkpoint interval over which writes are spread
SHOW checkpoint_flush_after;        -- force the OS to flush after this much data is written during a checkpoint

WAL monitoring

Monitor database performance metrics such as transaction throughput, checkpoint frequency, and disk I/O.

select * from pg_settings where name like '%wal%';  -- all WAL-related settings
select * from pg_ls_waldir();                       -- files in pg_wal with size and modification time
select * from pg_stat_archiver;                     -- archiver statistics (archived and failed WAL files)

Use the system views pg_stat_bgwriter and pg_stat_progress_vacuum to analyze checkpoint activity and the vacuuming process.

WAL size

-- Postgres 10+


select pg_size_pretty(sum(size)) from pg_ls_waldir();

-- as superuser (use 'pg_xlog' instead of 'pg_wal' on 9.6 and earlier)
select * from pg_ls_dir('pg_wal');

select count(*) * pg_size_bytes(current_setting('wal_segment_size')) as total_size
from pg_ls_dir('pg_wal') as t(fname)
where fname <> 'archive_status';

-- Alternatively, you can use pg_stat_file() to return information about the files:

select sum((pg_stat_file('pg_wal/'||fname)).size) as total_size
from pg_ls_dir('pg_wal') as t(fname);

-- Postgres 10 also provides pg_current_wal_flush_lsn(), pg_current_wal_insert_lsn(), and pg_current_wal_lsn(), which you can feed to pg_walfile_name().


WAL

WAL entries are often smaller than the page size. The following actions are recorded in WAL:

  • page modifications performed in the buffer cache—since writes are deferred
  • transaction commits and rollbacks—since the status change happens in XACT (transaction status) buffers and does not make it to disk right away
  • file operations (like creation and deletion of files and directories when tables get added or removed)—since such operations must be in sync with data changes

WAL Structure

WAL is a stream of log entries of variable length. Each entry contains:

  • header
  • data

WAL files take up special buffers in the server’s shared memory. The size of the cache used by WAL is defined by the wal_buffers parameter. By default, this size is chosen automatically as 1/32 of the total buffer cache size. The WAL cache is quite similar to the buffer cache, but it usually operates in ring buffer mode: new entries are added to its head, while older entries are saved to disk starting at the tail. If the WAL cache is too small, disk synchronization will be performed more often than necessary.
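For example, you can inspect the current WAL cache size alongside the buffer cache size (wal_buffers = -1 means the size is chosen automatically as described above):

SHOW wal_buffers;
SHOW shared_buffers;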

Why WAL is growing

Additionally, Postgres checks the value of wal_keep_size. This can be set to an arbitrarily large value, but whatever it is set to, it is static, so while it can explain why the WAL directory is large, it can't explain why it keeps growing.

Generally there are three cases that cause it:

  • archiving is failing
SELECT pg_walfile_name( pg_current_wal_lsn() ), * FROM pg_stat_archiver \gx

The most important columns are pg_walfile_name, which is the name of the current WAL file, and last_archived_wal.

Example 1

pg_walfile_name 00000001000123D100000076
last_archived_wal 00000001000123D10000006E

In this case the difference is 8 WAL segments, and since you can't archive the current WAL file (because it's still being written to), the archiving lag is 7 segments. Each file is 16 MB, so the archiving lag on this system is/was 112 MB.

Example 2

pg_walfile_name 000000020000C6AE00000000
last_archived_wal 000000020000C6AD000000FF

To do the hex math, you first change "000000020000C6AE00000000" to "000000020000C6AE00" and "000000020000C6AD000000FF" to "000000020000C6ADFF" (keep the timeline and middle part, then append only the last two hex digits of the final part), and then subtract one from the other. Here the difference is a single segment.
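For instance, the subtraction can be done directly in psql using bit-string literals (leading zeros dropped so the values fit in a bigint); in this example the archiving lag is one segment:

SELECT x'20000C6AE00'::bigint - x'20000C6ADFF'::bigint AS lag_segments;  -- returns 1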

Check logs of PostgreSQL, or the tool you use for archiving.

SELECT pg_current_wal_lsn(), pg_current_wal_lsn() - restart_lsn, * FROM pg_replication_slots \gx
pg_current_wal_lsn  │ A9C/97446FA8
slot_name           │ my_slot
restart_lsn         │ A9C/97091D60

Here we can see a situation where WAL is at location A9C/97446FA8 (which means it's in WAL file XXXXXXXX00000A9C00000097), but my_slot requires WAL (column restart_lsn) since location A9C/97091D60, which is still in the same WAL file as the current one, so it's OK.

Let's consider a case where pg_current_wal_lsn is 1A4A5/602C8718 and restart_lsn is 1A4A4/DB60E3B0. In this case the replication lag on this slot is 2227938152 bytes (a bit over 2 GB). This means that even when archiving is working 100% OK and min_wal_size, max_wal_size, and wal_keep_size are all below 200 MB, we will still have over 2 GB of WAL files.
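The same number can be obtained directly in SQL, since pg_lsn values support subtraction:

SELECT pg_size_pretty('1A4A5/602C8718'::pg_lsn - '1A4A4/DB60E3B0'::pg_lsn);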

If such a case happens for you, find out what this slot is being used for, and either fix the replication so that WAL starts being consumed again, or simply drop the replication slot (using the pg_drop_replication_slot() function).
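For the example above, dropping the stale slot would look like this (slot name taken from the earlier output):

SELECT pg_drop_replication_slot('my_slot');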

Config

=> ALTER SYSTEM SET full_page_writes = on;
=> ALTER SYSTEM SET wal_compression = on;
=> SELECT pg_reload_conf();
select sum(size) from pg_ls_waldir();
select pg_size_pretty(sum(size)) from pg_ls_waldir();
select * from pg_current_wal_flush_lsn();
select * from pg_current_wal_insert_lsn();
SELECT pg_size_pretty('0/3BE87658'::pg_lsn - '0/3A69C530'::pg_lsn);  -- distance between two LSNs, pretty-printed
select * from pg_walfile_name(pg_current_wal_lsn());  -- the WAL file that PostgreSQL is currently writing to


WAL increase

WAL is considered unneeded and can be removed after a checkpoint is made, which is why we check checkpoints first. Postgres has a special system view called pg_stat_bgwriter that has info on checkpoints:

  • checkpoints_timed — a counter of checkpoints triggered because the time elapsed since the previous checkpoint exceeded the checkpoint_timeout setting. These are so-called scheduled checkpoints.
  • checkpoints_req — a counter of checkpoints run because the amount of un-checkpointed WAL grew beyond the max_wal_size setting — so-called requested checkpoints.
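A quick way to compare the two counters (a large checkpoints_req relative to checkpoints_timed suggests max_wal_size is too small for the write load; in PostgreSQL 17 and later these counters moved to pg_stat_checkpointer):

SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;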
  1. WAL Bloats #1 — long-lasting transactions
  2. WAL Bloats #2 — archiving
  3. WAL Bloats #3 — replication slots (see pg_replication_slots)

Write Ahead Log (WAL) (also called redo logs or transaction logs in other products)

As the DB is operating, blocks of data are first written serially and synchronously to WAL files, then some time later (usually a very short time later) written to the DB data files. Once the data contained in these WAL files has been flushed out to its final destination in the data files, they are no longer needed by the primary.

When Postgres needs to make a change to data, it first records this change in the Write-Ahead-Log (WAL) and fsyncs the WAL to the disk. Fsync is a system call that guarantees that the data that was written has actually been written fully to persistent storage, not just stored in the filesystem cache.

ALTER SYSTEM SET fsync = 'on';               -- fsync is on by default
ALTER SYSTEM SET wal_sync_method = 'fsync';  -- options include fdatasync, open_datasync, fsync, fsync_writethrough, open_sync

By default, fsync is enabled.

Disabling fsync boosts insertion speed, but at the expense of data durability. Re-enabling fsync with the open_sync option strikes a balance, providing a compromise between performance and data safety. It is crucial to consider these trade-offs when configuring fsync based on the specific requirements and priorities of your application.

At some future point, the database will issue a CHECKPOINT, which flushes all of the modified buffers to disk and stores the changes permanently in the data directory. When a CHECKPOINT has been completed, the database records the WAL position at which this occurs and knows that all of this data up to this point has been successfully written to disk. (It is worth noting that any changes that are written to disk via the CHECKPOINT already had their contents recorded previously in the WAL stream, so this is just a way of ensuring that the changes recorded by the WAL data are in fact applied to the actual on-disk relation files.)
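You can trigger a checkpoint manually and then look at the checkpoint and REDO locations recorded in the control file:

CHECKPOINT;  -- requires superuser (or, on 15+, the pg_checkpoint role)
SELECT checkpoint_lsn, redo_lsn FROM pg_control_checkpoint();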

WAL is stored in on-disk files in 16MB chunks; as the database generates WAL files, it runs the archive_command on them to handle storing these outside of the database. The archive_command’s exit code will determine whether Postgres considers this to have succeeded or not. Once the archive_command completes successfully, Postgres considers the WAL segment to have been successfully handled and can remove it or recycle it to conserve space. If the archive_command fails, Postgres keeps the WAL file around and tries again indefinitely until this command succeeds. Even if a particular segment file is failing, Postgres will continue to accumulate WAL files for all additional database changes and continue retrying to archive the previous file in the background.

It is recommended to keep secondary copies of the WAL files in another location using a WAL archiving mechanism. This is known as WAL archiving and is done by ensuring archive_mode is turned on and a value has been set for the archive_command.
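A minimal sketch of enabling archiving; the target directory /mnt/server/archivedir is only an example and must exist and be writable by the postgres user:

ALTER SYSTEM SET archive_mode = on;  -- requires a server restart to take effect
ALTER SYSTEM SET archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f';  -- %p = path to the WAL file, %f = file name
SELECT pg_reload_conf();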

WAL archiving and backups are typically used together since this then provides point-in-time recovery (PITR) where you can restore a backup to any specific point in time as long as you have the full WAL stream available between all backups.

Streaming Replication Slots

A replication slot is a persistent communication channel between a primary and a standby server. It is used to keep track of the WAL segments that the standby server has received and applied, and to ensure that the primary server does not remove any WAL segments that are still needed by the standby server.

Streaming replication slots are a feature available since PostgreSQL 9.4. Using replication slots will cause the primary to retain WAL files until the primary has been notified that the replica has received the WAL file.

If the replica is unavailable for any reason, WAL files will accumulate on the primary until it can deliver them. On a busy database, with replicas unavailable, disk space can be consumed very quickly as unreplicated WAL files accumulate. This can also happen on a busy DB with relatively slow infrastructure (network and storage).
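A sketch of creating and monitoring a physical replication slot; the slot name standby1_slot is just an example:

SELECT pg_create_physical_replication_slot('standby1_slot');
SELECT slot_name, active,
       pg_size_pretty(pg_current_wal_lsn() - restart_lsn) AS retained_wal  -- NULL until the slot has reserved WAL
FROM pg_replication_slots;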

Log Sequence Number

Transactions in PostgreSQL create WAL records which are ultimately appended to the WAL log (file). The position where the insert occurs is known as the Log Sequence Number (LSN). Values of the LSN (of type pg_lsn) can be compared to determine the amount of WAL generated between two different offsets (in bytes). When used in this manner, it is important to know that the calculation assumes each full WAL segment (16 MB) was used when the span crosses multiple WAL segments. A similar calculation to the one used here is often used to determine the latency of a replica.

The LSN is a 64-bit integer, representing a position in the write-ahead log stream. This 64-bit integer is split into two segments (high 32 bits and low 32 bits). It is printed as two hexadecimal numbers separated by a slash (XXXXXXXX/YYZZZZZZ). The 'X' represents the high 32 bits of the LSN and 'Y' is the high 8 bits of the lower 32-bit section. The 'Z' represents the offset position in the file. Each element is a hexadecimal number. The 'X' and 'Y' values are used in the second part of the WAL file name on a default PostgreSQL deployment.
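For example, the LSN from the earlier replication-slot output can be mapped to its WAL file name and the byte offset within that file:

SELECT * FROM pg_walfile_name_offset('A9C/97446FA8'::pg_lsn);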

-- Verify the current LSN on the primary server
SELECT pg_current_wal_lsn();
-- Verify the last receive and replay LSN on the delayed standby server
select now(),
       pg_is_in_recovery(),
       pg_is_wal_replay_paused(),
       pg_last_wal_receive_lsn(),
       pg_last_wal_replay_lsn();

The pg_wal_lsn_diff() function indicates the replication lag in bytes between the primary server and the standby server.
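Run on the primary, this shows the replay lag for every connected standby (pg_stat_replication is empty if no standby is attached):

SELECT application_name, client_addr,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;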

WAL File

The WAL file name is in the format TTTTTTTTXXXXXXXXYYYYYYYY. Here 'T' is the timeline, 'X' is the high 32 bits of the LSN, and 'Y' is the segment number derived from the high 8 bits of the low 32 bits of the LSN (00000000 through 000000FF for the default 16 MB segment size).

SELECT pg_switch_wal();  -- force a switch to a new WAL segment file