postgres pg_dump

-t - include only the specified table(s)

pg_dump -f users.sql -t forum.users -t forum.user_pk_seq forumdb

-T - exclude the specified table(s)

pg_dump -f users.sql -T forum.users -T forum.user_pk_seq forumdb
pg_dump -f posts.sql -t forum.posts -a -v forumdb

dump only the database schema, without any data

pg_dump -s -f database_structure.sql forumdb

dump only the database content, without any DDL statements

pg_dump -a -f database_content.sql forumdb

If you want to migrate the content of the database to another engine, such as another relational database, use options such as --inserts or --column-inserts to make the backup more portable.
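For example, a data-only dump of forumdb written as portable INSERT statements with explicit column lists could look like this (a minimal sketch; the output file name is arbitrary):

pg_dump -a --column-inserts -f database_content_portable.sql forumdb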

Compression

pg_dump provides a command-line option, -Z, which accepts an integer from 0 to 9 indicating the compression level of the backup to be produced. Level 0 means no compression at all, while 9 is the highest compression available.

pg_dump -f backup_forumdb_compressed.sql.gz -Z 9 forumdb

Compression can be applied to plain text (i.e., SQL) dumps, and to directory format, but not to the tar output format.
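For instance, a compression level can also be combined with the directory format; a minimal sketch, reusing the forumdb database and an arbitrary output directory name:

pg_dump -Fd -Z 9 -f backup_forumdb_dir forumdb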

Dump formats

All formats except plain SQL must be restored with pg_restore and are therefore not suitable for manual editing.

Backup formats are specified by the -F command-line argument to pg_dump, which allows for one of the following values:

  • c (custom) is the PostgreSQL-specific format within a single file archive.
  • d (directory) is a PostgreSQL-specific format that’s compressed, where every object is split across different files within a directory.
  • t (tar) is a .tar uncompressed format that, once extracted, results in the same layout as the one provided by the directory format.

pg_dump -Fc --create -f backup_forumdb.backup forumdb

The produced output file is smaller than the plain SQL one and cannot be edited as text because it is binary.
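Such an archive must be restored with pg_restore; a minimal sketch for the backup_forumdb.backup file produced above (since it was taken with --create, the -C option lets pg_restore recreate the forumdb database while connected to the postgres maintenance database):

pg_restore -C -d postgres backup_forumdb.backup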

To dump a database called mydb into a SQL-script file:

pg_dump mydb > "/opt/mydb_dump.sql"

To reload such a script into a (freshly created) database named newdb:

psql -d newdb -f "/opt/mydb_dump.sql"

To dump a database into a custom-format archive file:

pg_dump -Fc mydb > "/opt/mydb_dump.dump"

To reload an archive file into a (freshly created) database named newdb:

pg_restore -d newdb "/opt/mydb_dump.dump"

To dump a single table named mytab:

pg_dump -t mytab mydb > "/opt/mydb_dump.sql"
pg_dump -U postgres -Fc -t sales mydb > sales.dump
pg_restore -U postgres -t sales -d newdb sales.dump

To dump the database as user test, writing a verbose log to a file:

pg_dump -U test -v -f "/opt/mydb_dump.sql" mydb 2>>"/opt/mydb_log.log"

To dump only the structure (schema), without data:

pg_dump -s -U test -v -f "/opt/mydb_dump.sql" mydb 2>>"/opt/mydb_log.log"

To generate INSERT scripts for a particular table:

pg_dump --column-inserts -a -t mytab -U test mydb > "/opt/mytab_inserts.sql"

To dump only specific tables, with data:

pg_dump -U test -n schema1 -t 'BCL_*' -f "/opt/BCL_TABLES.sql" mydb
# dump all tables whose names start with emp in the ebk schema, except for the table named employee_log:
$ pg_dump <db> -U <user> -t 'ebk.emp*' -T ebk.employee_log > db.sql


# To dump all schemas whose names start with east or west and end in gsm, excluding any schemas whose names contain the word test:
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql


# The same, using regular expression notation to consolidate the switches:
$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

# To dump all database objects except for tables whose names begin with ts_:
$ pg_dump -T 'ts_*' mydb > db.sql

# To specify an upper-case or mixed-case name in -t and related switches, you need to double-quote the name; otherwise it will be folded to lower case.
# But double quotes are special to the shell, so in turn they must be quoted.
$ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql


# To dump a database into a custom-format archive file:
$ pg_dump -Fc mydb > db.dump

# To dump a database into a directory-format archive:
$ pg_dump -Fd mydb -f dumpdir

# To dump a database into a directory-format archive in parallel with 5 worker jobs:
$ pg_dump -Fd mydb -j 5 -f dumpdir
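# A directory-format (or custom-format) archive can also be restored in parallel; a minimal sketch, assuming the dumpdir archive from above and a freshly created newdb:
$ pg_restore -d newdb -j 5 dumpdir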

Extract DDL

pg_dump -U <user_name> -h <host> <database> -s -t <table_or_view_names> -f <table_or_view_names>.sql
pg_dump -t 'schema-name.table-name' --schema-only database-name
pg_dump -d appdb -s -t city -t street  # generate DDL for tables city and street
pg_dump -d appdb -s                    # generate DDL for all objects in the appdb database

