postgres pg_dump pg_dumpall - ghdrako/doc_snipets GitHub Wiki
The key difference between pg_dump and pg_dumpall is that pg_dumpall includes cluster-level objects like user roles and permissions while pg_dump focuses on a single database.
In general you need to run pg_dump on each database individually, plus pg_dumpall -g to save all of your globals, plus back up any configuration files such as pg_hba.conf or postgresql.conf, which are saved by neither pg_dump nor pg_dumpall.
- global objects such as roles and tablespaces are excluded from the dump
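The routine above can be sketched as a small script. This is a minimal dry-run sketch, not a production backup tool: the backup directory, database list, and config-file locations are hypothetical placeholders to adjust for your cluster, and with DRY_RUN=1 the commands are only printed.

```shell
#!/bin/sh
# Sketch of the full logical-backup routine (hedged: BACKUP_DIR, DATABASES and
# the config path are placeholder assumptions, not values from this wiki page).
BACKUP_DIR=/tmp/pg_backup_demo
DATABASES="forumdb appdb"   # in practice: psql -Atc "SELECT datname FROM pg_database WHERE NOT datistemplate"
DRY_RUN=1                   # set to 0 to actually execute the commands

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

mkdir -p "$BACKUP_DIR"

# 1. one dump per database (custom format, restorable with pg_restore)
for db in $DATABASES; do
    run pg_dump -Fc -f "$BACKUP_DIR/$db.dump" "$db"
done

# 2. cluster-wide globals (roles, tablespaces) -- only pg_dumpall emits these
run pg_dumpall -g -f "$BACKUP_DIR/globals.sql"

# 3. configuration files, which neither tool saves
for conf in postgresql.conf pg_hba.conf; do
    run cp "/etc/postgresql/main/$conf" "$BACKUP_DIR/"
done
```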
pg_dump mydb | gzip -9 -c > dumpfile.gz

(Note: a -Fc custom-format dump is already compressed, so piping it through gzip gains little; the pipeline above uses the plain format.)
Creating logical backups of your PostgreSQL database
Backing up large databases with pg_dump can be very slow, and restoring them can be very slow, too. Because pg_dump holds an MVCC snapshot while it's running, a lengthy backup can be a problem not only in terms of resource utilization, but also because it can cause database bloat that you subsequently need to clean up with VACUUM or, if things get really bad, maybe even with VACUUM FULL.
If you use pg_dump to dump a database and later try to restore that dump on a newer server version, you may get errors. If you take the dump using the version of pg_dump that matches the server version you intend to restore to, this problem shouldn't happen; but restoring an existing, older dump on a new version can fail, and you may end up manually adjusting the dump to get it to restore.
-t
- which tables to include
pg_dump -f users.sql -t forum.users -t forum.user_pk_seq forumdb
-T
- which tables to exclude
pg_dump -f users.sql -T forum.users -T forum.user_pk_seq forumdb
pg_dump -f posts.sql -t forum.posts -a -v forumdb
pg_dump -s -f database_structure.sql forumdb
pg_dump -a -f database_content.sql forumdb
If you migrate the content of the database to another engine, such as another relational database, you should use options such as --inserts or --column-inserts to make the database backup more portable.
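The difference is easiest to see side by side. The sample below is sketched by hand for a hypothetical forum.users table, not captured from a real dump:

```shell
# Hand-written samples (hypothetical forum.users table) of the two row formats.
sample_copy='COPY forum.users (pk, username) FROM stdin;
1	alice
\.'
sample_insert="INSERT INTO forum.users (pk, username) VALUES (1, 'alice');"

printf 'default COPY format (PostgreSQL-specific):\n%s\n\n' "$sample_copy"
printf 'with --column-inserts (portable plain SQL):\n%s\n' "$sample_insert"
```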
pg_dump provides a special command-line option, -Z, which accepts an integer from 0 to 9 to indicate the compression level of the backup. Level 0 means no compression at all, while 9 is the highest compression available.
pg_dump -f backup_forumdb_compressed.sql.gz -Z 9 forumdb
Compression can be applied to plain text (i.e., SQL) dumps, and to directory format, but not to the tar output format.
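The size/CPU trade-off behind the 0-9 scale can be demonstrated without a live database. The sketch below stands in a repetitive SQL file for a real dump and uses gzip levels 1 and 9 directly, mirroring what -Z does internally; file paths are arbitrary:

```shell
# Stand-in for a real dump: a repetitive SQL file compressed at two gzip levels,
# mirroring pg_dump's -Z 0..9 scale (low level: fast/larger, 9: slow/smaller).
demo=/tmp/zdemo.sql
for i in $(seq 1 2000); do
    echo "INSERT INTO t VALUES ($i, 'row $i');"
done > "$demo"

gzip -1 -c "$demo" > /tmp/zdemo.fast.gz   # roughly what -Z 1 does
gzip -9 -c "$demo" > /tmp/zdemo.best.gz   # roughly what -Z 9 does

size_raw=$(wc -c < "$demo")
size_fast=$(wc -c < /tmp/zdemo.fast.gz)
size_best=$(wc -c < /tmp/zdemo.best.gz)
echo "raw=$size_raw fast=$size_fast best=$size_best"
```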
All formats except plain SQL must be used with pg_restore for restoration and, therefore, are not suitable for manual editing.
Backup formats are specified by the -F command-line argument to pg_dump, which allows for one of the following values:
- c (custom) is the PostgreSQL-specific format within a single file archive.
- d (directory) is a PostgreSQL-specific format that’s compressed, where every object is split across different files within a directory.
- t (tar) is an uncompressed .tar format that, once extracted, results in the same layout as the one produced by the directory format.
pg_dump -Fc --create -f backup_forumdb.backup forumdb
The produced output file is smaller in size than the plain SQL one and can’t be edited as text because it is binary.
To dump a database called mydb into a SQL-script file:
pg_dump mydb > "/opt/mydb_dump.sql"
To reload such a script into a (freshly created) database named newdb:
psql -d newdb -f "/opt/mydb_dump.sql"
To dump a database into a custom-format archive file:
pg_dump -Fc mydb > "/opt/mydb_dump.dump"
To reload an archive file into a (freshly created) database named newdb:
pg_restore -d newdb "/opt/mydb_dump.dump"
To dump a single table named mytab
pg_dump -t mytab mydb > "/opt/mydb_dump.sql"
pg_dump -U postgres -Fc -t sales mydb > sales.dump
pg_restore -U postgres -t sales -d newdb sales.dump
To dump the database with the log file using user test
pg_dump -U test -v -f "/opt/mydb_dump.sql" mydb 2>>"/opt/mydb_log.log"
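The redirection above works because pg_dump -v writes its progress messages to stderr. A runnable sketch with a hypothetical stand-in function (so no server is needed):

```shell
# fake_pg_dump is a hypothetical stand-in for pg_dump -v; like the real tool it
# writes progress messages to stderr, which 2>> appends to the log file.
log=/tmp/mydb_log.log
: > "$log"                     # start with an empty log file
fake_pg_dump() {
    echo 'pg_dump: dumping contents of table "public.users"' >&2
}
fake_pg_dump 2>>"$log"
cat "$log"
```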
To dump only structure without data
pg_dump -sU test -v -f "/opt/mydb_dump.sql" mydb 2>>"/opt/mydb_log.log"
To take insert scripts of particular table
pg_dump --column-inserts -a -t mytab -U test mydb > "/opt/mytab_inserts.sql"
To dump only specific tables with data
pg_dump -U test -n schema1 -t 'BCL_*' -f "/opt/BCL_TABLES.sql" mydb
# dump all tables whose names start with emp in the ebk schema, except for the table named employee_log:
$ pg_dump <db> -U <user> -t 'ebk.emp*' -T ebk.employee_log > db.sql
#To dump all schemas whose names start with east or west and end in gsm, excluding any schemas whose names contain the word test:
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
# The same, using regular expression notation to consolidate the switches:
$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
# To dump all database objects except for tables whose names begin with ts_:
$ pg_dump -T 'ts_*' mydb > db.sql
# To specify an upper-case or mixed-case name in -t and related switches, you need to double-quote the name; otherwise it will be folded to lower case.
# But double quotes are special to the shell, so in turn they must be quoted.
$ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql
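A quick way to check the quoting, using a plain shell variable in place of the actual pg_dump invocation:

```shell
# The outer double quotes are consumed by the shell; the escaped inner pair is
# what pg_dump actually receives, so the identifier keeps its case.
arg="\"MixedCaseName\""
printf '%s\n' "$arg"     # pg_dump sees: "MixedCaseName"
```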
# To dump a database into a custom-format archive file:
$ pg_dump -Fc mydb > db.dump
# To dump a database into a directory-format archive:
$ pg_dump -Fd mydb -f dumpdir
# To dump a database into a directory-format archive in parallel with 5 worker jobs:
$ pg_dump -Fd mydb -j 5 -f dumpdir
pg_dump -U <user_name> -h <host> <database> -s -t <table_or_view_names> -f <table_or_view_names>.sql
pg_dump -t 'schema-name.table-name' --schema-only database-name
pg_dump -d appdb -s -t city -t street # generate DDL for tables city and street
pg_dump -d appdb -s # generate DDL for all objects in the appdb database
After creating a table and its primary key on the pub (publisher) instance, the same structure can easily be recreated on the sub (subscriber) instance by extracting the DDL and executing it on sub:
pg_dump -p 5433 -t table1 --schema-only pub | psql -p 5434 sub