gcp cloudsql upgrade migration - ghdrako/doc_snipets GitHub Wiki
https://www.postgresql.org/docs/current/upgrading.html
When you perform an in-place major version upgrade, Cloud SQL retains your database settings, including your instance name, IP address, explicitly configured flag values, and user data. However, the default value of the system variables might change. For example, the default value of the password_encryption flag in PostgreSQL 13 and earlier is md5. When you upgrade to PostgreSQL 14, the default value of this flag changes to scram-sha-256.
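If client compatibility with md5 matters, the authentication method can be checked and pinned explicitly before the upgrade. A minimal sketch, assuming INSTANCE_NAME and INSTANCE_IP are placeholders and that the flag is managed through Cloud SQL database flags; note that --database-flags replaces the whole flag set, so include any flags already configured:
# Check the effective setting
psql -h INSTANCE_IP -U postgres -d postgres -c 'SHOW password_encryption;'
# Pin the flag so the upgrade does not silently change the default
gcloud sql instances patch INSTANCE_NAME \
  --database-flags=password_encryption=md5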
- https://cloud.google.com/sql/docs/postgres/upgrade-major-db-version-migrate
- Database Migration Service (DMS)
- exporting data from the current instance and importing that data into a new instance running the later version of PostgreSQL
Cloud SQL publishes error logs to projects/PROJECT_ID/logs/cloudsql.googleapis.com%2Fpostgres-upgrade.log.
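These upgrade logs can also be read from the command line. A sketch, with PROJECT_ID as a placeholder for your project:
gcloud logging read \
  'logName="projects/PROJECT_ID/logs/cloudsql.googleapis.com%2Fpostgres-upgrade.log"' \
  --project=PROJECT_ID \
  --limit=50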
- Prepared transactions: a common root cause of major version upgrade failures is the presence of prepared transactions in the source cluster.
Prepared transactions are part of PostgreSQL's two-phase commit (2PC) mechanism, and they must be resolved (either committed or rolled back) before performing the upgrade. The pg_upgrade utility will fail if there are any unresolved prepared transactions because they can cause data inconsistency during the migration.
Check for Prepared Transactions: you can use the following query to check if there are any unresolved prepared transactions:
SELECT * FROM pg_prepared_xacts;
Resolve Transactions: If prepared transactions are found, you need to either commit or roll them back:
To commit:
COMMIT PREPARED 'transaction_id';
To rollback:
ROLLBACK PREPARED 'transaction_id';
After resolving the prepared transactions, you should be able to proceed with the upgrade process.
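If there are many prepared transactions, the statements can be generated from pg_prepared_xacts instead of typed one by one. A rough sketch that rolls everything back (swap in COMMIT PREPARED where a transaction must be kept; add -h/-U/-d connection options as needed):
psql -At -c "SELECT format('ROLLBACK PREPARED %L;', gid) FROM pg_prepared_xacts;" | psql -e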
- Unlogged tables with logged sequences: discrepancies in the persistence characteristics between unlogged tables and the logged sequences backing their identity/generated columns [1].
To mitigate this issue, unlogged sequences were introduced in PostgreSQL 15+. To address the failure, either drop the unlogged tables associated with these generated columns (assuming they can be recreated without any data loss) or temporarily convert them to permanent (logged) tables before attempting the major version upgrade (MVU):
ALTER TABLE <TABLE_NAME> SET LOGGED;
You can identify all such tables using the query below:
SELECT
c.relname AS table_name,
a.attname AS column_name,
a.attidentity AS identity_type
FROM
pg_catalog.pg_class c
JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE
a.attidentity IN ('a', 'd') -- 'a' for GENERATED ALWAYS, 'd' for GENERATED BY DEFAULT
AND c.relkind = 'r' -- Only consider ordinary tables
AND c.relpersistence = 'u' -- Only unlogged tables
ORDER BY
c.relname, a.attname;
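The ALTER TABLE ... SET LOGGED statements can also be generated straight from the catalog. A sketch reusing the filters above (add connection options as needed):
psql -At -c "
  SELECT DISTINCT format('ALTER TABLE %I.%I SET LOGGED;', n.nspname, c.relname)
  FROM pg_catalog.pg_class c
  JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
  JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
  WHERE a.attidentity IN ('a', 'd')
    AND c.relkind = 'r'
    AND c.relpersistence = 'u';
" | psql -e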
pg_dump formats usable with pg_restore: custom, directory
pg_dump \
-U USERNAME \
--format=custom \
--no-owner \
--no-acl \
DATABASE_NAME > DATABASE_NAME.dmp
Parallel export:
pg_dump \
-U USERNAME \
--format=directory \
--no-owner \
--no-acl \
--jobs=<njobs> \
--file=<directory-to-export> \
DATABASE_NAME
pg_restore \
--list DATABASE_NAME.dmp | sed -E 's/(.* EXTENSION )/; \1/g' > DATABASE_NAME.toc
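The sed step prefixes the EXTENSION entries in the table of contents with ';' (a comment), so extension creation is skipped on restore and the extensions can instead be installed through Cloud SQL's supported list. The edited TOC is then fed back to pg_restore. A sketch, with NEW_INSTANCE_IP as a placeholder for the target instance:
pg_restore \
  -U USERNAME \
  -h NEW_INSTANCE_IP \
  --use-list=DATABASE_NAME.toc \
  --jobs=4 \
  --dbname=DATABASE_NAME \
  DATABASE_NAME.dmp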
The export sql command does not contain triggers or stored procedures, but does contain views. To export triggers and/or stored procedures, use the pg_dump tool.
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/sqldumpfile.gz \
--database=DATABASE_NAME \
--offload
gcloud sql import sql INSTANCE_NAME gs://BUCKET_NAME/IMPORT_FILE_NAME \
--database=DATABASE_NAME
- Check for tables without a primary key
SELECT tab.table_schema, tab.table_name
FROM information_schema.tables tab
LEFT JOIN information_schema.table_constraints tco
ON tab.table_schema = tco.table_schema
AND tab.table_name = tco.table_name
AND tco.constraint_type = 'PRIMARY KEY'
WHERE tab.table_type = 'BASE TABLE'
AND tab.table_schema NOT IN('pg_catalog', 'information_schema', 'pglogical')
AND tco.constraint_name IS NULL
ORDER BY table_schema, table_name;
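Logical replication can only replicate UPDATE/DELETE for tables with a primary key or another valid replica identity, so tables returned by this query need one before migration. A sketch of the usual fixes (schema, table, column and index names are placeholders):
# Option 1: add a primary key
psql -c 'ALTER TABLE some_schema.some_table ADD PRIMARY KEY (id);'
# Option 2: promote an existing unique, NOT NULL, non-partial index to the replica identity
psql -c 'ALTER TABLE some_schema.some_table REPLICA IDENTITY USING INDEX some_table_uniq_idx;'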
- Speeding up initialization - disable non-primary-key indexes in the replica database
UPDATE pg_index SET indisready = FALSE WHERE indexrelid IN (
SELECT idx.indexrelid FROM pg_index idx
INNER JOIN pg_class cls ON idx.indexrelid = cls.oid
INNER JOIN pg_namespace nsp ON cls.relnamespace = nsp.oid
WHERE nsp.nspname !~ '^pg_'
AND nsp.nspname NOT IN ('information_schema', 'pglogical')
AND idx.indisprimary IS FALSE
);
- Initialization - Migrate users and structure
pg_dumpall -h localhost -U myuser --globals-only -f pg_users_roles_backup.sql
pg_dump --format=custom --file=/postgres_20_03_20_06-dump.dmp --schema-only --dbname=postgres --username=postgres --host=127.0.0.1 --port=5432
# First make sure the users and roles match the primary - create them with CREATE USER or restore the pg_dumpall globals dump above
pg_restore --dbname=postgres --clean --username=postgres --host=127.0.0.1 --port=5454 /postgres_20_03_20_06-dump.dmp
- Enable logical replication
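On Cloud SQL this is done with database flags plus the extension itself. A sketch, with INSTANCE_NAME as a placeholder (patching flags restarts the instance and replaces any previously set flags):
gcloud sql instances patch INSTANCE_NAME \
  --database-flags=cloudsql.logical_decoding=on,cloudsql.enable_pglogical=on
psql -d DATABASE_NAME -c 'CREATE EXTENSION IF NOT EXISTS pglogical;'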
- Wait for initialization to complete
SELECT * FROM pglogical.local_sync_status WHERE NOT sync_status = 'r';
- After initialization, list the disabled non-primary-key indexes and rebuild them:
SELECT CONCAT('"', nsp.nspname, '"."', cls.relname, '"') FROM pg_index idx
INNER JOIN pg_class cls ON idx.indexrelid = cls.oid
INNER JOIN pg_namespace nsp ON cls.relnamespace = nsp.oid
WHERE nsp.nspname !~ '^pg_'
AND nsp.nspname NOT IN ('information_schema', 'pglogical')
AND idx.indisprimary IS FALSE
AND idx.indisready IS FALSE;
REINDEX INDEX CONCURRENTLY $index;
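Rebuilding can be scripted by feeding the query output into REINDEX. A rough sketch (connection settings via PGHOST/PGUSER/PGDATABASE environment variables; assumes index names contain no whitespace):
psql -At -c "
  SELECT CONCAT('\"', nsp.nspname, '\".\"', cls.relname, '\"') FROM pg_index idx
  INNER JOIN pg_class cls ON idx.indexrelid = cls.oid
  INNER JOIN pg_namespace nsp ON cls.relnamespace = nsp.oid
  WHERE nsp.nspname !~ '^pg_'
    AND nsp.nspname NOT IN ('information_schema', 'pglogical')
    AND idx.indisprimary IS FALSE
    AND idx.indisready IS FALSE;
" | while read -r index; do
  psql -c "REINDEX INDEX CONCURRENTLY $index;"
done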
- Switch to the new version
- https://cloud.google.com/blog/topics/developers-practitioners/scheduling-cloud-sql-exports-using-cloud-functions-and-cloud-scheduler
- https://cloud.google.com/sql/docs/postgres/backup-recovery/scheduling-backups
- gcloud sql backups - provides commands for working with backups of Cloud SQL instances (create, restore, list, describe, delete); example after this list
- gcloud sql export - provides commands to export Cloud SQL instances
- gcloud sql import - provides commands to import Cloud SQL instances
- https://cloud.google.com/sql/docs/postgres/import-export/import-export-sql
gcloud sql operations list \
--instance INSTANCE_NAME \
--limit 10
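A minimal backup-based round trip with the gcloud groups listed above (INSTANCE_NAME and BACKUP_ID are placeholders):
# On-demand backup
gcloud sql backups create --instance=INSTANCE_NAME
# Find a backup ID
gcloud sql backups list --instance=INSTANCE_NAME
# Restore it into an instance
gcloud sql backups restore BACKUP_ID --restore-instance=INSTANCE_NAME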
- COPY - export in CSV, TXT, or binary format: https://wiki.postgresql.org/wiki/COPY. \copy is a psql command; \copy uses files on the filesystem of the computer running psql, while COPY uses files on the server's filesystem (see the \copy sketch after this list).
- pg_restore
- pg_dump
- pg_dumpall
- pg_upgrade
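Example of the client-side \copy export mentioned in the COPY item above (database, table and file names are placeholders):
# The CSV lands on the machine running psql
psql -d DATABASE_NAME -c "\copy my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER)"
# Server-side COPY TO a file would write on the database server instead and needs
# superuser or pg_write_server_files, so it is normally not usable on Cloud SQL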