GCS CloudSQL - ghdrako/doc_snipets GitHub Wiki

Cloud SQL automatically creates a VPC peering, so any attempt to log in from another peered VPC will not work. Cloud SQL runs on a VM in a Google-managed environment (its own VPC, etc.).

Public IP (connecting from inside or outside Google)

Any client connecting to a Cloud SQL instance with a public IP must be allow-listed through an authorized network. By default the allow list is empty, so nothing can connect to the Cloud SQL instance.
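
A minimal sketch of managing the allow list with gcloud (instance name and CIDR ranges are hypothetical; note that the flag replaces the entire allow list on every call):

gcloud sql instances patch my-instance \
  --authorized-networks=203.0.113.0/24,198.51.100.5/32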

Public IP access patterns:

  • client connecting from on-prem - the traffic will egress from your on-prem network and reach Cloud SQL through a public IP over the internet
  • client in Google Cloud - the traffic will egress through an instance with a public IP or through Cloud NAT and ingress to the Cloud SQL instance through a public IP
  • client located on the internet - can access the Cloud SQL instance directly through a public IP
  • connect through the Cloud SQL Proxy

Private IP

Configuring your instance with a private IP is preferred when connecting from a client with access to the VPC, either from within the VPC or connected to it through a hybrid connection. Private IP Cloud SQL is deployed using Private Service Access, a commonly used private routing method on Google Cloud that allows workloads in your VPC to access Google-managed infrastructure services privately. Private Service Access is implemented as a VPC peering connection between your VPC network and the underlying Google Cloud VPC network where your Cloud SQL instance resides.

The Private Service Access configuration process has two parts (see the command sketch after this list):

  • Allocate an IP range of /20 or larger as the Private Service Access range (new or existing). There is an option to let Google allocate the range for you - Google allocates a /20 and uses the name "default ip range"
  • Create the VPC peering. You only need to configure Private Service Access once per VPC network for Cloud SQL, and the IP range you allocate for Private Service Access becomes available for use by all Google services globally that support Private Service Access
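
A minimal sketch of both steps with gcloud (network and range names are examples):

# Allocate a /20 range for Private Service Access
gcloud compute addresses create google-managed-services-my-vpc \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=20 \
  --network=my-vpc

# Create the VPC peering with the Service Networking service
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-my-vpc \
  --network=my-vpc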

Private IP access patterns:

  • connecting to the Cloud SQL instance from the internet - there is no way for a client to access it directly
  • connection initiated within a Google Cloud VPC:
    • a client can connect directly from the directly peered VPC
    • a client can also use the Cloud SQL Proxy
    • in a peered hub-and-spoke architecture, if you are trying to connect from a VPC that is not directly peered to the Private Service Access VPC, you must use the proxy due to the non-transitive routing nature of Google VPC peering. Be aware you can also replace the peered hub-and-spoke architecture with a VPN hub-and-spoke model, so that the Private Service Access route can be advertised over the VPN.
    • for a client connecting from on-prem, make sure the Private Service Access route is advertised over the VPN or Interconnect. If the VLAN or VPN lands in the VPC that is peered directly with the Private Service Access VPC, you'll be able to connect directly. If the VLAN or VPN lands in another peered VPC, you need to use the proxy to connect.

You will never be able to ping a Cloud SQL instance. You can only initiate connections through an application, the respective database client, or the Cloud SQL Proxy.

Security

  • Public IP - you can use Authorized Networks to maintain the whitelist of IP addresses that can access your instance
  • There are no ingress firewall rules to control ingress to Cloud SQL
  • You can use the Cloud SQL Proxy to encrypt all traffic to and from the database using TLS and to control who and what can connect to your Cloud SQL instance
gcloud sql instances delete <sql-instance-name>
gcloud sql instances delete <sql-instance-name> --impersonate-service-account <sa name> # requires the Service Account Token Creator role

SQL Proxy

  • It opens a secure, end-to-end encrypted tunnel. In other words, even if your database doesn't have an SSL certificate, the data is encrypted in transit.
  • Before opening the tunnel, the binary checks against the IAM service API whether the current credential is authorized to access the Cloud SQL instance. This is an additional layer of security on top of the standard database authentication by user/password.
  • The tunnel can be opened on a local port (TCP connection mode) or as a Unix socket (not possible in a Windows environment) - see the invocation sketch after this list.
  • It adds additional latency and a potential point of failure. It's a matter of tradeoffs.
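
A minimal sketch of both modes with the v1 cloud_sql_proxy binary (the connection name PROJECT:REGION:INSTANCE and paths are placeholders):

# TCP mode: listen on a local port
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:5432

# Unix socket mode (not available on Windows): sockets are created under -dir
./cloud_sql_proxy -dir=/cloudsql -instances=PROJECT:REGION:INSTANCE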

Serverless services connectivity with Cloud SQL Private IP

Cloud Run, Cloud Functions and App Engine

Solution:

  • Create a serverless VPC connector in the same region as your serverless service and, of course, connected to the same VPC as your Cloud SQL instance.
  • Deploy your serverless service with this VPC connector; this is supported by Cloud Run, Cloud Functions and App Engine.
  • In your app, use the Cloud SQL private IP (instead of the Unix socket connection). See the gcloud sketch after this list.
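
A minimal sketch for Cloud Run (connector name, region, network, range and image are examples):

# Create a connector in the same region and VPC as the Cloud SQL instance
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=my-vpc \
  --range=10.8.0.0/28

# Deploy a Cloud Run service that routes traffic through the connector
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-image \
  --region=us-central1 \
  --vpc-connector=my-connector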

https://medium.com/google-cloud/cloud-sql-with-private-ip-only-the-good-the-bad-and-the-ugly-de4ac23ce98a

Database Migration Service (DMS)

Database Migration Service is an easy-to-use, serverless migration tool that provides minimal-downtime database migration to Cloud SQL for MySQL and Cloud SQL for PostgreSQL.

DMS currently focuses on homogeneous migrations—that is, migrations across compatible database engines, which don’t require transformations such as schema conversion from source to destination. In this case, the goal of the migration is for the migrated database to be as close as possible to a copy of the source database, available in the Cloud SQL destination.

Certificates

gcloud sql ssl client-certs create CERT_NAME client-key.pem --instance=INSTANCE_NAME # create a client cert and save the key
gcloud sql ssl client-certs describe CERT_NAME --instance=INSTANCE_NAME --format="value(cert)" > client-cert.pem  # save the client cert
gcloud sql instances describe INSTANCE_NAME --format="value(serverCaCert.cert)" > server-ca.pem  # save the server (instance) cert
# gcloud sql ssl-certs create acpkk-dev acpkk-dev-server.pam --instance=acpkk-dev --project=dev-biz-acpkk-gcp-pr
gcloud sql ssl client-certs create acpkk-dev-client acpkk-dev-client-key.pem --instance=acpkk-dev --project=dev-biz-acpkk-gcp-pr # generates the client key and cert
gcloud sql instances describe acpkk-dev   --project=dev-biz-acpkk-gcp-pr --format="value(serverCaCert.cert)" > acpkk-dev-server-ca.pem

Server certificates

gcloud beta sql ssl server-ca-certs list --instance=INSTANCE_NAME    # show information about the server CA certificate
gcloud beta sql ssl server-ca-certs create --instance=INSTANCE_NAME  # create a server certificate
gcloud beta sql ssl server-ca-certs list --format="value(cert)" \
--instance=INSTANCE_NAME > FILE_PATH/FILE_NAME.pem                   # download the certificate to a local PEM file

Client certificates

gcloud sql ssl client-certs create CERT_NAME client-key.pem \
--instance=INSTANCE_NAME                                             # Create a client certificate

gcloud sql ssl client-certs describe CERT_NAME \
--instance=INSTANCE_NAME \
--format="value(cert)" > client-cert.pem                             # Retrieve the public key for the certificate you just created and copy it into client-cert.pem

gcloud sql instances describe INSTANCE_NAME \
--format="value(serverCaCert.cert)" > server-ca.pem                  # Copy the server certificate into server-ca.pem

Replication

Cloud SQL supports the following types of replicas: read replicas, cross-region read replicas, external read replicas, and cascading read replicas.

gcloud sql connect  qwiklabs-demo --user=root
CREATE DATABASE bike;
USE bike;
CREATE TABLE london1 (start_station_name VARCHAR(255), num INT);

USE bike;
CREATE TABLE london2 (end_station_name VARCHAR(255), num INT);

Upload CSV files to tables

In your Cloud SQL instance page, click IMPORT. In the Cloud Storage file field, click Browse, then click the arrow opposite your bucket name, and then click start_station_data.csv. Click Select. Select the bike database and type "london1" as your table. Click Import.

IAM Authentication

  • Set the instance flag:
cloudsql.iam_authentication=on
  • Grant the "Cloud SQL Instance User" role to the users, to allow them to log in to the instance.
  • Add the users to the Cloud SQL instance.

Navigate to ‘Console → SQL → Select Instance → Users → Add user account→ Cloud IAM → Enter email address → Add’
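
A gcloud equivalent of the console flow above (email and instance name are examples):

gcloud sql users create [email protected] \
  --instance=INSTANCE_NAME \
  --type=cloud_iam_user
# for a service account, use --type=cloud_iam_service_account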

  • $ gcloud auth login

  • create the proxy connection $ ./cloud_sql_proxy -enable_iam_login -instances=PROJECT:REGION:CLOUDSQL_INSTANCE_NAME=tcp:port

  • login to database

psql "host=127.0.0.1 dbname=[DB_NAME] user=[IAM email account] sslmode=disable"
psql "host=127.0.0.1 port=$CLOUDSQLPROXYPORT dbname=$DBNAME [email protected] sslmode=disable"

or

$ PGPASSWORD=$(gcloud auth print-access-token) PGSSLMODE=disable \
psql --host=localhost --username="[email protected]" --dbname=$DBNAME --port=$CLOUDSQLPROXYPORT  # psql has no --sslmode flag; sslmode is set via the PGSSLMODE environment variable
PGPASSWORD=$(gcloud sql generate-login-token) psql --host=HOSTNAME --username=USERNAME --dbname=DATABASE_NAME
unset PGPASSWORD

https://cloud.google.com/sql/docs/postgres/add-manage-iam-users#console

curl -H "Content-Type: application/x-www-form-urlencoded" \
-d "access_token=$(gcloud auth application-default print-access-token)" \
https://www.googleapis.com/oauth2/v1/tokeninfo

Verify that the IAM user or service account is granted the cloudsql.instances.login permission, via the predefined Cloud SQL Instance User role or a custom role in the IAM policy of the instance's project.
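
A sketch of granting the predefined role at the project level (project and user are placeholders):

gcloud projects add-iam-policy-binding my-project \
  --member="user:[email protected]" \
  --role="roles/cloudsql.instanceUser"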

  • DBeaver
export INSTANCE="..."
export PGPASSWORD=$(gcloud auth print-access-token) 
export DBNAME=postgres
export IAMUSER=mydbuser
/Applications/DBeaver.app/Contents/MacOS/dbeaver -con \
"driver=PostgreSQL|prop.ssl=true|prop.sslmode=verify-full|prop.sslfactory=org.postgresql.ssl.jdbc4.LibPQFactory|prop.sslrootcert=/custom-dir/us-east-1-bundle.pem|host=$INSTANCE|port=5432|database=$DBNAME|user=$IAMUSER|password=$PGPASSWORD"
  • SQL Workbench/J
export INSTANCE="..."
export PGPASSWORD=$(gcloud auth print-access-token) 
export DBNAME=postgres
export IAMUSER=mydbuser

java -jar ~/workbench/sqlworkbench.jar \
    -url=jdbc:postgresql://$INSTANCE:5432/$DBNAME \
    -username=$IAMUSER \
    -password=$PGPASSWORD

Export/Import dump

Parameters:

  • --async - does not block the console
  • --offload - offloads the export so it does not disturb other processes running on the database (serverless export)
  • --table, -t - list of tables to export, e.g. -t public.test
gcloud sql export sql INSTANCE URI [--async] [--database=DATABASE,[DATABASE,…], -d DATABASE,[DATABASE,…]] [--offload] [--table=TABLE,[TABLE,…], -t TABLE,[TABLE,…]] [GCLOUD_WIDE_FLAG …]

gcloud sql import sql INSTANCE URI [--async] [--database=DATABASE, -d DATABASE] [--user=USER] [GCLOUD_WIDE_FLAG …]

gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/sqldumpfile.gz --database=DATABASE_NAME --offload

gcloud sql import sql INSTANCE_NAME gs://BUCKET_NAME/IMPORT_FILE_NAME --database=DATABASE_NAME

gcloud sql export sql db2-tst-biz-gcp-pr "gs://migracja-tst/Cloud_SQL_Export_2023-05-05 (15:00:12).gz" --database ebkdb #-t public.test
gcloud sql import sql db2-lab-biz-gcp-pr "gs://migracja-lab/Cloud_SQL_Export_2023-05-05 (15:00:12).sgz" --database=ebkdb --user postgres --async

Export/Import CSV

gcloud sql import csv <instance_name> gs://<bucket_name>/<PATH>/TGEN008#A.gz --database=ebkdb --table=aci_tmp_db.e008 --fields-terminated-by="3B" --user=ebkadm --project dev-biz-acpkk-gcp-pr --escape="5C"
gcloud sql export csv INSTANCE_NAME gs://BUCKET_NAME/FILE_NAME \
--database=DATABASE_NAME \
--offload \
--query=SELECT_QUERY
gcloud sql export csv INSTANCE_NAME gs://BUCKET_NAME/FILE_NAME \
--database=DATABASE_NAME \
--offload \
--query=SELECT_QUERY \
--quote="22" \
--escape="5C" \
--fields-terminated-by="2C" \
--lines-terminated-by="0A"

The --quote, --escape, --fields-terminated-by and --lines-terminated-by values are ASCII codes in hex: 22 is a double quote, 5C a backslash, 2C a comma, 0A a newline (and 3B, used above, a semicolon).
The Apps Script below (by Amit Agarwal, ctrlq.org) lists the CSV files in a Cloud Storage bucket and imports each one through the Cloud SQL Admin API:

function uploadtoCloudSQL() {
  // Written by Amit Agarwal [email protected]
  // Web: www.ctrlq.org

  var files = []; // names of the CSV files found in the bucket

  var service = getService();

  if (!service.hasAccess()) {
    Logger.log(service.getAuthorizationUrl());
    return;
  }

  var token = service.getAccessToken();

  // Getting list of files to be processed
  var result = JSON.parse(
    UrlFetchApp.fetch('https://www.googleapis.com/storage/v1/b/BUCKET_NAME/o', {
      method: 'GET',
      headers: {
        Authorization: 'Bearer ' + token,
      },
    }).getContentText()
  );

  for (var i = 0; i < result.items.length; i++) {
    if (result.items[i].name.indexOf('.') !== -1) {
      files.push(result.items[i].name);
    }
  }

  for (var f = 0; f < files.length; f++) {
    var path = files[f].split('/');

    var payload =
      '{"importContext" : { "csvImportOptions": {"table":"MY_TABLE"}, "fileType": "CSV", "database": "MY_DATABASE", "uri": "gs://BUCKET_NAME/FOLDER/CSVFILE"}}'
        .replace('FOLDER', path[0])
        .replace('CSVFILE', path[1]);

    UrlFetchApp.fetch('https://www.googleapis.com/sql/v1beta4/projects/PROJECT/instances/INSTANCE/import', {
      method: 'POST',
      contentType: 'application/json',
      headers: {
        Authorization: 'Bearer ' + token,
      },
      payload: payload,
      muteHttpExceptions: true,
    });

    UrlFetchApp.fetch('https://www.googleapis.com/storage/v1/b/BUCKET_NAME/o/' + encodeURIComponent(files[f]), {
      method: 'DELETE',
      headers: {
        Authorization: 'Bearer ' + token,
      },
    });

    // Wait a bit before the next file, since Cloud SQL runs only one import operation at a time
    Utilities.sleep(5000);
  }
}

Shrink

gcloud alpha sql instances perform-storage-shrink <INSTANCE_NAME>
  1. Shutting down the database
  2. Taking an internal backup as a restore point (This backup isn't visible to customers)
  3. Creating a new disk of smaller size
  4. Copying the database files to the new disk
  5. Mounting the new disk on the instance and starting up the database

The second option, with minimal downtime, is to use Database Migration Service to migrate the data to a new database instance.

Restoring backup in different region

https://itnext.io/restoring-cloudsql-in-different-region-ff3c8bb179ab

Encrypt and Re-encrypt

IAM auth

export USERNAME=$(gcloud config list --format="value(core.account)")
export CLOUDSQL_INSTANCE=postgres-orders
export POSTGRESQL_IP=$(gcloud sql instances describe $CLOUDSQL_INSTANCE --format="value(ipAddresses[0].ipAddress)")
export PGPASSWORD=$(gcloud auth print-access-token)
psql --host=$POSTGRESQL_IP --username=$USERNAME --dbname=orders

PgAudit

Query in GCP Logging

resource.type="cloudsql_database"
logName="projects/(GCP Project)/logs/cloudaudit.googleapis.com%2Fdata_access"
protoPayload.request.@type="type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry"
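
A sketch of enabling pgAudit on an instance (flag values are examples; note that --database-flags overwrites all flags already set on the instance):

gcloud sql instances patch INSTANCE_NAME \
  --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=all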