Google Cloud Platform - dennisholee/notes GitHub Wiki
List services along with status
gcloud services list --available --format 'table(config.name,config.title,state)'
gcloud services list --available --format 'table(config.name,config.title,state)' | head
NAME                                     TITLE                                   STATE
abusiveexperiencereport.googleapis.com   Abusive Experience Report API           DISABLED
acceleratedmobilepageurl.googleapis.com  Accelerated Mobile Pages (AMP) URL API  DISABLED
accessapproval.googleapis.com            Access Approval API                     DISABLED
accesscontextmanager.googleapis.com      Access Context Manager API              DISABLED
actions.googleapis.com                   Actions API                             DISABLED
adexchangebuyer-json.googleapis.com      Ad Exchange Buyer API                   DISABLED
adexchangebuyer.googleapis.com           Ad Exchange Buyer API II                DISABLED
adexchangeseller.googleapis.com          Ad Exchange Seller API                  DISABLED
adexperiencereport.googleapis.com        Ad Experience Report API                DISABLED
Enable services
gcloud services enable {config.name}
List enabled services
gcloud services list --enabled
Create Kubernetes Cluster
gcloud container clusters create mycluster --zone=asia-southeast1-a --machine-type=f1-micro --num-nodes=3
Add node pool
gcloud container node-pools create tnodepool --zone asia-east2-a --num-nodes 3 --machine-type f1-micro
Resize node pool
gcloud container clusters resize ktest --size 2 --node-pool tnodepool --zone asia-east2-a
Enable node autoscale
gcloud container clusters update ktest --zone asia-east2-a --enable-autoscaling --max-nodes 4
List Kubernetes Cluster
gcloud container clusters list
View application (pods) logs
kubectl logs -l app={pod_name}
Import GCP Service Account key
Import the account key into the container cluster
# Create the service account
gcloud iam service-accounts create sv-dev-ac --display-name sv-dev-ac
gcloud iam service-accounts list
# Create service account key
gcloud iam service-accounts keys create sv-dev-ac.json --iam-account sv-dev-ac@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --key-file-type json
# Import service account key
kubectl create secret generic my-dev-ac --from-file sv-dev-ac.json --type json
kubectl get secrets my-dev-ac -o=json
Sample output:
{
  "apiVersion": "v1",
  "data": {
    "my-dev-ac.json": <<base64 encoded key>>
  },
  "kind": "Secret",
  "metadata": {
    "creationTimestamp": "2019-01-02T02:32:04Z",
    "name": "my-dev-ac",
    "namespace": "default",
    "resourceVersion": "1753",
    "selfLink": "/api/v1/namespaces/default/secrets/my-dev-ac",
    "uid": "978c9034-0e36-11e9-864c-42010aaa0025"
  },
  "type": "json"
}
Note:
- The "my-dev-ac.json" key in the "data" section is referenced in the application's Kubernetes manifest
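The value under `data` is base64-encoded; it can be decoded locally to recover the key file. A minimal sketch, using a hypothetical stand-in payload rather than real key material:

```shell
# Kubernetes stores secret values base64-encoded; decoding recovers the
# original sv-dev-ac.json content. The payload below is a hypothetical
# stand-in, not a real service account key.
encoded=$(printf '%s' '{"type":"service_account"}' | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

With the real secret, the same decode applies to the value printed by the `kubectl get secrets` command above.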
Example case - access pubsub resources from cluster https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
- Enable pubsub subscription permissions on the service account
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:my-dev-ac@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role roles/pubsub.subscriber
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:my-dev-ac@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role roles/pubsub.viewer
- Define the application's kubernetes configuration file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub
spec:
  selector:
    matchLabels:
      app: pubsub
  template:
    metadata:
      labels:
        app: pubsub
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: my-dev-ac
      containers:
      - name: subscriber
        image: gcr.io/google-samples/pubsub-sample:v1
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/my-dev-ac.json
Note:
- spec.template.spec.volumes[].secret.secretName and spec.template.spec.containers[].env[].value refer to the secret captured from
kubectl get secrets my-dev-ac -o=json
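The credential path inside the container is simply the volumeMount's mountPath joined with the key file name stored under the secret's `data`; a small sketch of that composition:

```shell
# The secret is mounted as files under mountPath; GOOGLE_APPLICATION_CREDENTIALS
# must point at the file whose name matches the key under .data in the secret.
mount_path=/var/secrets/google
key_file=my-dev-ac.json
creds_path="${mount_path}/${key_file}"
echo "GOOGLE_APPLICATION_CREDENTIALS=${creds_path}"
```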
- Download NodeJS src
git clone https://github.com/dennisholee/mob_api_poc.git ~/mob_api_poc
- Create Dockerfile
FROM node:8
# Create app directory
WORKDIR /tmp/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
- Build docker image
docker build -t mob-api .
- (Optional) Login to private google repository
gcloud auth configure-docker
- Tag and push docker image for push to registry
docker tag mob-api gcr.io/${DEVSHELL_PROJECT_ID}/mob-api:v1
docker push gcr.io/${DEVSHELL_PROJECT_ID}/mob-api:v1
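Container Registry image references follow the form HOSTNAME/PROJECT_ID/IMAGE:TAG; a quick sketch composing one (the project id below is a placeholder):

```shell
# Compose a gcr.io image reference from its parts.
# my-gcp-project is a hypothetical project id.
project_id=my-gcp-project
image=mob-api
tag=v1
image_ref="gcr.io/${project_id}/${image}:${tag}"
echo "$image_ref"
```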
- Setup and deploy docker image to kubernetes cluster
gcloud container clusters create kcluster --zone asia-east2-a --machine-type f1-micro --num-nodes 3
gcloud container clusters get-credentials kcluster --zone asia-east2-a
kubectl run mob-api --image gcr.io/${DEVSHELL_PROJECT_ID}/mob-api:v1
- Expose services to public internet
kubectl expose deployment mob-api --type LoadBalancer --port 80 --target-port 3000
<---------- to be updated ------------> NodeJS publish message
- Create service account
gcloud iam service-accounts create sv-my-dev --display-name sv-my-dev
gcloud iam service-accounts keys create sv-my-dev.json --iam-account sv-my-dev@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/sv-my-dev.json
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:sv-my-dev@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role roles/pubsub.viewer
- Create topic and subscription
gcloud pubsub topics create my-topic
gcloud pubsub subscriptions create my-subscription --topic my-topic
gcloud alpha pubsub subscriptions update my-subscription --push-endpoint https://my-subscription.${DEVSHELL_PROJECT_ID}.appspot.com/endpoint
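The push endpoint above is an App Engine-style URL assembled from the subscription name and project id; a sketch with placeholder values:

```shell
# Build the push endpoint URL; both values are placeholders.
subscription=my-subscription
project_id=my-gcp-project
endpoint="https://${subscription}.${project_id}.appspot.com/endpoint"
echo "$endpoint"
```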
<------------------------------------>
Error
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=
(1) insufficient regional quota to satisfy request: resource "CPUS": request requires '9.0' and is short '1.0'. project has a quota of '8.0' with '8.0' available
Check whether the "region" flag is used, as this implies multiple-zone replication and may cause a quota issue.
Docker Management
Register auth
gcloud auth configure-docker
- Enabling IP forwarding for instances
--can-ip-forward
- Instance not found
gcloud compute instances stop instance-1
Did you mean zone [asia-east1-a] for instance: [instance-1] (Y/n)? Y
ERROR: (gcloud.compute.instances.stop) HTTPError 404: The resource 'projects/{project}/zones/asia-east1-a/instances/instance-1' was not found
Check the zone is correct:
gcloud compute instances stop instance-1 --zone asia-east2-a
- Stop instance
gcloud compute instances stop {INSTANCE_NAME} --zone {ZONE}
- Set machine type
gcloud compute instances set-machine-type {INSTANCE_NAME} --zone {ZONE} --machine-type {MACHINE_TYPE}
- Start instance
gcloud compute instances start {INSTANCE_NAME} --zone {ZONE}
- (Optional) Stop instance
gcloud compute instances stop {instance name} --zone {zone}
Note: list instances with gcloud compute instances list
- List instance disks
gcloud compute disks list
- Create disk snapshot
gcloud compute disks snapshot {disk_name} --snapshot-names {snapshot_name} --zone {zone}
- Stop compute instance
gcloud compute instances stop {instance name} --zone {zone}
- Create disk from snapshot
gcloud compute disks create {disk_name} --zone {target compute instance zone} --source-snapshot {snapshot_name}
- Detach existing disk
gcloud compute instances detach-disk {instance_name} --disk {disk_name} --zone {zone}
Note: the disk section is missing when running gcloud compute instances describe {instance_name}
- Attach new disk
gcloud beta compute instances attach-disk {instance_name} --zone {zone} --disk {disk_name}
- Create disk
gcloud compute disks create {disk_name} --type pd-standard --zone {zone}
- Attach disk
gcloud compute instances attach-disk {instance_name} --zone {zone} --disk {disk_name}
- Check disk attached
gcloud compute instances describe {instance_name} --zone {zone}
At least two disks should be listed.
The following should be carried out on the instance prompt:
# List the device
sudo lsblk
# Format the device
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/[DEVICE_ID]
For details on registering the device to file system table: https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
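To make the mount survive reboots, the guide linked above registers the disk in /etc/fstab keyed by UUID. A sketch building such an entry (the UUID and mount point are hypothetical; on the instance the UUID would come from `sudo blkid` against the device):

```shell
# Build an /etc/fstab line for the formatted disk.
# uuid and mount_point are hypothetical example values.
uuid="0aae2ea8-abcd-ef01-2345-0123456789ab"
mount_point=/mnt/disks/data
fstab_line="UUID=${uuid} ${mount_point} ext4 discard,defaults,nofail 0 2"
echo "$fstab_line"
```

The `nofail` option lets the instance boot even if the disk is detached later.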
- (Optional) Create snapshot in source project if one does not exist
gcloud compute disks snapshot {disk_name} --zone {zone} --snapshot-names {snapshot_name}
- List the snapshot URI
gcloud compute snapshots list --uri
Carry out the following in the target project console ...
- Create disk from snapshot
gcloud compute disks create {disk_name} --source-snapshot {source_disk_uri}
- Create instance
gcloud compute instances create {instance_name} --zone {zone} --machine-type f1-micro --disk name={disk_name},boot=yes
gcloud compute instance-templates create {template_name} --machine-type {machine_type}
gcloud compute instance-templates delete {template_name}
gcloud compute instance-groups managed create {group_name} --zone {zone} --template {instance_template_name} --size {instance_count}
gcloud compute instance-groups list
gcloud compute instance-groups managed describe {group_name} --zone {zone}
Prepare the Nginx image
- Create instance
gcloud compute instances create ngnix-img-instance --zone asia-east2-a --machine-type f1-micro --tags http-server
- Install nginx in the new instance
gcloud compute ssh ngnix-img-instance
sudo apt-get update
sudo apt-get install nginx -y
sudo service nginx status
- Create snapshot of disk
gcloud compute disks list
gcloud compute disks snapshot ngnix-img-instance --snapshot-names ngnix-img-snapshot --zone asia-east2-a
- Create image from snapshot
gcloud compute images create ngnix-img --source-snapshot ngnix-img-snapshot
- Create instance template
gcloud compute instance-templates create ngnix-tmpl --machine-type f1-micro --image ngnix-img --tags http-server
- Create instance group from template
gcloud compute instance-groups managed create ngnix-mgt-grp --zone asia-east2-a --template ngnix-tmpl --size 2
Create load balancing for the nginx instances
- Create health check to associate to backend service
gcloud compute health-checks create http ngnix-grp-healthcheck
- Create backend service
gcloud compute backend-services create ngnix-bk-srv --health-check ngnix-grp-healthcheck --global
- Add instance group to backend service
gcloud compute backend-services add-backend ngnix-bk-srv --global --instance-group ngnix-mgt-grp --instance-group-zone asia-east2-a
- Create URL map for the backend service
gcloud compute url-maps create ngnix-map-url --default-service ngnix-bk-srv
- Register proxy to direct traffic to the URL map
gcloud compute target-http-proxies create ngnix-proxy --url-map ngnix-map-url
- Register public IP address for the load balancer
gcloud compute addresses create ngnix-lb --global
- Create forwarding rule from public IP address to proxy
gcloud compute forwarding-rules create ngnix-fwd-rule --global --address ngnix-lb --target-http-proxy ngnix-proxy --ports 80
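The load-balancer objects above must be created in dependency order; this tiny dry run only prints the chain for reference (no gcloud calls):

```shell
# Assemble the creation order of the HTTP load balancer objects.
order=""
for step in health-check backend-service url-map target-http-proxy forwarding-rule; do
  order="${order}${order:+ -> }${step}"
done
echo "$order"
```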
- Create instance template
gcloud compute instance-templates create my-template-1 --machine-type f1-micro --image ubuntu-minimal-1604-xenial-v20181203 --image-project ubuntu-os-cloud
- Create instance group
gcloud compute instance-groups managed create my-instance-group --zone asia-east2-a --template my-template-1 --size 3
- Create new instance template
gcloud compute instance-templates create my-template-2 --machine-type g1-small --image ubuntu-minimal-1604-xenial-v20181203 --image-project ubuntu-os-cloud
- Perform rolling patch
gcloud beta compute instance-groups managed rolling-action start-update my-instance-group --zone asia-east2-a --version template=my-template-2
- Create compute instance
gcloud compute instances create instance --zone asia-east2-a --machine-type f1-micro
- SSH to new instance
gcloud compute ssh instance --zone asia-east2-a
- Install NodeJS (follow instructions from https://github.com/nodesource/distributions/blob/master/README.md)
curl -sL https://deb.nodesource.com/setup_11.x | sudo -E bash -
sudo apt-get install -y nodejs
# Using Debian, as root
curl -sL https://deb.nodesource.com/setup_11.x | bash -
apt-get install -y nodejs
- Install git cli
sudo apt-get install git -y
- Download api git repo
git clone https://github.com/dennisholee/mob_api_poc.git ~/mob_api_poc
- Install dependencies
cd ~/mob_api_poc
npm install
npm start
Note: if a "404 Not Found: nodemon@{version}" error occurs, reinstall nodemon:
npm uninstall nodemon
npm i -D nodemon
- Add firewall rule
gcloud compute firewall-rules create nodejs-rule --allow tcp:3000 --target-tags nodejs-rule
gcloud compute instances add-tags instance --zone asia-east2-a --tags nodejs-rule
- (Optional) Create the updated template if one does not exist
- List available instance template
gcloud compute instance-templates list
- Update instance group configuration to use new template
gcloud compute instance-groups managed set-instance-template ngnix-mgt-grp --zone asia-east2-a --template ngnix-tmpl-new
- Perform rolling update
gcloud beta compute instance-groups managed rolling-action start-update ngnix-mgt-grp --zone asia-east2-a --version='template=ngnix-tmpl-new'
- Create firewall rule
gcloud compute firewall-rules create my-http-rule2 --target-tags my-http-rule2 --source-ranges "0.0.0.0/0" --allow tcp:80
- Assign tag to instance
gcloud compute instances add-tags tagged-instance --tags my-http-rule2 --zone asia-east2-a
gcloud compute firewall-rules list
gcloud compute firewall-rules delete {firewall-rule}
NB: For bootstrap issues etc., enable the serial port:
gcloud compute instances add-metadata {instance_name} --metadata serial-port-enable=1 --zone asia-east2-a
ERROR: (gcloud.app.deploy) Error Response: [7] The user bookself-project-######@appspot.gserviceaccount.com must have permission to read the image at eu.gcr.io/bookself-project-######/appengine/default.20181115t105853:latest
Grant "editor" permission to the service account:
gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member serviceAccount:bookself-project-######@appspot.gserviceaccount.com --role roles/editor
Create Projects
gcloud projects create {project_id} --name {project_name}
List projects
gcloud projects list
Set project
gcloud config set project {project_id}
Regions
List regions
gcloud compute regions list
Display configuration
gcloud config list
List machine types
gcloud compute machine-types list
List standard images
gcloud compute images list
Create compute instance
gcloud compute instances create {instance_name} --machine-type=f1-micro --zone=asia-southeast1-a
GCP Quota
gcloud compute project-info describe
Google shell configuration (gcloud)
gcloud config set compute/zone asia-east2-a # Set the default zone
Make bucket
gsutil mb -l {location} -p {project ID} gs://{global unique name}
Location: https://cloud.google.com/storage/docs/bucket-locations
List storage (buckets)
gsutil ls -p {project_id} -l
gsutil ls gs://
Describe bucket
gsutil ls -L -b gs://{bucket_name}
Remove bucket
gsutil rm -r gs://{bucket_name}
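Bucket names must be globally unique and follow naming rules (lowercase letters, digits, dashes, underscores and dots, 3-63 characters). A rough local pre-check, a simplification rather than the full rule set:

```shell
# Rough sanity check of a candidate bucket name (simplified rules:
# starts/ends alphanumeric, lowercase, 3-63 chars).
valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'
}
valid_bucket_name "my-unique-bucket" && echo "ok"
```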
You can only set customer-supplied encryption keys on individual objects. You cannot set a default customer-supplied encryption key for a bucket. https://cloud.google.com/storage/docs/encryption/customer-supplied-keys
Encrypt bucket with customer managed encryption key
Create key in KMS
- Create keyring
gcloud kms keyrings create my-keyring --location asia-east2
- Create encryption key
gcloud kms keys create gcs-secret --location asia-east2 --keyring my-keyring --purpose encryption
- Set gcloud storage default key
gsutil kms encryption -k {key_path} gs://{bucket_name}
Encrypt object with customer supplied encryption key
- Generate AES key if there isn't one
git clone https://github.com/dennisholee/common-utils.git
cd common-utils
npm run genEncKey
- Create boto config file "my-boto.cfg" as follows
[GSUtil]
encryption_key={AES key}
- Upload object using the new key
BOTO_PATH={my-boto.cfg} gsutil cp {file} gs://{bucket}
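The encryption_key in the boto file is a 256-bit AES key, base64-encoded. As an alternative to the npm helper above, such a key can be generated with openssl (a sketch, assuming openssl is installed):

```shell
# Generate a base64-encoded 256-bit key suitable for use as a
# customer-supplied encryption key in the boto config.
csek=$(openssl rand -base64 32)
echo "encryption_key=${csek}"
```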
- Create role "my_project_dev"
gcloud iam roles create my_project_dev --project $DEVSHELL_PROJECT_ID
gcloud iam roles list --project $DEVSHELL_PROJECT_ID
- Add user to role
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member user:{user_email} --role projects/$DEVSHELL_PROJECT_ID/roles/my_project_dev
gcloud projects get-iam-policy $DEVSHELL_PROJECT_ID
- Grant bucket view access to role
gcloud iam roles update my_project_dev --project $DEVSHELL_PROJECT_ID --add-permissions storage.buckets.list,storage.objects.get
- Create bucket
gsutil mb -l asia-east2 gs://$DEVSHELL_PROJECT_ID
- Upload document and try to fetch artifact
gsutil cp {document_path} gs://$DEVSHELL_PROJECT_ID
Fetch document: https://console.cloud.google.com/storage/{PROJECT_ID}
- Create the primary and its corresponding log buckets
gsutil mb -l asia-east2 gs://${DEVSHELL_PROJECT_ID}
gsutil mb -l asia-east2 gs://${DEVSHELL_PROJECT_ID}_log
gsutil list
- Enable log updates to the log bucket
gsutil acl ch -g cloud-storage-analytics@google.com:W gs://${DEVSHELL_PROJECT_ID}_log
- Set the private access to the log bucket
gsutil defacl set project-private gs://${DEVSHELL_PROJECT_ID}_log
- Turn on logging
gsutil logging set on -b gs://${DEVSHELL_PROJECT_ID}_log gs://${DEVSHELL_PROJECT_ID}
gsutil ls -L -b gs://${DEVSHELL_PROJECT_ID} # Note "Logging configuration: Present"
- Create compute instance
gcloud compute instances create my-instance --zone asia-east2-a --machine-type f1-micro
- Add firewall rule
gcloud compute firewall-rules create fw-node-app --allow tcp:3000 --target-tags fw-node-app
gcloud compute instances add-tags my-instance --zone asia-east2-a --tags=fw-node-app
- Create service account for the application to access Cloud Storage with appropriate permissions (note: the permissions should be more fine-grained)
gcloud iam service-accounts create sv-my-dev
gsutil iam ch serviceAccount:sv-my-dev@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com:objectCreator,objectViewer gs://$DEVSHELL_PROJECT_ID
- Create service account sv-my-dev's key for NodeJS
gcloud iam service-accounts keys create sv-my-dev.json --iam-account sv-my-dev@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com
- Login to "my-instance"
gcloud compute ssh my-instance --zone asia-east2-a
- Via GCP cloud shell upload the file "sv-my-dev.json" to compute engine "my-instance"
The following to be executed on compute engine.
- Define the environment variable to indicate the location of the service account credential
export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/sv-my-dev.json
- Prep the compute engine and setup the application
sudo apt-get update -y
curl -sL https://deb.nodesource.com/setup_11.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo apt-get install git -y
git clone https://github.com/dennisholee/my-gcp-gcs.git
- Clone the "env/template.properties" to "env/development.properties" and update the bucket name
[main]
app.port=3000
[gcp]
gcs.url=https://console.cloud.google.com/storage/
gcs.bucketName={BUCKET_NAME}
- Install the application dependencies and start
npm install
npm run dev=development
Error case:
CommandException: "logging set on" command spanning providers not allowed.
Make sure the logging bucket exists and is defined in the CLI argument.
To enable Cloud SQL API:
gcloud services enable sqladmin.googleapis.com
gcloud sql instances create my-instance --tier db-f1-micro --region asia-east2
gcloud sql instances create my-instance --tier db-f1-micro --region asia-east2 --database-version POSTGRES_9_6
gcloud sql databases create my-database --instance my-instance
gcloud sql users set-password root --host=% --instance={INSTANCE_NAME} --prompt-for-password
https://cloud.google.com/sql/docs/mysql/import-export/creating-sqldump-csv#std
# --hex-blob: export binary columns as hex
# --skip-triggers: ignore triggers
# --set-gtid-purged=OFF: strip global transaction IDs
mysqldump \
--databases [DATABASE_NAME] \
-h [INSTANCE_IP] \
-u [USERNAME] -p \
--hex-blob \
--skip-triggers \
--single-transaction \
--set-gtid-purged=OFF \
--ignore-table [VIEW_NAME1] [...] \
--default-character-set=utf8mb4 > [SQL_FILE].sql
https://www.slideshare.net/MyDBOPS/mysql-gtid-concepts-implementation-and-troubleshooting
Global Transaction Identifier (GTID) is a unique identifier created for each transaction committed on the server. This identifier is unique not only to the server on which it originated but across all servers in a given replication cluster.
GTID = Server_uuid:transaction_id
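Since a GTID is just `server_uuid:transaction_id`, the two parts split cleanly with shell parameter expansion (the GTID value below is a made-up example):

```shell
# Split a GTID into its server UUID and transaction id.
# The gtid value is a made-up example.
gtid="3e11fa47-71ca-11e1-9e33-c80aa9429562:23"
server_uuid="${gtid%%:*}"
transaction_id="${gtid##*:}"
echo "server=${server_uuid} tx=${transaction_id}"
```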
# Create bucket and update SQL file
gsutil mb gs://${DEVSHELL_PROJECT_ID}
gsutil cp {SQL_FILE} gs://${DEVSHELL_PROJECT_ID}
# Create instance if not available
gcloud sql instances create {INSTANCE_NAME}
# Import SQL file
gcloud sql import sql {INSTANCE_NAME} gs://${DEVSHELL_PROJECT_ID}/${SQL_FILE}
ERROR: (gcloud.sql.import.csv) HTTPError 403: The service account does not have the required permissions for the bucket.
https://cloud.google.com/sql/docs/mysql/import-export/importing
# Update permissions ...
export sqluser=`gcloud sql instances describe rentals --format="value(serviceAccountEmailAddress)"`
gsutil acl ch -u $sqluser:W gs://${DEVSHELL_PROJECT_ID}
gsutil acl ch -u $sqluser:R gs://${DEVSHELL_PROJECT_ID}/{IMPORT_FILE}
# Import files ... (refer to above)
# Delete permissions ...
gsutil acl ch -d $sqluser gs://${DEVSHELL_PROJECT_ID}/{IMPORT_FILE}
gsutil acl ch -d $sqluser gs://${DEVSHELL_PROJECT_ID}
gcloud sql connect my-instance --user=root --quiet
https://cloud.google.com/sql/docs/mysql/sql-proxy
- Create compute instance with "sqlservice.admin" scope
gcloud compute instances create gce-db --zone asia-east2-b --machine-type f1-micro --scopes https://www.googleapis.com/auth/sqlservice.admin
- Get the Cloud SQL connection name (connectionName) for later setup
gcloud sql instances describe my-instance --format "value(connectionName)"
- Login to remote instance
gcloud compute ssh gce-db --zone asia-east2-b
Execute the following on compute engine:
- Patch and install the instance
sudo apt-get update
sudo apt-get install mysql-client -y
- Deploy the Cloud SQL proxy
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
- Start the Cloud SQL proxy
./cloud_sql_proxy -instances={connectionName}=tcp:3306 &
- Connect to the Cloud SQL proxy via MySQL client (note client will connect to the proxy)
mysql -u root -p -h 127.0.0.1
Read Replicas:
gcloud sql instances describe {instance}
gcloud sql instances patch --enable-bin-log {instance}
gcloud sql instances create {instance}-replica-0 --master-instance-name={instance}
Failover Replicas:
External Replicas:
External Masters:
https://cloud.google.com/sql/docs/mysql/backup-recovery/restore
To perform a point-in-time recovery, your source instance must have Automate backups and Enable binary logging selected. In addition, your instance must have a backup that was taken before the event you want to recover from, as well as continuous binary logs from the time that backup was taken.
gcloud sql instances create myinstance --database-version POSTGRES_9_6 --zone asia-east2-a --cpu=2 --memory=7680MiB
Register application's IP address in the authorized networks.
- Create YAML file (refer to https://cloud.google.com/appengine/docs/standard/python/config/indexref)
indexes:
- kind: Car
  ancestor: none
  properties:
  - name: door
    direction: asc
  - name: yearOfLaunch
    direction: desc
- Create index in gcloud
gcloud datastore indexes create index.yaml
| | Cloud Storage | Cloud SQL | Spanner | Datastore | BigTable | BigQuery |
|---|---|---|---|---|---|---|
| Backup | Scheduled Job or on Demand | Avro or CSV to Cloud Storage | | Self managed via scripts and schedule job | Google Cloud Dataproc | |
| Transfer | GSUtil for on prem. Storage Transfer Service for online. | Dataflow | | | | |
https://cloud.google.com/iam/docs/understanding-roles
Role types
There are three types of roles in Cloud IAM:
- Primitive roles, which include the Owner, Editor, and Viewer roles that existed prior to the introduction of Cloud IAM
- Predefined roles, which provide granular access for a specific service and are managed by GCP
- Custom roles, which provide granular access according to a user-specified list of permissions
List IAM details
gsutil iam get gs://$DEVSHELL_PROJECT_ID
Remove user access
gsutil iam ch -d allUsers gs://$DEVSHELL_PROJECT_ID
List IAM details
gsutil iam get gs://$DEVSHELL_PROJECT_ID
Add user access
gsutil iam ch user:{user_act}:objectViewer gs://$DEVSHELL_PROJECT_ID
To list roles:
gcloud iam roles list
Create Service Account
gcloud iam service-accounts create {service_ac_name} --display-name {service_ac_name}
gcloud iam service-accounts add-iam-policy-binding {service_ac_name@email} --member='serviceAccount:{service_ac_name@email}' --role='roles/editor'
gsutil iam ch serviceAccount:{service_ac_name@email}:objectCreator gs://${DEVSHELL_PROJECT_ID}_1
List ACL details
gsutil acl get gs://testbucket
Make object publicly readable
gsutil acl set public-read gs://$DEVSHELL_PROJECT_ID/hulk.png
Copy web content to bucket
curl http://{website} > {local staging file}
gsutil cp {local staging file} gs://{testbucket}
Grant access
gsutil iam ch user:{user_email}:objectViewer,objectCreator gs://testbucket
Remove access
gsutil iam ch -d user:{user_email}:objectViewer,objectCreator gs://testbucket
Enable CDN for bucket
gcloud compute backend-buckets create {bucket name i.e. static-bucket} --gcs-bucket-name $DEVSHELL_PROJECT_ID --enable-cdn
- Reserve IP address
gcloud compute addresses create {ip address name} --region {location i.e. asia-east2}
To verify
gcloud compute addresses list
Expand network (CIDR)
gcloud compute networks subnets expand-ip-range {SUBNET_NAME} \
--prefix-length {new prefix length} \
--region {region}
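expand-ip-range only ever grows a subnet: each bit trimmed from the prefix doubles the address space. For example, going from /24 to /20:

```shell
# Address counts for the old and new prefix lengths.
prefix_old=24
prefix_new=20
addresses_old=$(( 1 << (32 - prefix_old) ))
addresses_new=$(( 1 << (32 - prefix_new) ))
echo "/${prefix_old} has ${addresses_old} addresses; /${prefix_new} has ${addresses_new}"
```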
- No public IP address assigned
gcloud compute networks create {network_name} --subnet-mode=custom
- Enables VM instances with only internal (private) IP addresses (no external IP addresses) to reach the public IP addresses of Google APIs and services.
gcloud compute networks subnets create privatesubnet --network {network_name} --region us-central1 --range 10.0.0.0/24 --enable-private-ip-google-access
- VPC Service Controls improves your ability to mitigate the risk of data exfiltration from Google-managed services like Cloud Storage and BigQuery.
# Create network
gcloud compute networks create foonet --subnet-mode auto
# Enable SSH to VPC
gcloud compute firewall-rules create foonet-ssh --action allow --direction ingress --rules tcp:22 --network foonet
gcloud compute networks create foonet --subnet-mode custom
gcloud compute networks subnets create foonet-subnet-asia --network foonet --region asia-east2 --range 10.130.0.0/20
gcloud compute networks subnets list --filter '(foonet)'
# Create VM in subnet
gcloud compute instances create foo-vm --network foonet --subnet foonet-subnet-asia --machine-type n1-standard-1 --zone asia-east2-a
gcloud compute firewall-rules create <FIREWALL_NAME> --network management --allow tcp:22,tcp:3389,icmp
# Create 1st leg
gcloud compute networks peerings create peering-1-2 --network mynetwork --peer-network privatenet
# Create 2nd leg
gcloud compute networks peerings create peering-2-1 --network privatenet --peer-network mynetwork --auto-create-routes
# List network peerings
gcloud compute networks peerings list
# List routes
gcloud compute routes list --filter '(peering)'
gcloud compute routes list --filter '(peering-2-3)'
gcloud compute routes list --filter '(peering-2-1)'
# List subnets
gcloud compute networks subnets list --filter '(privatenet)'
gcloud compute networks subnets list --filter '(mynetwork)'
resources:
- name: the-first-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-f
    machineType: https://www.googleapis.com/compute/v1/projects/{project}/zones/us-central1-f/machineTypes/f1-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/{project}/global/networks/default
To execute:
gcloud deployment-manager deployments create onevm3 --config one-vm.yaml
Sample output:
The fingerprint of the deployment is hP3camY6tt4C0zWohgcCwQ==
Waiting for create [operation-1543237971364-57b91171648a1-9387d8c0-55eeebee]...done.
Create operation operation-1543237971364-57b91171648a1-9387d8c0-55eeebee completed successfully.
NAME TYPE STATE ERRORS INTENT
the-first-vm compute.v1.instance COMPLETED []
gcloud pubsub topics create my-topic
- Single region deployment
- Cloud CDN delivery
- Cloud load balancing such that SSL terminates at the edge
- Admin activity logs
- Turned on and cannot be modified
- Logs config and metadata operations
- Data access logs
- Turned off by default (except for BigQuery); update via "Audit Logs" under "IAM & Admin"
- Data read / write operations
while read a; do gcloud compute disks delete $a --quiet; done <<< `gcloud compute disks list --uri | tail --lines=+2`
If your project exceeds a particular quota while using a service, the platform will return an error.
In general, GCP will return an HTTP 429 error code if you're using HTTP/REST to access the service, or ResourceExhausted if you're using gRPC. How this error appears to you is service-dependent. https://cloud.google.com/docs/quota#quota_errors
If your app exceeds the free quota for a resource needed to initiate a request, such as the bandwidth quota or instance hours quota, users of the app will get a server error, such as an HTTP 503 error. https://cloud.google.com/appengine/pricing
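A common client-side response to quota errors like 429/ResourceExhausted is retry with exponential backoff; a minimal sketch, where flaky_call is a hypothetical stand-in for the real request:

```shell
# Retry a flaky call with exponential backoff.
# flaky_call is hypothetical: it fails twice, then succeeds.
attempts=0
flaky_call() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
delay=1
retries=0
until flaky_call; do
  retries=$((retries + 1))
  if [ "$retries" -ge 5 ]; then
    echo "giving up" >&2
    break
  fi
  sleep 0            # in real use: sleep "$delay"
  delay=$((delay * 2))
done
echo "succeeded after ${attempts} attempts"
```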
https://docs.microsoft.com/en-us/azure/devops/articles/security-validation-cicd-pipeline?view=vsts