gcp sdk cli - ghdrako/doc_snipets GitHub Wiki

gcloud init
gcloud info

Config file:
C:\Users\username\AppData\Roaming\gcloud\configurations
~/.config/gcloud/configurations

Project

gcloud projects list
gcloud config configurations list
gcloud config configurations create [NAME]
gcloud config configurations activate [NAME]
or
set CLOUDSDK_ACTIVE_CONFIG_NAME=dev      # Windows; on Linux/macOS: export CLOUDSDK_ACTIVE_CONFIG_NAME=dev
or
gcloud compute instances list --configuration=dev
gcloud config set project PROJECT
gcloud config set core/account [email protected]
gcloud config set compute/zone europe-west3-c
gcloud config set compute/region europe-west3
gcloud config unset project
gcloud auth list
gcloud config set account `ACCOUNT`

Service account

gcloud auth activate-service-account serves the same function as gcloud auth login but uses a service account rather than Google user credentials.

gcloud auth activate-service-account [email protected] --key-file=/path/key.json --project=PROJECT_ID

Multiple accounts

The syntax to switch the active account takes the following format.

gcloud config set account <accountemailaddress>

Named configurations

$ gcloud config configurations create my-project1-config
$ gcloud config configurations activate my-project1-config
$ gcloud auth login  # or activate-service-account
$ gcloud config set project project1  # and any other configuration you need to do
$ 
$ gcloud config configurations create my-project2-config
$ gcloud config configurations activate my-project2-config
$ gcloud auth login  # or activate-service-account
$ gcloud config set project project2  # and any other configuration you need to do
$
$ CLOUDSDK_ACTIVE_CONFIG_NAME=my-project1-config gcloud ...

VM

  • create a VM
gcloud compute instances create <instance name> --zone <zone name>
  • connect to the instance
gcloud compute ssh <instance name> --zone <zone name>
  • add a disk to the VM instance
gcloud compute disks create <disk name> --zone <zone name>
  • attach the disk to our instance
gcloud compute instances attach-disk <instance name> --disk <disk name> --zone <zone>
gcloud --project something-staging-2587 compute ssh my-vm
gcloud compute instances list
gcloud compute instances list --configuration=dev
gcloud compute addresses list
gcloud compute instances describe VM_NAME
gcloud compute instances start VM_NAME
gcloud compute instances stop VM_NAME
gcloud compute instances add-labels VM_NAME \
    --label=KEY=VALUE
gcloud compute ssh VM_NAME
gcloud compute scp LOCAL_FILE_PATH VM_NAME:REMOTE_DIRECTORY #  copy files to a VM
gcloud compute scp VM_NAME:REMOTE_DIRECTORY LOCAL_FILE_PATH #  copy files from a VM
gcloud compute project-info add-metadata \
    --metadata=KEY=VALUE,[KEY=VALUE]   # project-wide metadata
gcloud compute instances add-metadata VM_NAME \
    --metadata=KEY=VALUE,[KEY=VALUE]
gcloud compute --help
gcloud compute instances --help
gcloud compute instances create --help

GKE

gcloud --project something-staging-2587 container clusters list
# Obtain the cluster's public endpoint:
gcloud container clusters describe [CLUSTER-NAME] \
    --zone=[ZONE] \
    --format="get(privateClusterConfig.publicEndpoint)"   # use --region=[REGION] for regional clusters

Google Storage gsutil

gsutil <action> gs://<bucket name>/<resource name>

  • Create bucket
gsutil mb -l <location> gs://<bucket name>
  • Copy file: gsutil cp uploads and downloads files/objects
gsutil cp <file to copy> gs://<bucket name>
gsutil cp local-location/filename gs://bucketname/
gsutil cp -r folder-name gs://bucketname/
gsutil -m cp -r folder-name gs://bucketname
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp ./localbigfile gs://your-bucket

gsutil cp gs://bucketname/filename local-location       # download file from GS to local
gsutil cp -r gs://bucketname/folder-name local-location # download folder from GS
gsutil -m cp -r gs://bucketname/folder-name local-location # download many files with a parallel (multi-threaded/multi-processing) copy
  • Copy between two Google Cloud Storage buckets
gsutil cp -r gs://SourceBucketName/example.txt gs://DestinationBucketName
gsutil cp -r folder-name gs://bucket-name   # copying a local folder creates the corresponding "folder" (prefix) in the bucket

Copy file from URL to Google Cloud Storage

curl -L file-url | gsutil cp - gs://bucket-name/filename # from url to bucket
  • Copying between GCS and a VM: SSH into your instance and run the following command.
sudo gsutil cp path/filename gs://bucket_name
  • Copying between S3 and GCS: put your S3 access credentials in a .boto file in your working directory:
[Credentials]
aws_access_key_id = ACCESS_KEY_ID
aws_secret_access_key = SECRET_ACCESS_KEY
gsutil cp s3://bucket-name/filename gs://bucket-name
gsutil cp gs://bucket-name/filename s3://bucket-name     # from GCS to S3
  • List bucket
gsutil ls gs://<bucket name>
  • Remove object / bucket
gsutil rm gs://<bucket name>/<object name>   # remove an object
gsutil rb gs://<bucket name>                 # remove an (empty) bucket

BigQuery bq

  • create dataset
bq --location=<location> mk --dataset <dataset name>
bq --location=US mk --dataset newdataset

It is good practice to configure a default table expiration for your datasets, an expiration time for your tables, and a partition expiration for your partitioned tables.
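
For example, expirations are set in seconds with bq; the dataset and table names below are placeholders, not taken from this wiki (a minimal sketch):

bq mk --dataset --default_table_expiration 3600 newdataset               # default expiration for new tables in the dataset
bq update --expiration 7200 newdataset.newtable                          # expiration for a single table
bq update --time_partitioning_expiration 2592000 newdataset.parttable    # partition expiration (30 days)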

  • create a new empty table
bq mk -t <dataset name>.<table name> <schema>
bq mk -t newdataset.newdataset \
examname:STRING,result:STRING
  • run queries
bq query '<SQL query>'

bq query "#standardSql SELECT word, corpus, COUNT(word) FROM
\'bigquery-public-data.samples.shakespeare\'
WHERE word LIKE '%beloved%' GROUP BY word, corpus"

If we want to query a table in a project that isn't set as our default, we must specify the fully qualified name:

<project>.<dataset>.<table>
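
For example, assuming a hypothetical project other-project with a dataset sales and a table orders (all three names are placeholders for illustration only):

bq query --use_legacy_sql=false \
"SELECT COUNT(*) FROM \`other-project.sales.orders\`"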

Bigtable cbt

cbt is not installed by default; we must install this component using Cloud Shell and the gcloud command.

  • Install
gcloud components install cbt

or

sudo apt-get install google-cloud-sdk-cbt
  • creating a new instance in Bigtable called myfirstinstance
cbt createinstance <instance id> <display name> <cluster id> <zone> <number of nodes> <storage type>

cbt createinstance instance001 myfirstinstance cluster001 europe-west2-a 3 SSD

You can also use gcloud commands to create Bigtable instances. gcloud lets us create production or development instances; development instances have limited performance and no SLA. If you need to create a development Bigtable instance, use gcloud rather than cbt (see the sketch below).
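
A minimal sketch of creating a development instance with gcloud; the instance and cluster names are placeholders, and the --instance-type, --cluster and --cluster-zone flags may be deprecated in newer SDK versions in favour of --cluster-config:

gcloud bigtable instances create dev-instance001 \
    --display-name="My dev instance" \
    --instance-type=DEVELOPMENT \
    --cluster=dev-cluster001 \
    --cluster-zone=europe-west2-a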
