DevOps - romitagl/kgraph GitHub Wiki
- Best practices for writing Dockerfiles:
- Republish images (also in Kubernetes): https://github.com/containers/skopeo
skopeo is a command-line utility that performs various operations on container images and image repositories. It does not require the user to run as root for most operations, nor does it need a running daemon.
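Typical skopeo usage (a minimal sketch; the source image and target registry are placeholders):
skopeo inspect docker://docker.io/library/alpine:latest
skopeo copy docker://docker.io/library/alpine:latest docker://registry.example.com/alpine:latest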
- run Docker as a non-root user:
sudo groupadd docker; sudo usermod -aG docker $USER; newgrp docker
- remove unused data:
docker system prune -a
- remove all unused local volumes:
docker volume prune -f
- remove dangling images:
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
- remove all Docker containers at once:
docker rm -f $(docker ps -a -q)
- view image layers:
docker image history <imagename>
# or, without truncating the output:
docker image history --no-trunc <imagename>
- run as a daemon and restart automatically:
docker run -d --restart=always -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro telegraf
- create a cron job to clean up Docker resources not used in the last hour:
(crontab -l ; echo '*/10 * * * * docker system prune -af --filter "until=1h"') | crontab
- Change where Docker stores its files (including volumes) by editing the --data-root startup parameter, for example:
sudo vi /etc/systemd/system/docker.service
# string to edit
ExecStart=/usr/bin/dockerd --data-root /new_location/ -H fd:// $DOCKER_OPTS
# restart the Daemon
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker
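Alternatively, the same setting can be applied through /etc/docker/daemon.json instead of the systemd unit (a minimal sketch; do not set --data-root in both places):
echo '{ "data-root": "/new_location" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker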
- Configure Terraform in Azure Cloud Shell with Bash
- Terraform examples to provision Kubernetes clusters with all the major Cloud providers: https://github.com/ElliotG/coder-oss-tf
- A set of modern Grafana dashboards for Kubernetes: https://medium.com/@dotdc/a-set-of-modern-grafana-dashboards-for-kubernetes-4b989c72a4b2
- Best practices for segmentation of the corporate network: Network-segmentation-cheat-sheet
- IPv4 Subnetting:
- Server Responses Cheat Sheet:
- Grafana k6: A modern load testing tool built for developer happiness
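A minimal k6 invocation (sketch; script.js is a placeholder test script, --vus and --duration override the script's own options):
k6 run --vus 10 --duration 30s script.js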
- Gatling: Modern Load Testing as Code. GitHub: https://github.com/gatling/gatling
- IOPS = input/output operations per second
- Throughput = how many MB/s you can read/write continuously
What About Block and File Sizes?
The goal when benchmarking is to see whether the storage system has been optimized for your intended use case. For example, a system optimized for writing/reading lots of small files (e.g. documents, logs) benefits from a smaller block size, while writing/reading large files (e.g. videos, large backups) benefits from a larger block size. Another example is a database server: the database may be large, but because of the way transactions are committed it may perform better with a smaller block size.
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=4G --readwrite=readwrite --ramp_time=4
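The command above mixes reads and writes with a 4k block size, which stresses IOPS; a throughput-oriented run would use a larger block size, for example (a sketch under the same assumptions):
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=throughput --filename=test --bs=1M --size=4G --readwrite=write --ramp_time=4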
- GraphQL query (Hasura):
curl --request POST -v -d '{"query": "query {profile {id name} }"}' http://localhost:8080/v1/graphql
- mutation with variables:
curl -H "X-Hasura-Admin-Secret:hasura-admin-secret" --data '{ "query" : "mutation Login($username: String!) { Login(loginParams: {password: $username, username: $username}) { token } }", "variables": { "username" : "test1" }}' http://127.0.0.1:8080/v1/graphql
- query Prometheus for error results:
curl -g 'localhost:9090/api/v1/query?' --data-urlencode 'query=metric{result="error"}' | jq
- push a metric to the Prometheus Pushgateway:
echo "metric 1.0" | curl --data-binary @- http://localhost:9091/metrics/job/job_name
- run Elasticsearch on Docker: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_c_customized_image
- nodes in the cluster:
curl -X GET "localhost:9200/_cat/nodes?v=true&pretty"
- list the indices:
curl 'http://127.0.0.1:9200/_cat/indices?v=true&s=index'
- get indices and sort by the 7th column:
curl http://127.0.0.1:9200/_cat/indices | sort -s -n -k 7,7
- get only some columns:
curl http://127.0.0.1:9200/_cat/indices?h=index,store.size
- get the status of an index:
curl -XGET "localhost:9200/index?pretty=true"
- retrieve index data:
curl -XGET 'http://localhost:9200/index/_search?pretty=true&q=*:*'
- per-index cluster health:
curl -XGET 'http://localhost:9200/_cluster/health?level=indices'
- list aliases:
curl http://localhost:9200/_aliases
- cluster settings:
curl 'http://localhost:9200/_cluster/settings?pretty&include_defaults'
- node stats:
curl 'http://localhost:9200/_nodes/stats?metric=adaptive_selection,breaker,discovery,fs,http,indices,jvm,os,process,thread_pool,transport'
- add a single document:
curl -X POST "localhost:9200/logs-my_app-default/_doc?pretty" -H 'Content-Type: application/json' -d'
{
"@timestamp": "2099-05-06T16:21:15.000Z",
"event": {
"original": "192.0.2.42 - - [06/May/2099:16:21:15 +0000] \"GET /images/bg.jpg HTTP/1.0\" 200 24736"
}
}
'
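To search the data stream created above (a minimal sketch):
curl -X GET "localhost:9200/logs-my_app-default/_search?pretty"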
- InfluxDB, list the measurements in a database:
curl -u user:password -G 'https://host:port/query?db=telegraf&pretty=true' --data-urlencode 'q=show measurements'
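To select data from a measurement (a sketch; the cpu measurement is a placeholder created by Telegraf's default config):
curl -u user:password -G 'https://host:port/query?db=telegraf&pretty=true' --data-urlencode 'q=SELECT * FROM "cpu" LIMIT 5'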
- AI-powered Search & Chat for AWS Documentation: https://www.awsdocsgpt.com
To configure the AWS CLI, type aws configure.
# example
AWS Access Key ID [None]: your-key-id
AWS Secret Access Key [None]: your-secret-access-key
Default region name [None]: your-region
Default output format [None]: ENTER
Alternatively, export the environment variables:
export AWS_ACCESS_KEY_ID='your-key-id'
export AWS_SECRET_ACCESS_KEY='your-secret-access-key'
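The default region can also be set via an environment variable (AWS_DEFAULT_REGION is honoured by the AWS CLI):
export AWS_DEFAULT_REGION='your-region'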
- s3fs: s3fs allows Linux, macOS, and FreeBSD to mount an S3 bucket via FUSE. s3fs preserves the native object format for files, allowing the use of other tools like the AWS CLI.
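A minimal s3fs mount sketch (bucket name, mount point, and credentials are placeholders):
echo 'your-key-id:your-secret-access-key' > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs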
Note: the examples below use a MinIO server (play.min.io); --endpoint-url can be removed for standard S3.
- To list your buckets:
aws --endpoint-url https://play.min.io:9000 s3 ls
- To list the contents of a bucket:
aws --endpoint-url https://play.min.io:9000 s3 ls --human-readable --summarize s3://mybucket
- To make a bucket:
aws --endpoint-url https://play.min.io:9000 s3 mb s3://mybucket
- To add an object to a bucket:
aws --endpoint-url https://play.min.io:9000 s3 cp simplejson-3.3.0.tar.gz s3://mybucket
- To delete an object from a bucket:
aws --endpoint-url https://play.min.io:9000 s3 rm s3://mybucket/delete.json
# or:
aws s3api delete-objects --bucket mybucket --delete file://delete.json
- To recursively delete a folder from a bucket:
aws --endpoint-url https://play.min.io:9000 s3 rm --recursive s3://mybucket/myfolder
- To delete multiple objects from a bucket:
aws s3api delete-objects --bucket mybucket --delete '{"Objects":[{"Key":"worksheet.xlsx"},{"Key":"purple.gif"}]}'
- To remove a bucket:
aws --endpoint-url https://play.min.io:9000 s3 rb s3://mybucket
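To generate a pre-signed download URL for an object (a sketch; myobject is a placeholder, expiry is in seconds):
aws --endpoint-url https://play.min.io:9000 s3 presign s3://mybucket/myobject --expires-in 3600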
- Process all the input .py files in the folder and execute command on each to produce .zip output files; run with make run_target:
SRCS := $(wildcard *.py)
OBJS = $(SRCS:.py=.zip)
./%.zip : ./%.py
	command --input $< --output $@
.PHONY: run_target
run_target:: $(OBJS)
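A minimal usage sketch (assuming a few .py files in the current directory and that command is on the PATH):
make run_target
make -n run_target    # dry run: print the commands without executing them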
- CNOE: CNOE (pronounced Kuh.noo, and short for Cloud Native Operational Excellence) is an open source organization comprised of enterprise companies, sharing best practices on building, testing, and deploying internal developer platforms (IDPs) at enterprise scale.
- InfraCopilot: Infra-as-Code Editor
- OpenObserve: OpenObserve is a cloud-native observability platform built specifically for logs, metrics, traces, and analytics, designed to work at petabyte scale.