Docker - BKJackson/BKJackson_Wiki GitHub Wiki

Docker Subpages:

Best guides

Official Docker Getting Started Guide - Covers six steps for scaling your app and deploying it to production
How Docker Can Help You Become A More Effective Data Scientist
A Step Towards Reproducible Data Science : Docker for Data Science Workflows
Official Docker 101 Tutorial - May 5, 2020

Automation

How to Set up Automated Docker image builds
Build Docker Images with Github Actions

Docker Container Lifecycle

Conception:
BUILD an Image from a Dockerfile
COPY local files into the new image
ENTRYPOINT specifies a default command to run
CMD provides the default arguments when starting a container from the image

Birth:
RUN (create + start) a container

Reproduction:
COMMIT (persist changes) a container to a new image

Sleep:
KILL a running container

Wake:
START a stopped container

Stop:
STOP a running container

Death:
RM (delete) a stopped container

Extinction:
RMI a container image (delete image)
How To Remove Docker Containers, Images, Volumes, and Networks

Immortality:
Docker Tutorial 4: Exporting Container and Saving Image

Docker flow

Docker cheatsheet

Docker Cheat Sheet - Aug 9, 2016
Docker Commands — The Ultimate Cheat Sheet - Aug 21, 2018

The difference between RUN, CMD and ENTRYPOINT

In a nutshell

  • RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
  • CMD sets the default command and/or parameters, which are executed only when you run a container without specifying a command. It is overridden (or ignored) if a command is supplied on the command line when the container runs. If a Dockerfile has more than one CMD instruction, all but the last are ignored.
  • ENTRYPOINT configures a container to run as an executable. The difference from CMD is that ENTRYPOINT's command and parameters are not ignored when the container is run with command-line arguments.

Docker RUN vs CMD vs ENTRYPOINT
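The interplay above can be sketched in a minimal, hypothetical Dockerfile (the image and script names are made up for illustration):

```dockerfile
FROM python:3

# ENTRYPOINT fixes the executable; arguments passed to docker run do not replace it
ENTRYPOINT ["python"]

# CMD supplies the default arguments handed to the entrypoint
CMD ["app.py"]
```

With this file, docker run <image> executes python app.py, while docker run <image> other.py executes python other.py: the command-line argument replaces CMD but leaves ENTRYPOINT in place.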

Basic Docker commands

Get help for a docker command: docker <COMMAND> --help
List available images: docker images
Run a container: docker run <IMAGE>
Stop a container: docker stop <CONTAINER>
Restart a container: docker restart <CONTAINER>
List Docker containers: docker ps
View container logs: docker logs <CONTAINER>
Inspect a docker container: docker inspect <CONTAINER>
SSH into a docker container: docker exec -it <CONTAINER> /bin/bash
Create a new image from a container's changes: docker commit [OPTIONS] <CONTAINER> [NEW_IMAGE_NAME[:TAG]]
Get info about existing Docker installation: docker info
Log in to a container as root: docker exec -it -u root <CONTAINER> /bin/bash

Copy source code to the Docker image in the Dockerfile

RUN mkdir -p /usr/src/app  
COPY . /usr/src/app 
WORKDIR /usr/src/app  

Dockerfile run command

The command to run by default when the container is launched:

CMD ["ruby", "app.rb"]

Build an image from a specific docker file other than Dockerfile

docker build -f Dockerfile_viz -t myamazingdockerimage .

Save a docker image

docker save ubuntu > ubuntu_save.tar  

Export a docker container

docker export ubuntu > ubuntu_export.tar  

Note that docker save preserves an image's layers, tags, and history, while docker export flattens a container's filesystem into a single-layer archive.

grep trick to grab info from ifconfig

$ ifconfig | grep "inet addr:"  

Note: newer versions of ifconfig print "inet " rather than "inet addr:", and the ip addr command is the modern replacement.
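The same filtering idea can be shown without touching a real network interface; the snippet below runs grep over a made-up sample of old-style ifconfig output (all addresses are hypothetical):

```shell
# Hypothetical ifconfig output in the older net-tools format
sample='eth0      Link encap:Ethernet
          inet addr:172.17.0.2  Bcast:172.17.255.255
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0'

# Keep only the lines that carry an IPv4 address
printf '%s\n' "$sample" | grep "inet addr:"
```

This prints the two inet addr: lines and drops everything else.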

Store ID of a new running container in an environment variable

$ CONTAINER_ID=$(docker run -d -P jupyter/docker-desktop)  

Get password generated during runtime from docker logs

$ echo $(docker logs $CONTAINER_ID | sed -n 1p)

A new password is generated by PWGen every time a container is created. The password contains 12 characters, with at least one capital letter and one number.
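sed -n 1p is what pulls the first log line (where the password appears) out of the container's output; a docker-free sketch with made-up log text:

```shell
# Stand-in for "docker logs $CONTAINER_ID"; the password line is hypothetical
logs='Password: Abc123xyz789
Starting sshd ...'

# -n suppresses sed's default printing; "1p" prints line 1 only
printf '%s\n' "$logs" | sed -n 1p
# → Password: Abc123xyz789
```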

Get a container's external ssh port

$ docker port $CONTAINER_ID 22
49153

This is the external port that forwards to the SSH service running inside the container on port 22. We can use this port later to connect to the machine where the Docker daemon is running.

Run a file in a Docker container

--rm automatically removes the container when it exits.
Pulls the python image with tag "3" from library/python the first time this is executed.
-v mounts the current working directory to /src.

docker run --rm -v $(pwd):/src python:3 python /src/hello.py  

Commit with a name for DockerHub

The hamelsmu/ in front of the image name is the DockerHub username and makes pushing this to DockerHub easier.

docker commit container_1 hamelsmu/tutorial:v2  

Start containers using a JSON config file (see docker-compose --help for other options)

docker-compose -f docker-compose.json up

Run python interpreter in a python container

-it makes it interactive.

docker run --rm -it -v $(pwd):/src python:3 python  

Open a shell in the container

docker run --rm -it -v $(pwd):/src python:3 /bin/bash  

See any containers running in the background

docker container ls  

Pause processes in a container

docker pause <containerid>  

Execute an http request to a container URL

curl http://127.0.0.1:1313

Stop a running container

docker stop <CONTAINER NAME>  

Execute a command in a running Docker container

Use docker ps to get the container name.

docker exec -it <container name> <command>   

Create a network to allow sharing between multiple containers

docker network create multiple

# Create mysql container  
docker run --rm -d --net multiple --name mul_mysql -e MYSQL_ROOT_PASSWORD='root' mysql:5.6

# Create and open a terminal into a node.js container
docker run --rm -it --net multiple --name mul_node node:8 /bin/bash 

Create a network with docker compose

version: '3'
services: 
  redis: 
    container_name: exl_redis
    image: redis:3.2.12
    volumes:
      - ./redis:/data
    restart: always
  mysql: 
    container_name: exl_mysql
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql:/var/lib/mysql
    restart: always

YAML tips

Online YAML Parser

Networking modules available in the Python standard library

Networking and Interprocess Communication
Event Loop Methods

Asynchronous I/O

asyncio - A library to write concurrent code using the async/await syntax.

asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web-servers, database connection libraries, distributed task queues, etc.

asyncio is often a perfect fit for IO-bound and high-level structured network code.

Developing with asyncio - Tips

  • Debug Mode - several ways to enable debug mode
  • Concurrency and Multithreading

Using asyncio.gather to process things in parallel
Shielding a task from cancellation

Read Dockerfile and hello file from subdirectories

mkdir -p dockerfiles context
mv Dockerfile dockerfiles && mv hello context
docker build --no-cache -t helloapp:v2 -f dockerfiles/Dockerfile context

Super Simple Docker setup

dockerfile:

FROM php:7.1-apache  
COPY src/ /var/www/html  
EXPOSE 80  

index.php:

<?php  
$welcome = "Hello World 3";  
echo $welcome;  
?>  

Networking

chisel - A fast TCP tunnel over HTTP
Chisel is a fast TCP tunnel, transported over HTTP, secured via SSH. Single executable including both client and server. Written in Go (golang). Chisel is mainly useful for passing through firewalls, though it can also be used to provide a secure endpoint into your network. Chisel is very similar to crowbar though achieves much higher performance.

Docker & Python

Dockerized development environments for Python - Niall Byrne

Docker for Data Science

Docker for Data Scientists
How Docker Can Help You Become A More Effective Data Scientist
Building Python Data Science Container using Docker - SSH is used to forward X11 and provide encrypted data communication between the Docker container and your local machine.

Docker Security

10 Docker Image Security Best Practices

Articles

Cool things to do with docker

Five cool (and impractical) things to do with Docker
Remote Linux desktop on your Docker VPS with SSH and VNC
Awesome Docker Github
Docker Desktop: Your Desktop over ssh running inside of a Docker container
Running GUI apps with Docker

Docker Tutorials and Training

A Beginner Friendly Intro to Docker Containers, VMs, and Docker - 3/4/2016
What is Docker? Docker containers explained - Sept. 6, 2018
2 ways to SSH into a Docker container
Training: Play with Docker - an online "classroom" for learning docker
Deploying and scaling applications with Docker, Swarm, and a tiny bit of Python magic - PyCon 2016
Data Science Workflows using Docker Containers - Tutorial video by Aly Sivji, See github: http://bit.ly/docker-for-data-science

Docker Pipelines and Related Tools

DevTools - Easy containerized development environments based on Docker
jupyter-repo2docker - a tool to build, run, and push Docker images from source code repositories that run via a Jupyter server

Creating a CI/CD pipeline with Azure Pipelines and Google Kubernetes Engine - In this tutorial, you learn how to use Azure Pipelines (previously called Visual Studio Team Services), Google Kubernetes Engine (GKE), and Container Registry to create a continuous integration/continuous deployment (CI/CD) pipeline. The tutorial uses the ASP.NET MusicStore web application, which is based on ASP.NET Core.

How to deploy code into Docker containers automatically using Jenkins

dockviz - for visualizing docker containers, useful for showing dependencies
dockviz - github page

About Docker - Docker Intro Notes

Docker Containers

Containers make it possible to isolate applications into small, lightweight execution environments that share the operating system kernel. Each application and its dependencies use a partitioned segment of the operating system’s resources. Typically measured in megabytes, containers use far fewer resources than virtual machines and start up almost immediately. They can be packed far more densely on the same hardware and spun up and down en masse with far less effort and overhead than VMs.

Thus containers provide a highly efficient and highly granular mechanism for combining software components into the kinds of application and service stacks needed in a modern enterprise, and for keeping those software components updated and maintained.

Containers decouple applications from operating systems, which means that users can have a clean and minimal Linux operating system and run everything else in one or more isolated containers.

Namespaces and cgroups

Namespaces deal with resource isolation for a single process, while cgroups manage resources for a group of processes.
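On a Linux host, both facilities can be inspected directly through /proc (the paths below assume a Linux kernel with /proc mounted):

```shell
# Each symlink here names one namespace of the current process
# (pid, net, mnt, uts, ipc, user, cgroup, ...)
ls -l /proc/self/ns

# The cgroup hierarchy (or hierarchies, on cgroup v1) this process belongs to
cat /proc/self/cgroup
```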

Container Images

Container images are specifications for which software components a given Docker container will run, and how. Docker's container image tools allow a developer to build libraries of images, compose images together into new images, and launch the apps in them on local or remote infrastructure.

Container Orchestration

Docker also makes it easier to coordinate behaviors between containers, and thus build application stacks by hitching containers together. More advanced versions of these behaviors—what’s called container orchestration—are offered by third-party products, such as Kubernetes. But Docker provides the basics.

Because containers are lightweight and impose little overhead, it’s possible to launch many more of them on a given system. But containers can also be used to scale an application across clusters of systems, and to ramp services up or down to meet spikes in demand or to conserve resources.

Container Orchestration with Kubernetes

The most enterprise-grade tools for deploying, managing, and scaling containers are provided by third-party projects. Chief among them is Google's Kubernetes, a system for automating not only how containers are deployed and scaled, but also how they're connected together, load-balanced, and managed. Kubernetes also provides ways to create and re-use multi-container application definitions, or "Helm charts," so that complex app stacks can be built and managed on demand.

Docker also includes its own built-in orchestration system, swarm mode, which is still used for cases that are less demanding. That said, Kubernetes has become something of the default choice; in fact, Kubernetes is bundled with Docker Enterprise Edition.

Libcontainer

Although Docker was originally built atop LXC, eventually the Docker team created its own runtime, called libcontainer. Libcontainer not only provides a richer layer of services for containers, but also makes it easier for the Docker team to develop Docker container technology separately from Linux.

Container Portability

Container-based apps can be moved easily from on-prem systems to cloud environments or from developers’ laptops to servers, as long as the target system supports Docker and any of the third-party tools that might be in use with it, such as Kubernetes.

Docker Manifests

Manifests allow images for multiple operating systems to be packed side-by-side in the same image. Manifests are still considered experimental, but they hint at how containers might become a cross-platform application solution as well as a cross-environment one.

Microservices Model - Docker containers are NOT microservices

Most business applications consist of several separate components organized into a stack—a web server, a database, an in-memory cache. Containers make it possible to compose these pieces into a functional unit with easily changeable parts. Each piece is provided by a different container and can be maintained, updated, swapped out, and modified independently of the others.

This is essentially the microservices model of application design. By dividing application functionality into separate, self-contained services, the microservices model offers an antidote to slow traditional development processes and inflexible monolithic apps. Lightweight and portable containers make it easier to build and maintain microservices-based applications.

That doesn’t mean taking a given application and sticking it into a container will automatically create a microservice. A microservices application must be built according to a microservice design pattern, whether it is deployed in containers or not.

Containers and databases

The Microservice architecture pattern significantly impacts the relationship between the application and the database. Instead of sharing a single database schema with other services, each service has its own database schema. On the one hand, this approach is at odds with the idea of an enterprise-wide data model. Also, it often results in duplication of some data. However, having a database schema per service is essential if you want to benefit from microservices, because it ensures loose coupling. Each of the services has its own database. Moreover, a service can use a type of database that is best suited to its needs, the so-called polyglot persistence architecture. Source
