

Rogue Security


Docker 101 for busy pentesters

February 23, 2020 roguesecurity


If you are someone who has been assigned the task of auditing and pentesting Docker containers but have no idea what Docker is, this blog is for you. Before pentesting any application, it is very important to understand the technologies underneath it. This is the first of two blogs in the Docker pentesting series. In this blog, we will look into the basics of Docker, just enough to assist in our pentest. In the next blog, we will look into pentesting Docker containers.

What is Docker?

Docker is an open-source platform that provides OS-level virtualization. Containerization helps developers quickly develop, run, and ship applications by separating the application code from the underlying infrastructure. This has reduced dependencies between developers and system administrators. Docker has become the de-facto technology for microservices development, standing far ahead of rivals like rkt, Garden, LXC, and Mesos.

Why Docker?

Dependencies are the biggest headache when it comes to delivering software. A very common scenario: the application works perfectly in the development environment, but when it is shipped, everything goes haywire due to missing dependencies. Containerization solves this problem while also isolating the running application. This makes the application portable and easy to ship to different machines using just a text file (a Dockerfile).

VM vs Docker

Virtual Machine vs Container

Virtual machines run on top of a hypervisor, and each has its own separate operating system. Docker containers, on the other hand, share the host operating system's kernel. They are lightweight compared to VMs because they don't have the extra virtualization layer of a hypervisor, and they don't need separate OS environments. Any Linux OS can be used as the host OS for running containers. Containers can also run inside VMs. If we want to run Docker containers on top of Windows or macOS, we need to first create a Linux VM and then run Docker inside it, because Docker relies on Linux kernel features for the creation and isolation of containers.

Docker Architecture

Docker is written in Go. It uses features of the Linux kernel to achieve containerization and works on a client-server architecture. Below are the major components of Docker:

  1. Docker Engine (Also called Docker Daemon)
  2. Docker Client
  3. REST API
  4. Docker Registry

Figure: Docker architecture

  1. Docker Engine: When we install Docker on a Linux machine, we start a docker daemon which interacts with the Linux kernel via syscalls to create and manage containers. The docker daemon can be accessed in the following ways:
    • UNIX Socket: This is enabled by default. The docker daemon can be accessed via the UNIX socket at /var/run/docker.sock. This is suitable when other processes running on the same machine need to access the daemon. To access this socket, either root permission or membership in the docker group is required.
    • TCP Socket: This is suitable for cases where the daemon needs to be accessed remotely.
    • File descriptor socket: The docker daemon can be configured to listen via a file descriptor using dockerd -H fd://.
  2. Docker Client: This is a CLI used to interact with the docker daemon. Under the hood, the client makes the REST API calls for creating and managing docker containers.
  3. REST API: Docker Engine exposes various REST APIs. These APIs are used by the docker client to interact with the docker daemon and perform operations like creating and managing containers. The list of exposed Docker Engine APIs (v1.40) can be found here. By default, this REST API is accessible on port 2375 (HTTP) and 2376 (HTTPS). These API endpoints are accessible locally on the machine where the docker daemon is running, but they can also be exposed over the network to manage containers from different machines. By default, no authentication is required to access the endpoints, so exposing them over a network can become a security concern, which we will discuss in the next blog.
  4. Docker Registry: It is used to store and distribute Docker images. We use docker pull or docker run to pull images from the registry. The image registry can be public (e.g. Docker Hub) or private (e.g. Docker Trusted Registry, JFrog Bintray, GitLab).
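As a sketch of how the daemon is reached in practice, the commands below probe the default UNIX socket and, if a daemon is running, query the Engine REST API with curl (the /v1.40 path matches the API version mentioned above; adjust it to your installed version). This is an illustration, not a definitive probe; it degrades gracefully on machines without Docker.

```shell
# Probe the default Docker UNIX socket and, if present, query the REST API.
# Accessing the socket requires root or docker-group membership.
if [ -S /var/run/docker.sock ]; then
  # This is essentially the same call the docker CLI makes under the hood
  curl -s --unix-socket /var/run/docker.sock http://localhost/v1.40/version \
    || echo "could not query the daemon (permissions?)"
else
  echo "no Docker daemon socket at /var/run/docker.sock"
fi
```

The same /version endpoint is what `docker version` talks to; on a daemon exposed over TCP, the equivalent would be `curl http://HOST:2375/v1.40/version`.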

Underlying technology

Docker Engine, or dockerd, is a wrapper around containerd, which internally uses runc.

containerd is an industry-standard container runtime that manages the complete container lifecycle; dockerd is just a wrapper around it. When we start the docker service on the host machine, the docker daemon internally starts the containerd service. This can be verified by checking the status of the running docker service. The containerd socket is located at /var/run/docker/containerd/containerd.sock. It is also possible to spawn a container using containerd directly (without Docker) via a CLI like ctr.

Figure: containerd service started with dockerd

containerd internally uses runc, which is an OCI-compliant low-level container runtime. runc is a command-line tool that interacts with the Linux kernel via syscalls to spawn containers.

Images vs Containers

An image is a complete package required to run an application. It is a combination of:

  • application code
  • runtime
  • system tools
  • libraries
  • settings and configurations

A container is a running instance of an image. The Docker CLI is used to create and manage containers.
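The distinction is visible from the Docker CLI itself: images are templates stored on disk, while containers are their running (or stopped) instances. A minimal sketch, guarded so it degrades gracefully where Docker is unavailable:

```shell
# Images vs containers from the CLI (needs a running Docker daemon)
if command -v docker >/dev/null 2>&1; then
  docker images || echo "cannot reach the daemon"   # locally stored images
  docker ps -a  || echo "cannot reach the daemon"   # container instances of those images
else
  echo "docker CLI not installed"
fi
```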

Building a Docker image and running a container

Create a Dockerfile with the instructions for building the image:

Sample Dockerfile:

    FROM alpine:latest

    # Each RUN instruction creates a new image layer, so it is recommended
    # to chain commands into a single RUN instruction.
    # Note: alpine uses apk (not apt-get) as its package manager.
    RUN apk add --no-cache \
        git \
        python3 \
        vim

    # COPY a local file (host.txt) from the build context into the image
    COPY host.txt /src/container.txt

    ENV MYSQL_ROOT_HOST=mysql-db

    CMD ["echo", "Hello World"]

Build the Docker image from the Dockerfile and tag it (-t). Here the Dockerfile is located in the current directory:

docker build -t roguesecurity/test .
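Once built, the image can be run; its CMD instruction should make the container print Hello World. A sketch, guarded so it degrades gracefully where Docker or the image is unavailable:

```shell
# Run a container from the tagged image built above
if command -v docker >/dev/null 2>&1 \
   && docker image inspect roguesecurity/test >/dev/null 2>&1; then
  docker run --rm roguesecurity/test   # --rm removes the container on exit
else
  echo "docker or the roguesecurity/test image is not available"
fi
```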

We can use the docker-compose CLI to define and run multi-container applications. The frequently used docker CLI and docker-compose CLI commands can be found here.
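As a sketch of what docker-compose looks like, the snippet below writes a minimal, hypothetical docker-compose.yml with a single service and brings it up; the service name, image, and port mapping are illustrative, not from this article.

```shell
# Write a minimal compose file (service definition is an illustrative example)
cat > /tmp/docker-compose-demo.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

# Bring the stack up if docker-compose is available
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose -f /tmp/docker-compose-demo.yml up -d \
    || echo "could not start (daemon unavailable?)"
else
  echo "docker-compose not installed"
fi
```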

This blog was a very quick introduction to Docker. In the next blog, we will look into the security issues associated with Docker deployments and create a checklist for pentesting Docker containers. I hope this article was informative. Subscribe to the mailing list or follow me on Twitter/LinkedIn to get updates on my future blogs. Feel free to post your comments and feedback.



The author is a security enthusiast with interest in web application security, cloud-native application development and Kubernetes.
