Docker - OrdinBeta/IT-Landscape GitHub Wiki
Docker is a tool that lets you standardize the setup for a project. This prevents the common "it works on my machine" problem. It does this by letting you create containers that can run on any machine that has Docker installed. If you know how to use Docker, it will make your life way easier. Instead of having to worry about whether your dependencies can run on a specific machine, you only have to worry about whether your machine can run a Docker container. This is especially useful when working with different operating systems or when deploying applications to production.
It's similar to virtual machines, but more lightweight and efficient. Docker containers share the same OS kernel, which makes them faster to start and means they use fewer resources compared to traditional virtual machines. This also makes Docker more portable, since you don't need to move or install a full OS for your application to run.
A Dockerfile is a text file that contains all the commands to assemble an image. It defines the environment in which your application will run, including the base image, dependencies, and configuration. You can think of it as a recipe for creating a Docker image. When you build a Docker image from a Dockerfile, you get an image that has everything your application needs to run; starting that image gives you a container. In essence, a Dockerfile is the file you'll need to transform your code into an image that can be run as a Docker container.
Docker Hub is a cloud-based repository where you can find and share Docker images. It's like GitHub for Docker images. You can use it to find pre-built images for popular applications, or you can upload your own images to share with others. This makes it easy to find and use Docker images for your projects. An image I use a lot is the node:latest image. This is a pre-built image that contains the latest version of Node.js, which is a JavaScript runtime that allows you to run JavaScript on the server side. You can use this image to run Node.js applications in a Docker container.
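For example, once Docker is installed (installation is covered below), you can try that image without installing Node.js on your machine:

```bash
# Pull the image from Docker Hub (docker run would also do this automatically)
docker pull node:latest

# Start a throwaway container and print the Node.js version inside it
docker run --rm node:latest node --version
```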
I'm going to follow the same kind of steps as the Docker 101 tutorial found on their website. It's a great resource for learning the basics of Docker. The section below is a summary of the information found there.
First off, we'll need to install Docker Desktop. This is the easiest way to get started with Docker.
Go to this link: https://www.docker.com/products/docker-desktop/ and click on the "Download Docker Desktop" button. Then select your operating system. For most users, this will be "Windows - AMD64".
Once you have downloaded the installer, double-click it to run it. Don't forget to give it administrative privileges if you're on Windows.
Leave the default settings and click on "OK". It should start installing Docker Desktop. When it's done, you can click on "Close".
Once Docker is installed, you should run it once to set everything up. You can find Docker Desktop by searching for it in the (Windows) start menu. Alternatively, you can check whether it's installed by typing `docker --version` into a terminal. I strongly recommend you at least set up Docker Desktop, even if you prefer using the command line.
When you run Docker Desktop, you should see a window asking you to accept the terms of service. You can accept or decline, but understand that your journey with the desktop interface ends here if you can't bring yourself to accept the TOS.
Next, you'll be asked to log in or create an account. You can skip this step, but I recommend you create an account so you can access the Docker Hub. Make sure to first select Personal, unless you are using Docker for work. However, if you are using Docker for work, I sincerely hope you don't need this wiki and already know the basics of Docker. Anyway, you only need to enter your email address. After clicking on "Continue", a web page will open in your default browser where you'll have to enter your email address and password. If you have 2-factor authentication enabled, you'll also need to enter the code from your authenticator app. After that, you'll be redirected back to Docker Desktop, where you can continue using it.
You might get a short questionnaire; you can skip it if you want. I haven't included it in the screenshots below.
You'll land on the home page of Docker Desktop. You'll likely notice in the bottom right corner that there is an update available. Before we begin with the rest of the tutorial, I recommend you update Docker Desktop to the latest version. Click the update notification in the bottom right corner, then click the "Update" button and wait a while for it to finish. You may have to restart your computer to complete the update. This time I got lucky and didn't have to restart my PC.
Now that you have Docker installed, you can start using it. First off, we need a project to make a Docker container for. If you have no suitable candidate, you can use the project in the Docker 101 Tutorial as a reference. Run this command in any CMD terminal: `docker run -dp 80:80 docker/getting-started`. Wait for a bit; it may appear stuck in the beginning (see the docker getting started section below). Then open your browser and go to http://localhost/.
You may ask yourself: "Why should I use the CMD when the GUI is so much easier?". The answer is simple: you won't always have a GUI at your disposal. Imagine you're deploying your application to a (remote) server. You can't just open a GUI and click around. You will have to use the command line to deploy your application. That's why I will show you both the GUI and the command line methods to create a Docker image. The GUI is easier to use, but the command line works regardless of where you are.
A Dockerfile is essentially a recipe for Docker to use, so that it can transform your project into a Docker image. You only need a Dockerfile if an existing Docker image doesn't already do what your project needs. If an existing image is enough, you can skip this step. However, if you want to create your own Docker image, you'll need a Dockerfile.
Creating a Dockerfile is super simple. It's writing the instructions that is a bit difficult at first.
Create a new file in the root of your project folder and name it `Dockerfile`. This may sound weird, but make sure there is no file extension, just `Dockerfile`. If you're using Windows, make sure to enable the option to show file extensions in the file explorer. Otherwise, you might accidentally create a file named `Dockerfile.txt`. In the image below you can see all the other files have file extensions, but it really is just `Dockerfile`, not `Dockerfile.txt`.
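If you'd rather create the file from a terminal (which sidesteps the hidden-extension problem entirely), something like this works from the project root; on Windows, `New-Item Dockerfile -ItemType File` in PowerShell does the same:

```bash
# Linux/macOS, or Git Bash on Windows: create an empty file named exactly "Dockerfile"
touch Dockerfile
```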
A blank Dockerfile isn't very useful. (It's actually completely useless.) So let me explain some of the code I wrote in the above image. Below is the code from the screenshot above.
# This is a comment. It will be ignored by Docker.
# You can use comments to explain what your Dockerfile does.
FROM node:lts
WORKDIR /app
COPY ./Client/package*.json ./Client/
COPY ./Server/package*.json ./Server/
WORKDIR /app/Client
RUN npm i
COPY ./Client ./
RUN npm run build
WORKDIR /app/Server
RUN npm i
COPY ./Server ./
EXPOSE 3000
ENTRYPOINT [ "npm", "run", "start" ]
# The same code as above, but I tried removing as many `./` as possible.
FROM node:lts
WORKDIR /app
# I removed the `./` from the source paths, but you can leave it in if you want.
COPY Client/package*.json Client/
COPY Server/package*.json Server/
WORKDIR /app/Client
RUN npm i
# Can't remove the `./` here, because otherwise we are specifying nowhere for it to copy to.
# For this reason, you may want to use `./` everywhere anyways for consistency.
COPY Client ./
RUN npm run build
WORKDIR /app/Server
RUN npm i
COPY Server ./
EXPOSE 3000
ENTRYPOINT [ "npm", "run", "start" ]
First off, you need a base image. This is the image that your Docker image will be built on top of. You can use any image from Docker Hub, but for this example, we'll use the `node:lts` image. This is a pre-built image that contains a long-term support version of Node.js, which is a JavaScript runtime that allows you to run JavaScript on the server side. You can use this image to run Node.js applications in a Docker container. The `FROM` instruction is used to specify the base image for your Docker image.
Next up, we set the working directory for our Docker image. We do this with the `WORKDIR` command. This is the directory that all your next commands will be run from. You can always change it by using `WORKDIR` again. In this case, we set the working directory to `/app`, which is where our application will be located in the Docker image. How do we know if the `/app` directory exists in the base image? We don't, but the `WORKDIR` command will create the directory if it doesn't exist. This is a good practice to ensure that your application has a consistent directory structure. Note that if you pass a relative path here, it is resolved against the previous `WORKDIR`, so it's usually clearest to stick to absolute paths like `/app`.
After that, we use the `COPY` command. This command copies files and directories from your (host) machine to the Docker image. The `COPY` command consists of a source and a destination. In this case, we copy the `package.json` and `package-lock.json` files from the `Client` and `Server` directories to the same directories in the Docker image. This is necessary because we need to install the dependencies for our application before we can run it, and these files are essential for that. Note these things!
- We don't have to specify the full path both on the host end and in the Docker image! The first path is relative to the build context (the folder you build from, usually where the Dockerfile is), and the second path is relative to the working directory we set earlier. So `./Client/` is the same as `/app/Client/` in the Docker image. The `.` is very important here; it tells Docker that this is a relative path. However, you can also just leave out the `./` entirely, which also tells Docker that it's a relative path.
- The trailing `/` is important in the destination! Let's say we forgot the `/`, what would happen? Well..., it would result in an error, since we are trying to copy more than one file. Now why is that? Well, when you try to copy one file and you forget the trailing `/`, Docker will assume that you want to copy the contents of your source file to your destination "file". So here, it would make a file named `Client`, again without an extension, and write the contents of your singular source to that `Client` file. It will even create the file first if it didn't exist yet. As you might see, this won't work if you specified multiple source files. Because, I mean, how would Docker know how to combine the contents of the files? Should it append the contents of the second file to the first? And what would the first file be anyway, for example, in case you used a wildcard? It would be a mess. So it's always encouraged to specify a directory as the destination, for which you need a trailing `/` (see the short sketch after this list).
- Since we don't want to repeat ourselves, we use the `*` wildcard! `*` means "any characters, even zero of them". So `package*.json` will match both `package.json` and `package-lock.json`. This is useful because we want to copy both files, but we don't want to write the same command twice. Note: if you're copying more than one file, the destination must be a directory. In other words, the destination must end with a `/`.
- The `COPY` command will create new directories if they don't already exist! As explained in the `WORKDIR` section: we don't know if the `/app` directory exists. This would be a problem, because if it didn't, we would be trying to copy files to a non-existing directory, right? If the `WORKDIR` command just newly created the `/app` directory, then we can be sure that the `/Client` and `/Server` directories don't exist either. In essence, we would have to use the `WORKDIR` command again to make sure that the directories we need exist. That, however, would make us repeat ourselves, as we would need to write very similar commands one after the other. Luckily, the developers at Docker thought of this and made the `COPY` command create the directories if they don't exist. This allows us to keep our Dockerfile clean and short.
- The `COPY` command will overwrite files if they already exist! This is a very important thing to note. If you copy a file to a directory where a file with the same name already exists, it will overwrite that file. This is useful if you want to update a file in your Docker image, but it can also be dangerous if you're not careful. So make sure to double-check your paths and files before running the `COPY` command.
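Here is the short sketch of the trailing-slash rule mentioned above; it reuses the same `Client` layout, so treat it as an illustration rather than something to paste into the real Dockerfile:

```dockerfile
# Fails: the wildcard matches two files, but "Client" (no trailing /) is treated as a single destination file
# COPY Client/package*.json Client

# Works: the trailing / tells Docker the destination is a directory (created if it doesn't exist yet)
COPY Client/package*.json Client/
```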
The `RUN` command is pretty easy to understand. It runs a command in the Docker image. In this case, we run `npm i` to install the dependencies for our application. The `RUN` command executes its command in the current working directory, set by the `WORKDIR` command, which is `/app/Client` in this case. Later in the Dockerfile, we change the working directory to `/app/Server` and run `npm i` again to install the dependencies for the server application.
The `EXPOSE` command isn't used to create an exposé about your application. It tells Docker that your application is sending/receiving network traffic on a specific port; not just any traffic, traffic that is important for your host machine. In this case, we expose port `3000`, which is the port that our server application will be running on. Why do we need this? Remember, Docker containers are like virtual machines, which are like actual machines. On your computer, you need to plug in an ethernet cable to connect to the internet, right? (Let's ignore Wi-Fi!) Well, Docker containers (and virtual machines) need to be connected to your host machine, for example when you host a website inside of your Docker container. The `EXPOSE` command is essentially like plugging one end of the ethernet cable into your container. (The other end we'll connect later, with the `-p` flag; on its own, `EXPOSE` is mostly documentation of which port your application uses.) Once both ends are connected, your container can communicate with your host machine: incoming traffic from your host machine is put on the port you specified so that your application can receive it, and outgoing traffic to your host machine is put on that same port by your application (if you coded it well), where Docker takes it to the outside world. You can also use the `EXPOSE` command multiple times, each with a different port. This is useful if you have multiple applications running in the same Docker image, or if you want to expose multiple ports for your application. Note, you don't need to expose a port if all you want is for your container to have internet access; that is normally handled automatically by Docker. The `EXPOSE` command is only needed when you (on your host machine) want to access your application running inside the Docker container.
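Plugging in "the other end of the cable" happens when you run the container with the `-p` flag (covered in more detail later); a minimal sketch, with the image name as a placeholder:

```bash
# Map port 3000 on the host to the exposed port 3000 inside the container
docker run -dp 3000:3000 <your-image-name>
```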
The `ENTRYPOINT` command is used to specify the command that will be run when the Docker container is started. In this case, we run `npm run start`, which will start our server application. The `ENTRYPOINT` command is similar to the `RUN` command, but it is only executed when the container is started, not when the image is built. This means that you can use the `ENTRYPOINT` command to specify what runs when the container starts, and the `RUN` command to install dependencies or perform other tasks that need to be done while the image is being built. Note, you can only have one `ENTRYPOINT` command in your Dockerfile. If you want to run multiple commands, you can use a shell script (or, as here, an npm script) that runs them for you. This is kinda what I did in my `ENTRYPOINT`: `ENTRYPOINT [ "npm", "run", "start" ]` runs an npm (Node Package Manager) script that can itself chain multiple commands. Also note, it's good practice to put the `ENTRYPOINT` command at the end of your Dockerfile. Instructions placed after it still run at build time, so the image would still build, but unless you know what you're doing I wouldn't recommend it: when the container starts, the `ENTRYPOINT` is what actually gets executed, so keeping it last makes it easy to see at a glance what the container will run.
The order of the commands in your `Dockerfile` is important. Docker works with layers and caching. In essence, if we were to change a file that was copied in our last `COPY` command, Docker will see this and be smart about it. Instead of rebuilding the entire image from scratch, it reuses everything up to the last unchanged layer and only rebuilds the layers above it (aka the commands below it). This is why I first copy the package JSON files and only then run `npm i` (kinda). This way, if I change a file in the `Client` or `Server` directories, Docker won't have to download and install all the node packages again. As you can see in my Dockerfile above, I should have put both `RUN npm i` commands before any `COPY` of the full `Client`/`Server` directories; a reordered sketch follows below.
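A rough sketch of that reordering, under the same Client/Server layout assumed above (I haven't tested it against this exact project, so treat it as an illustration of the caching idea):

```dockerfile
FROM node:lts
WORKDIR /app

# Copy only the dependency manifests first...
COPY Client/package*.json Client/
COPY Server/package*.json Server/

# ...and install dependencies for both parts. These layers stay cached as long as the package files don't change.
WORKDIR /app/Client
RUN npm i
WORKDIR /app/Server
RUN npm i

# Only now copy the actual source code; editing it no longer invalidates the npm install layers above.
WORKDIR /app/Client
COPY Client ./
RUN npm run build

WORKDIR /app/Server
COPY Server ./

EXPOSE 3000
ENTRYPOINT [ "npm", "run", "start" ]
```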
**/node_modules
**/tmp
**/misc
#Client
**/Client/dist
# idk shit
**/.vscode
**/.angular
You may have seen in the screenshot above that I have a `.dockerignore` file. If you know what a `.gitignore` file is, then you know what a `.dockerignore` file is. It's a file that tells Docker which files and directories to ignore when building the Docker image. This is useful if you have files that are not needed for your application to run, such as log files or temporary files. So when we use the `COPY` command to copy all files in a directory, Docker will skip the files and directories specified in the `.dockerignore` file.
We now have a Dockerfile, but remember, that is only a recipe telling Docker how it should build our image. We now need to tell Docker to build an image using our Dockerfile. There are two ways of doing this: either via the GUI in Visual Studio Code, or via the command line. I will show you both methods, but note that I'm going to use the GUI going forward. Before you try either method, make sure that Docker Desktop is running!
In Visual Studio Code, first make sure you have the Container Tools extension installed and enabled. You can then right-click on your `Dockerfile` and select "Build Image".
You will see a text popup at the top of your screen. It's asking you for the name of the image and, after the `:`, the tag. The tag is optional, but it's good practice to use it; it's used to specify the version of the image. If you don't specify a tag, Docker will use the `latest` tag by default. You can also use a different name for your image, but it's good practice to use the name of your project as the name of your image. After you have entered a name, just press enter and an image will be built for you. You can see the progress in the terminal at the bottom of your screen.
First off, open a terminal in the root of your project folder. Make sure the path is correct, otherwise Docker won't be able to find your `Dockerfile`. Then run the following command:
docker build -t <your-image-name>:<your-tag> .
Replace `<your-image-name>` with the name you want to give your image, and `<your-tag>` with the tag you want to use. If you don't want to use a tag, you can just leave it out (along with the `:`). The `.` at the end is important; it tells Docker to look for the `Dockerfile` in the current directory. The `-t` flag is used to specify the name and tag of the image. If you don't specify a tag, Docker will use the `latest` tag by default.
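For example, with a made-up project name and tag (run from the folder that contains the Dockerfile):

```bash
docker build -t it-landscape:1.0 .
```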
In Visual Studio Code, open the Containers tab in the sidebar. Its icon looks like a shipping container. In this tab, you can see all your containers and images. Find the image you just made, and open it. The reason you have to open it like a folder is that you could have multiple versions of your image, each with a different tag. So find the tag you just made, or `latest` if you didn't specify a tag, and right-click on it. Then select "Run" from the context menu. This will start a new container from your image. You can also select "Run Interactive" if you want to run the container in interactive mode, which allows you to run commands inside the container.
From there on out, you can manage your containers both in Visual Studio Code and in the Docker Desktop GUI. You can stop, start, and remove containers, as well as view their logs and inspect their settings. You can also see the ports that are exposed by the container, which is useful if you want to access your application running inside the container from your host machine.
If you know the name of your image and the tag, then it's pretty easy. Just run one of the following commands in a terminal:
docker run -d <your-image-name>:<your-tag>
or
docker run -dp <host-port>:<container-port> <your-image-name>:<your-tag>
Replace `<your-image-name>` with the name of your image, and `<your-tag>` with the tag you want to use. If you don't want to use a tag, you can just leave it out. The `-d` flag is used to run the container in detached mode, which means that it will run in the background. If you omit the `-d`, the container will keep running in the terminal you're working in. Since you most likely want to keep working in your terminal, you need to add the `-d`. The `-p` flag is used to map a port on your host machine to a port in the container. This is useful if you want to access your application running inside the container from your host machine. The `<host-port>` is the port on your host machine, and the `<container-port>` is the port in the container that your application is running on. If you only give the container port, Docker will assign a random host port for you; if you leave the `-p` flag out entirely, the port simply isn't published. Notice how you don't need to write `-d -p`. You can just write `-dp`, which is the same as writing `-d -p`. This is because Docker allows you to combine flags like this. You can also use multiple `-p` flags to map multiple ports, for example: `-dp 80:80 -p 3000:3000`.
You can list all your images by running the following command:
docker images
This will show you a list of all your images, including the name and tag. With this information, you can also run an image by using its ID. You can find the ID in the first column of the output. To run an image by ID, use the following command:
docker run -d <image-id>
First, it's handy to get a list of all your (running) containers. You can do this by running the following command:
docker ps
This will show you a list of all your running containers, including the name and ID. If you want to see all your containers, including the stopped ones, you can run the following command:
docker ps -a
You might notice that your container has a random name, one that is different from the name of your image. This is because Docker automatically assigns a random name to your container when you run it. You can also specify a name for your container by using the `--name` flag, like this:
docker run --name <your-container-name> -d <your-image-name>:<your-tag>
Anyways, you can stop a container by running the following command:
docker stop <container-ID>
And you can start a stopped container by running the following command:
docker start <container-ID>
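Both commands also accept the container name instead of the ID, so combined with the `--name` flag from above you could write:

```bash
docker stop <your-container-name>
docker start <your-container-name>
```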
You might notice that the image doesn't need to exist locally, for example with `docker/getting-started`. This is because Docker will first check if you have the image locally, and if not, it will try to pull it from Docker Hub. It's because of this that it may appear that Docker got stuck, with a message like `Unable to find image 'docker/getting-started' locally`. This is, however, completely normal.
Now that you have learned how to control containers, you can pretty much forget everything about it. (Kinda, of course it's always good to know how to control your containers.) In this section, I will explain what Docker Compose is. It will make pretty much everything you learned about controlling containers obsolete.
Let's start with a problem you may have encountered or noticed. You have to remember all these `docker run` commands, each with their own set of options like ports and names. If you only have one container, you may not have a problem with this, but what if you have multiple containers? You would have to remember all the commands and options for each container. This is where Docker Compose comes in. Docker Compose is a tool that allows you to define and run (multi-)container Docker applications. It allows you to define your application in a single file, called `docker-compose.yml`, and then run it with a single command. This makes it easy to manage your containers and their dependencies.
Below is a pretty simple example of a `docker-compose.yml` file. All it tells Docker is to build an image from the `Dockerfile` in the current directory, and to map the ports `3000:3000`. Even though the example below is pretty basic, I don't need to remember the settings for the `docker run` command; they are just written in the `docker-compose.yml` file. I can just run `docker compose up` and it will start the container with the settings defined in the `docker-compose.yml` file. This is especially useful if you have multiple containers that need to be started together.
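The simple example itself only exists as a screenshot on this page, but based on the description above it would look roughly like this (assuming the `Dockerfile` sits next to the compose file):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
```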
Note the Run All Services & Run Service buttons in the screenshot above. If you don't want to use commands, you can use these to let Visual Studio Code do all the work for you. IMPORTANT: in order to see the Run All Services button, you need to have an empty line above the `services:` line. If you don't have an empty line, the button will not appear. I don't know why 💀💀💀💀💀💀.
First of all, make a new file called `docker-compose.yml` in the root of your project folder. This is where you will define your application and its dependencies. You can use any text editor to create this file, but I recommend using Visual Studio Code or any other code editor that supports YAML syntax highlighting. Below I have added a more complex example of a `docker-compose.yml` file that you can use to run a Drupal application with a MariaDB database and Adminer for database management. Below that is a MongoDB and mongo-express example, which is not needed for Drupal, but is needed for my IT-Landscape assignment.
services:
  drupal:
    image: drupal
    ports:
      - "8080:80"
    volumes:
      - ./drupal/modules:/var/www/html/modules
      - ./drupal/profiles:/var/www/html/profiles
      - ./drupal/themes:/var/www/html/themes
      - ./drupal/sites:/var/www/html/sites
      # - /var/www/html/modules
      # - /var/www/html/profiles
      # - /var/www/html/themes
      # - /var/www/html/sites
    depends_on:
      - db
    restart: always
  db:
    image: mariadb:latest
    environment:
      MARIADB_DATABASE: drupal
      MARIADB_USER: drupal
      MARIADB_PASSWORD: drupal
      MARIADB_ROOT_PASSWORD: test
    volumes:
      - ./mariadb/data:/var/lib/mysql
    restart: always
  adminer:
    image: adminer
    restart: always
    ports:
      - 8081:8080
    depends_on:
      - db
  # Needed for IT-Landscape but not required for Drupal
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: test
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8082:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: test
      ME_CONFIG_MONGODB_URL: mongodb://root:test@mongo:27017/
      ME_CONFIG_BASICAUTH: false
    depends_on:
      - mongo
Indentation is very important in YAML files. Make sure to use spaces instead of tabs, as YAML does not support tabs. The standard indentation is 2 spaces, but you can use any number of spaces as long as you are consistent. In the example above, I used 2 spaces for indentation.
You can add comments to your `docker-compose.yml` file by using the `#` character. Everything after the `#` character on the same line will be ignored by Docker. This is useful for adding explanations or notes to your file. In the example above, I commented out some `volumes` lines that I don't need right now, but I might need them later. This way, I can easily uncomment them if I need them again.
The `services` keyword is used to define the different containers that make up your application. Each service is defined by a name (e.g., `drupal`, `db`, `adminer`, etc.) and a set of configuration options. These options can include the Docker image to use, environment variables, ports to expose, and volumes to mount. You will (almost) always have a `services` keyword in your `docker-compose.yml` file. So why is it needed then? Because you can also have a top-level `volumes` keyword, which is used to define named volumes that can be shared between services. This is useful if you want to share data between containers, but it's not needed for most applications. You can also have a `networks` keyword, which is used to define custom networks for your services. This is useful if you want to isolate your services from each other or if you want to connect them to an existing network.
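As a small sketch of that top-level `volumes` keyword (not part of the example above), a named volume could replace the `./mariadb/data` bind mount like this:

```yaml
volumes:
  db-data:

services:
  db:
    image: mariadb:latest
    volumes:
      - db-data:/var/lib/mysql
```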
Under the `services` keyword, you can define your services. Each service is defined by a name, which is used to reference the service in other parts of the `docker-compose.yml` file. For example, in the `drupal` service, we reference the `db` service in the `depends_on` section.
The `image` keyword is used to specify the Docker image to use for the service. You can use any image from Docker Hub, or you can build your own image using a `Dockerfile`. In the example above, we use the `drupal`, `mariadb:latest`, `adminer`, `mongo`, and `mongo-express` images from Docker Hub. If you want to use a custom image, you can specify the path to the `Dockerfile` in the `build` section instead of the `image` section.
The `ports` keyword is used to publish ports from the container to the host machine. This is useful if you want to access your application running inside the container from your host machine. The format is `<host-port>:<container-port>`, where `<host-port>` is the port on your host machine and `<container-port>` is the port in the container that your application is running on. In the example above, we map port `8080` on the host machine to port `80` in the `drupal` container, and port `8081` on the host machine to port `8080` in the `adminer` container.
The `volumes` keyword is used to mount directories from your host machine into the container. This is useful if you want to persist data or share files between your host machine and the container. The format is `<host-directory>:<container-directory>`, where `<host-directory>` is the directory on your host machine and `<container-directory>` is the directory in the container where the files will be mounted. In the example above, we mount the `./drupal/modules`, `./drupal/profiles`, `./drupal/themes`, and `./drupal/sites` directories from the host machine into the `/var/www/html/modules`, `/var/www/html/profiles`, `/var/www/html/themes`, and `/var/www/html/sites` directories in the `drupal` container, respectively. This allows us to easily modify the files on our host machine and have them reflected in the container.
Note, you can omit the `<host-directory>` part and just use `<container-directory>`. This will create an anonymous volume that is managed by Docker. Also note, when you mount a host directory into the container, the contents that the image already had at that location are hidden from the container. If your container needs files that live there, you first need to extract those files and put them in your host directory. Then you can use the bind mount.
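One way to extract those files is `docker cp`; a sketch with placeholder names, using a throwaway container started from the `drupal` image:

```bash
# Start a temporary container from the image
docker run -d --name tmp-drupal drupal

# Copy a directory out of the container onto the host
docker cp tmp-drupal:/var/www/html/sites ./drupal/sites

# Remove the temporary container again
docker rm -f tmp-drupal
```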
The `depends_on` keyword is used to specify dependencies between services. This is useful if you want to ensure that a service is started before another service. In the example above, we specify that the `drupal` service depends on the `db` service, which means that the `db` service will be started before the `drupal` service. This is important because the `drupal` service needs the database before it can start. (Keep in mind that `depends_on` only controls the start order; it doesn't wait for the database to actually be ready to accept connections.) It also makes your `docker-compose.yml` file more readable, as it shows the relationships between the services.
The `restart` keyword is used to specify the restart policy for the service. This is useful if you want to ensure that a service is always running, even if it crashes or is stopped. The `always` policy means that the service will be restarted automatically if it crashes or is stopped. Other options are `no`, `on-failure`, and `unless-stopped`. The `no` policy means that the service will not be restarted, the `on-failure` policy means that the service will be restarted only if it exits with a non-zero exit code, and the `unless-stopped` policy means that the service will be restarted unless it is explicitly stopped.
The `environment` keyword is used to specify environment variables for the service. This is useful if you want to configure the service with specific settings or credentials. In the example above, we specify the database name, user, password, and root password for the `db` service. These environment variables will be used by the `mariadb` image to configure the database.
You can control Docker Compose using the command line or the GUI in Visual Studio Code. In the command line, you can use the following commands:
# Start the services defined in the docker-compose.yml file
docker compose up
# Start the services in detached mode
docker compose up -d
# Stop the services defined in the docker-compose.yml file
docker compose down
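A couple of other compose commands I find handy (run from the folder that contains the docker-compose.yml file):

```bash
# Show the status of the services defined in the compose file
docker compose ps

# Follow the logs of all services
docker compose logs -f
```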
Kubernetes is a tool that allows you to manage multiple Docker containers. It provides features like scaling, load balancing, and self-healing. Kubernetes is often used in production environments to manage large-scale applications. This, however, is out of scope for this wiki, but you can find more information in the Sources section below.