Docker

So, what's a Docker?

To keep it simple, it's a program (kind of a software suite, even) that allows you to package everything a project needs to run into one handy bundle called a container, which is then easily shared with colleagues and customers, allowing them to run your software with minimal kerfuffle.

Docker was born as a solution to the "it works on my device" problem: a developer who already has a bunch of dependencies set up from previous projects can get their own software running with minimal extra setup, but when other users get their hands on that software, they're unlikely to have the exact same environment, so it breaks with compilation failures, runtime errors, or other issues caused by missing dependencies.

Docker allows the developer to bypass "works on my device" by basically shipping out the whole device, with all necessary dependencies and quirks, without it requiring a catastrophic amount of bandwidth or storage space.

You don't see it, but Docker is likely to be all around you - it's strongly supported by multiple companies, such as Microsoft and IBM, and Docker lists ING, Spotify, Pinterest, and The New York Times among the companies that use it.

Sounds exciting, huh? If you want in on the fun, good news! I'm gonna tell you how!

So, how do I get Docker?

To get Docker, visit the official website at https://www.docker.com/. Near the top, you should already have the option to download Docker Desktop - that's the client you'll be using!

(screenshot)

Hover over the download button and choose the version corresponding to your operating system. I'm using Windows AMD64, so we'll have to go with that. It's okay if you don't guess your version correctly - your OS will simply tell you the installer can't run, and you can try a different one.

If you're running a different OS, the process might look a little different, but here's how it goes for me.

Once the installer is downloaded and you open it, you should get a screen like this on Windows.

(screenshot)

The defaults will do fine for this step. Windows Containers can potentially introduce security issues on your device, so it's best to leave them off if you're a regular user. Click OK to proceed.

(screenshot)

Docker will start installing the things it needs to get to work. Give it some time to do so. In the meantime, get a cup of coffee, tea, or your preferred hot beverage, or spend the time writing documentation about it on your second screen if you're me. :^)

(screenshot)

Eventually it'll finish installing. Go ahead and press Close.

Now you can open Docker Desktop. Accept the subscription service agreement (if you aren't working for a big corporation, it's largely irrelevant to you) and wait a moment.

(screenshot)

Log in or sign up as a personal user (assuming you are one).

(screenshot)

Do this survey, or skip it, depending on how you're feeling.

(screenshot)

Welcome to Docker! We'll go over what else you can do here in the next chapter.

So, what do I do now that I have Docker?

I will give you a disclaimer: the Docker learning center you'll find in the Docker Desktop app is a way better source of information than me, but I'll still tell you a bit about what you can do, because I'm legally obliged to be here to support you.

Check the bottom right of the window. If you see "Update", do that first and foremost, just to be up to date. After that, look a little to the left of it.

Open the Terminal.

(screenshot)

This is where you can run your Docker commands in a quick and compact fashion. Plus, it makes you feel cooler compared to using the UI!

(screenshot)

Go ahead and enable it. It should look like this. This is where you'll be running your Docker-related commands. How about I tell you about a few? All of these are sampled from the Docker Cheatsheet you can find at https://docs.docker.com/get-started/docker_cheatsheet.pdf, but I'll pick out the important ones for you.
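Before we get to the interesting commands, a quick sanity check can't hurt. These two are bog-standard Docker commands, nothing specific to this guide - if they both work, your install is fine:

```
# Show the client and engine versions - if both appear, Docker is up and running
docker version

# Pull and run Docker's tiny official test image; it prints a greeting and exits
docker run hello-world
```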

To get started, you need an image, so here are some commands to manage those.

docker build -t <image_name> . lets you build a Docker image using a Dockerfile in the same folder you're in, and gives it the name you specified.

docker rmi <image_name> lets you remove an image with the given name, if you don't need it anymore. To clean up dangling images you aren't using, you can run docker image prune (add the -a flag to remove every image that isn't used by a container).
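Just to sketch how those image commands fit together (the image name "celery" is made up for this example, and it assumes there's a Dockerfile in the folder you're standing in):

```
# Build an image from the Dockerfile in the current folder and call it "celery"
docker build -t celery .

# List the images you have locally
docker images

# Remove the "celery" image once you no longer need it
docker rmi celery

# Clean up dangling images you aren't using anymore
docker image prune
```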

When you have an image, you can run it in a container! Here are some instructions for those.

docker run --name <container_name> <image_name> is the most basic instruction for getting a container up and running. The container name is optional - Docker can generate one for you - but I recommend naming your containers for easier management later.

There are, of course, some options here too. The -p flag allows you to publish ports (requiring you to give it a host port and a container port), and the -d flag lets you run a container in the background, in case you don't need it in the foreground right away.

Here's an example of a container being run in the background, publishing port 8080 on your own computer and connecting it to port 3000 in the container, using the name "spring-onion" and the image "celery" (note the --name flag - without it, Docker would read "spring-onion" as the image name):

docker run -d -p 8080:3000 --name spring-onion celery
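And a quick way to poke at that mapping once the container is up - docker port is a real command, but whether curl returns anything depends on the (made-up) "celery" image actually serving something on port 3000:

```
# Show which host ports Docker mapped for the container
docker port spring-onion

# Ask the host side of the mapping; Docker forwards this to port 3000 inside the container
curl http://localhost:8080
```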

Exciting! And that's not even all of it!

When you have some containers up and running and you want to check on them via the terminal, that's possible too! Just run docker ps to list the containers that are currently running, and add the --all (or -a) flag to include stopped ones as well!

You can start a stopped container with docker start <container_name> (you can also use the container ID, which you get by running docker ps --all), and you can stop a running container with docker stop <container_name>! (Yes, the container ID works here too!)

And when you've had enough of a container, you can run docker rm <container_name> to remove it (stop it first, or add the -f flag to force it)!
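Put together, a typical container lifecycle in the terminal looks roughly like this (still using the made-up "spring-onion" container from earlier):

```
# See what's currently running
docker ps

# Include stopped containers too
docker ps --all

# Stop the container, bring it back, then stop and remove it for good
docker stop spring-onion
docker start spring-onion
docker stop spring-onion
docker rm spring-onion
```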

I believe that's all the important ones. But if you've been an attentive reader, you might've noticed a weird word near the start: "Dockerfile". What's all that about? Good news! That's the next thing we're gonna talk about!

So, what's a Dockerfile?

That's not referring to a person who really, really likes Docker! (That would be pronounced differently.)

The Dockerfile is a file containing the instructions Docker runs when you decide to compile your code into a Docker image (images are what containers are run from). Have you ever worked with a .bat file? It's similar to that - you're passing on instructions to be executed for you, except a Dockerfile is read by Docker's build engine rather than your usual shell, so it has its own small set of instructions and its own syntax.

Here's a look at a Dockerfile, this one having been provided by Docker in the "How do I run a container?" learning center tutorial, handily broken down into bite-sized pieces for you:

FROM node:18-alpine This container will be based on a version of Node.js, a JavaScript runtime for server-side and networking applications. This sets up the base for our backend.

WORKDIR /app Roughly the equivalent of running "mkdir app" and "cd app" in succession, this states that everything from here on happens inside the directory "/app" within the image. "/app" will hold most of the code we're running.

COPY package*.json ./ We're copying every file whose name starts with "package" and ends in ".json" (so package.json and package-lock.json) from our project folder on our own machine into "/app" inside the image. These files include the list of dependencies we'll need in a moment.

COPY ./src ./src and COPY ./public ./public These copy over the rest of the code we need to run the program, again from our project folder into the image.

RUN npm install && npm install -g serve && npm run build && rm -fr node_modules This calls Node Package Manager to:

  1. install all necessary dependencies for the app
  2. install "serve", which will be necessary to run the app and make it properly visible on our localhost
  3. build the app, converting the code we've just written into an optimized production bundle that a browser knows how to interpret
  4. forcibly remove the entire node_modules directory, since it's an extremely large folder and we only needed those dependencies for the build

EXPOSE 3000 This documents that the app inside the container listens on port 3000; map it to a host port when you run the container and you can reach the app by typing "localhost:3000" into a browser of your choice.

CMD [ "serve", "-s", "build" ] We've finally done all the preamble! This tells the container what command to run when it starts: serve the contents of the "build" folder we just produced, with the -s flag treating it as a single-page app. With this, we've finally got the app up and running!
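If you saved those instructions as a file named Dockerfile next to your code, building and running the whole thing would look roughly like this - the image and container names here are just examples I picked, not anything official:

```
# Build the image from the Dockerfile in this folder and tag it "getting-started"
docker build -t getting-started .

# Run it in the background, mapping port 3000 on your machine to port 3000 in the container
docker run -d -p 3000:3000 --name getting-started-app getting-started

# The app should now answer at http://localhost:3000 in your browser
```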

A lot of work sure goes into running a silly little test app, huh?

Alright, one more thing to talk about and I'll let you go for the day. Let's talk about a handy little add-on called Docker Compose!

So, what's Docker Compose?

A container is a pretty handy thing to have. But it's just one piece. When you build an application, you're typically juggling a frontend, a backend, a database, and whathaveyou, and in the event those pieces live in multiple containers, it can be a chore to manage all of them by hand.

Docker Compose simplifies this by allowing you to manage multiple containers via just one .yaml file, taking a good deal of the workload off your back every time you want to make adjustments or move data around. This streamlines collaboration, makes the program even more adaptable, and brings a bunch of other fancy benefits that would likely be more interesting to big teams as opposed to a singular simpleton like myself. But I promise it's cool!

I'm gonna do my best to explain a Compose file using material I'm reading from https://docs.docker.com/compose/gettingstarted/, so for an extended explanation, feel free to read up from there. I'm gonna try to explain it to you simple-stupid style.

Here's a sample for you:

    services:
      web:
        build: .
        ports:
          - "8000:5000"
      redis:
        image: "redis:alpine"

Some of this might look familiar, if you've been an attentive reader!

Under web: you can find build: and ports: - don't they kind of look like the console commands we've seen earlier? build: points to where Compose can find the corresponding Dockerfile for when the time comes to build the image and container! And ports: tells Compose which host ports to open and map to the container's ports! redis: just pulls in an image from Docker Hub for Redis, a data platform for caching, vector search, and NoSQL databases. (This would make more sense if you knew what app the example was building!)
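Assuming that file is saved as compose.yaml (or docker-compose.yml) next to your Dockerfile, spinning the whole stack up and down goes something like this:

```
# Build (if needed) and start every service defined in the Compose file
docker compose up

# Or run them in the background, like the -d flag on docker run
docker compose up -d

# Stop and remove the containers (and the network) that Compose created
docker compose down
```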

Here's an upgrade to the previous example:

    services:
      web:
        build: .
        ports:
          - "8000:5000"
        develop:
          watch:
            - action: sync
              path: .
              target: /code
      redis:
        image: "redis:alpine"

All we've done here is add develop: watch: with a few variables, which is actually a really handy addition to make. If you're familiar with Live Server or perhaps nodemon on NPM, this is a similar concept!

Here's what each part does:

  1. watch: keep an eye on our files while the container is running
  2. path: . watch for changes to anything within this folder
  3. action: sync when something changes, copy the new version straight into the running container
  4. target: /code put the synced files in the /code directory inside the container
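With that watch section in place, Compose can do the rebuilding and syncing for you. As far as I can tell this needs a reasonably recent version of Docker Compose, so don't panic if an older install doesn't recognize it:

```
# Start the services and react to file changes as configured under develop -> watch
docker compose watch

# Or keep the normal logs in the foreground and watch at the same time
docker compose up --watch
```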

Isn't that lovely?

Anyways, that'll be all. See you next time! I'm very tired already!
