# Creating A Basic Docker Container
- Prerequisites
- Overview
- Writing a Dockerfile
- Building an image
- Running a container
- Troubleshooting
- Tips
## Prerequisites

To understand the content on this page, you should know:

- Basic Linux (`cd`, `ls`, etc. (no pun intended))
- Basic ROS2 commands (`ros2 topic list`, `ros2 run <args>`, `colcon build`, etc.)
- The content of our Software Setup page, especially the Why containers section
## Overview

Our goal is simple: use ROS2 on a Jetson Nano to move a motor with a controller. Thus, this document should give you the necessary tools for understanding—and using—containerization on this team.

That being said, I intend to explain Developer Containers on a separate page.
## Writing a Dockerfile

A Dockerfile is just a description of your image. With a virtual machine, you might have the Ubuntu 22.04 disk image installed on your machine running in VMWare with some custom settings. A Dockerfile is how you do the same for containers, but with significantly more freedom.

If you wanted to write your own OS for your container, you could. Or, you can build off an existing one, such as Ubuntu 22.04, with the handy `FROM ubuntu` line. This pulls a prebuilt Docker image of Ubuntu from DockerHub (the GitHub of Docker images).

As you can discover if you Google "ROS docker image," ROS has its own official Docker images on DockerHub as well! Therefore, we can begin with the line `FROM ros`.
With Ubuntu, we assumed that `FROM ubuntu` would just pull the newest Ubuntu image (or maybe we are confused, and that's ok too). With ROS, the question is harder to ignore: which distro exactly are we pulling? How can we specify which one to use?
Tags specify which exact image we are pulling. Ubuntu does not only develop its newest OS! They may have 18.04, 20.04, 22.04, etc. all in development concurrently. Similarly, the ROS team may be supporting several different distributions. Furthermore, the ROS team supports different levels within their distributions, such as `core`, `base`, and `desktop`.
To choose which 'variant' we want, Docker uses tags. Tags follow a colon in the image name. For example, `ubuntu:jammy` refers to Ubuntu 22.04. When no tag is specified, the `latest` tag is used by default. Thus, it is fairly dangerous to base your image on an image without specifying a tag: what `latest` points to changes over time, so your builds may silently change underneath you.
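As a quick sketch of the difference (plain `docker pull` commands, nothing project-specific):

```bash
# No tag: implicitly pulls ubuntu:latest, which changes over time
docker pull ubuntu

# Explicit tag: always Ubuntu 22.04 (Jammy Jellyfish)
docker pull ubuntu:jammy
```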
For the purposes of this tutorial, we'll use ROS's `humble-ros-base-jammy` tag, which (hopefully clearly) defines an image with ROS Humble's `base` package running on Ubuntu 22.04 (Jammy Jellyfish).
Thus, the first line of your Dockerfile should be

```dockerfile
FROM ros:humble-ros-base-jammy
```
At this point, you could use ros2 commands, but they'd be a bit dull, as you have nothing to run them on! Still, before you jump in and start playing around, we have a few more basic setup things to do. First, we'd like to run the `apt-get update` command to ensure that all our packages are refreshed on each build. To run any terminal command, you may simply use the `RUN` keyword in your Dockerfile like:

```dockerfile
RUN apt-get update
```
But hold on! You've forgotten `sudo`! You can't run `apt` commands unless you have superuser privileges. While that is true, by default in our Dockerfile we are root, so we can actually omit `sudo`. This may cause various processes to throw warnings informing you that running things as root is dangerous. If you want, you can switch to a user that you create. We do this in our developer container's Dockerfile if you'd like an example.
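If you do go the non-root route, a minimal sketch might look like the following (the username `devuser` is a made-up placeholder; our developer container's Dockerfile is the real reference):

```dockerfile
# Create an unprivileged user with a home directory and switch to it;
# devuser is a placeholder name
RUN useradd -m devuser
USER devuser
# From here on, apt-get needs root again, so system installs
# should happen before this point (or via sudo, if installed)
```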
For Python files in ROS2, we need to install an older version of Python's `setuptools`. One day this may no longer be needed. In the meantime, use `pip3` to install `setuptools==58.2.0`.

But we don't have `pip`, so install `pip` before `setuptools`. You know how to do that, but this one is a tiny bit tricky because the install comes up with a few prompts. Thus, you'll need the `-y` flag (to answer yes to the prompts automatically) and the `--no-install-recommends` flag (to skip recommended-but-unneeded packages and keep the image small).
Your Dockerfile should now look like

```dockerfile
FROM ros:humble-ros-base-jammy
RUN apt-get update
RUN apt-get install -y --no-install-recommends python3-pip
RUN pip3 install setuptools==58.2.0
```
This one's easy! I can add our files! Just `RUN git clone ...`! Well, yes, but also no. You could, but you'd also need to put your GitHub credentials in your Dockerfile. That's extremely insecure. You could mitigate this issue in a number of ways, perhaps by creating a dummy account, but you can't get around the fact that you are downloading your entire codebase each time you build your image. What a waste.

Instead, what if you just clone the repository on your host machine? Then you'd only need to clone it once, right? Then you could copy the files into your container—no credentials required! Well, besides the credentials stored locally on the host machine, but no public Dockerfile has them. Docker has a handy command for just this: `COPY`. It works just like the `cp` bash command. So, we might put `COPY folder_name1/ ./folder_name2/`.
This looks for `folder_name1` in the directory containing your Dockerfile and copies it to the current directory in your container. Since you haven't changed your container's working directory yet, this would place the folder within your root directory and name it `folder_name2`.
If your workspace is `testbot_ws`, and it's in the same directory as your Dockerfile, then you'd append the line

```dockerfile
COPY testbot_ws/ ./testbot_ws/
```

to your Dockerfile. You could now build and run with 100% functionality. The remainder of this section assists you with some (major) quality-of-life improvements. You'll learn a lot more about writing "real" Dockerfiles, too.
If you stopped before this section, you'd have to run `cd testbot_ws` followed by a `colcon build` command every single time you launched your container. That'd suck, especially when the codebase is large enough to take several minutes each time.

If you're anything like me, you'd try

```dockerfile
RUN cd testbot_ws
RUN colcon build --symlink-install
```

And you'd get a nice big error for that. See, each line in a Dockerfile is a layer. Docker is quite smart, so it caches each layer as it builds your Dockerfile. Then, if you change, say, line 3 in your Dockerfile, it would not need to rerun lines 1 or 2. Unfortunately, it also runs each line in its own temporary container. This means that your change of directory is worthless, because you `cd` in a separate container from `colcon build`.
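One workaround (before the cleaner fix below) is to chain the commands with `&&` so they run in the same shell, and therefore the same layer:

```dockerfile
# Both commands share one shell, so the cd actually takes effect here
RUN cd testbot_ws && colcon build --symlink-install
```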
Docker has a simple command for this: `WORKDIR`. Just write `WORKDIR testbot_ws` (or the path to your directory in the image) in place of `RUN cd ...`.
If you stopped before this section, you'd have to run `. install/setup.bash` every single time you launch your container. That's annoying. You might try `RUN . install/setup.bash`, but you'd find nothing changes, for the same reason `RUN cd ...` did not work earlier.

We don't really need this to happen before we run our container, so we can actually tell Docker to run this specific command in our actual container. The way we do this is with the `CMD` keyword. If you only needed to run that single command, you could place `CMD . install/setup.bash` at the bottom of your Dockerfile. But you cannot use this keyword multiple times.
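To be precise, Docker won't error if you write `CMD` more than once; it just silently uses only the last one:

```dockerfile
CMD echo "this never runs"
# Only the final CMD in a Dockerfile takes effect
CMD . install/setup.bash
```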
If you need multiple things run in your actual container (hint hint: you will), you should just place these in a bash script and pass that to `CMD`. Mine's called `startup_cmds`.

`startup_cmds` might look like

```bash
#!/bin/bash
. install/setup.bash
/bin/bash
```
Note: the trailing `/bin/bash` is so you can still get an interactive terminal.
Your Dockerfile should now look like

```dockerfile
FROM ros:humble-ros-base-jammy
RUN apt-get update
RUN apt-get install -y --no-install-recommends python3-pip
RUN pip3 install setuptools==58.2.0
COPY testbot_ws/ ./testbot_ws/
WORKDIR testbot_ws
RUN colcon build --symlink-install
CMD ./startup_cmds
```
Note: a design choice one can make is to let Docker create the `startup_cmds` file itself (i.e. `RUN /bin/echo -e "#!/bin/bash\n. install/setup.bash\n/bin/bash" > ~/startup_cmds && chmod 777 ~/startup_cmds`). I don't know if this is better.
## Building an image

Building the image actually walks through the Dockerfile, pulling Ubuntu, installing libraries, building your code, etc. Therefore, it may take several minutes. Luckily, it is quite easy. Just run `docker build -t name:tag .` if your Dockerfile is in your current directory. You may choose `name:tag` to be anything you'd like in the format provided (i.e. `my_awesome_image:test`).

Note: if your Dockerfile is not named `Dockerfile`, you'll need to specify the name via `-f filename`.
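Putting that together, typical build invocations might look like (image and file names are placeholders):

```bash
# Build the image from the Dockerfile in the current directory
docker build -t my_awesome_image:test .

# Same, but with a Dockerfile not named "Dockerfile"
docker build -t my_awesome_image:test -f MyDockerfile .
```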
## Running a container

To run your image in a container, use the `docker run` command. You'll need to specify the name and tag of your image, such as `docker run my_awesome_image:test`.

Super useful flags (a combined example follows the list):

- To actually be able to interact within it via a terminal, add `-it`
- To automatically remove the container after you're done using it (your image remains untouched), add `--rm`
- To communicate over your host's network, add `--net=host`
- To name your container, add `--name my_awesome_container`
- To access hardware, add `--privileged`
- To access USB ports, add `-v /dev/bus/usb:/dev/bus/usb`
- To allow you to update your code within the container, volume mount your code as well! See Volume mount your code for more.
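Combining several of these, a run command might look like (image and container names are the placeholders from above):

```bash
# Interactive, auto-removed container on the host network,
# with hardware and USB access
docker run -it --rm --net=host --name my_awesome_container \
    --privileged -v /dev/bus/usb:/dev/bus/usb \
    my_awesome_image:test
```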
## Troubleshooting

- If you get a message like `bash: ./startup: Permission denied`, just give yourself permission: `sudo chmod u+x startup`.
- If you need the container to respond to changes in the Dockerfile, rebuild.
## Tips

- `colcon build --symlink-install` adds shortcuts to `build/package_names`.
- According to this forum post, you can downgrade `setuptools` to clear the `easy_install` error using `pip install setuptools==58.2.0`, as this was the last supported version.
### Volume mount your code

What we did—copying our code into the container—can be the right choice for files you never modify, but for code? Definitely not.

For example, let's say you are in your container and you realize that a variable `x` in your code needs to be negated. With this setup, you'd have to shut down your container, open the file, negate `x`, and rebuild the container before you could test it again.

Or, worse still, you could make your modifications in the container, but they would only exist within that container. Your host would not have your changes. If you deleted your container (i.e. ran with `--rm`), those changes would be lost forever.

So both choices are bad.
Instead, we can volume mount this directory. Instead of making a copy of it all, volume mounting simply accesses the files where they exist on the host file system. This means you can edit these files directly within your container.

How to do it? Quite simply, with the `-v` flag given to `docker run`. Specify the host path to your code (absolute path preferred), followed by a colon, followed by the path where you want your code to go in the container. This might look like `-v /home/Documents/Astrobotics2022/testbot_ws:/testbot_ws`.
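For instance, a run command using that mount (host path as above, image name from earlier) might be:

```bash
# Mount the host workspace into the container at /testbot_ws
docker run -it --rm \
    -v /home/Documents/Astrobotics2022/testbot_ws:/testbot_ws \
    my_awesome_image:test
```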
Some modifications to your Dockerfile are needed. First, remove your `COPY` line. Next, use `RUN` to create an empty directory for your codebase. It is important that this has the exact same name as we will specify in the last part of our `-v` flag.

Next, remove the `colcon build` line; there is nothing to build in an empty directory. Finally, add `colcon build --symlink-install` to the beginning of your bash script. Your Dockerfile might resemble
```dockerfile
FROM ros:humble-ros-base-jammy
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-pip \
    && pip3 install setuptools==58.2.0 \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir testbot_ws
WORKDIR testbot_ws
RUN /bin/echo -e "#!/bin/bash\ncolcon build --symlink-install\n. install/setup.bash\n/bin/bash" > ~/startup_cmds && chmod 777 ~/startup_cmds
CMD ~/startup_cmds
```
### Cleaning up

- To clean up dead images, run `docker image prune`
- To clean up dead containers, run `docker container prune`
- To clean up both, run `docker system prune`

You can also attach the `-a` flag to any of these commands, but be careful, as you may have a lengthy rebuild next time.
### Reducing image size

To reduce the size of our Docker image, people often remove the `apt` cache (since it should be unnecessary in a container) using `RUN rm -rf /var/lib/apt/lists/*`. Since Docker creates intermediary containers at each step, what's commonly done is to `&&` this onto the end of any `apt` command, so the cache is deleted in the same layer that created it. See this for more.
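Concretely, the pattern looks like the `apt-get` line in the Dockerfile above:

```dockerfile
# Update, install, and delete the apt cache in a single layer,
# so the cache never persists into the final image
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip \
    && rm -rf /var/lib/apt/lists/*
```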