Dockerizing Our Stuff
Docker is the thing that all the cool kids are using these days. It is, frankly, pretty cool, but can be daunting to work with when you're getting started. (Trust us on this - we overthought things significantly when we started playing with Docker images and containers and such. But, it turns out, it can be pretty straightforward.)
Here's the down-low ... A Docker image is a thing, and a Docker container is another thing. They are not the same thing. An image is just a file holding a whole lot of other files (we could have said "containing a whole lot of files", but then we'd be back to "... so a Docker image and a Docker container are the same thing - no?"). A container is an instance of an image.
Basic Definitions
Image: An image (as we said) is a file that holds a whole bunch of stuff. That stuff is simply things like executable binaries and files used for configuration of those executables. Furthermore, an image also holds some other goodies that Docker uses to run the image - goodies like directives to Docker for networking, memory, CPU, and so on. But we can ignore that for now (and the rest of this conversation frankly).
Container: A container is an image which has been given to Docker for execution. Basically, Docker takes the image you give it, creates a private directory structure on the machine where Docker is running (the host), extracts the contents of the image, and fires up the program it has been told to run. When you shut down that container it's gone forever, meaning that when you do the same thing again, you're going to launch a brand new copy of the image as a new container.
Not a VM: Something else that should be cleared up is that a Docker container is NOT a VM (virtual machine); and the executables inside the container do not run in their own virtualized operating system. Rather, those executables are running on the host machine's operating system but sandboxed away from everything else running on that machine. Basically, software inside a running container thinks it's the only software running and that the entire environment (memory, storage, CPU, network, and so on) belongs to it. Basically, that software has been lied to - and is quite happy with it. Ignorance is bliss - no?
Creating A Simple Image
Alright, it should be clear that the first thing we want to do is create an image. Once we have that, we'll run it as a container.
As an example, we're going to run a Rallypoint in a container. (We chose the RP for the example because it's our smallest and - generally - simplest piece of software.)
But before we do that, let's quickly review what an RP needs to run and how it runs.
First off, the RP (like most of our software) is largely a self-contained executable that has no external binary dependencies like 3rd-party libraries and so on. In fact, the only runtime dependency the RP has is the C/C++ runtime library already present on the host machine (primarily because the operating system itself uses that same runtime).
Next, the RP requires a configuration file (`rallypointd_conf.json`) and its certificate store (`rallypointd.certstore`). If we're connecting the RP to other peer RPs in a mesh we'd have another file like `peers.json`.
Finally, these files - the RP executable and the aforementioned configuration files - need to be placed on the machine where they'll be running. (Well, duh! But it's an important point to make.)
Now, normally, you'd simply install the RP using a pre-built installation package you got from us. On Red Hat distros you'd be installing using `yum`, on Debian distros you'd be using `apt`, and on Windows you'd use the RP installer executable. (We're only going to cover Linux in this document though.)
All the installer really does is copy the files into the right locations and set up the RP to run as a daemon. These locations are:

- `/usr/sbin` for the `rallypointd` executable
- `/etc/rallypointd` for `rallypointd_conf.json` and `rallypointd.certstore`
We'll ignore the daemon stuff as that's not needed for this Docker business. In fact, on Linux, Docker kinda acts like `systemd` would to run the RP as a daemon.
Our Docker image is going to do pretty much the same thing but with a couple of twists.
Dockerfile
When we tell Docker to build our image we're going to point it to a file named `Dockerfile` which contains instructions for creating the image. Here's what it looks like for a Dockerized RP to be run on Ubuntu.
FROM ubuntu:latest
COPY rallypointd /usr/sbin/
COPY rallypointd_conf.json /etc/rallypointd/
COPY rallypointd.certstore /etc/rallypointd/
COPY machine-id /etc/
EXPOSE 7443/tcp
CMD ["/usr/sbin/rallypointd"]
Pretty straightforward. Let's go through it...
- The `FROM` line tells Docker what base image to use. As we're building for Ubuntu, we're saying `ubuntu:latest`. If we were building for Alpine Linux we'd say something like `alpine:latest`. (Remember, a Docker image is not a virtual machine - i.e. it does not have an operating system. Rather, it is targeted for an operating system. Hence, we need a `FROM` to essentially tell Docker what platform to build the image for.)
- Then, `COPY rallypointd /usr/sbin/` tells Docker to copy the executable binary named `rallypointd` in the current working directory to `/usr/sbin/` inside the image.
- The next two `COPY` lines put `rallypointd_conf.json` and `rallypointd.certstore` into `/etc/rallypointd` inside the image.
- The `COPY machine-id /etc/` line tells Docker to copy the local file named `machine-id` into the `/etc` directory inside the image (see the note after this list about where that file comes from). This is important because our code needs a unique machine identifier on which to base its licensing. In Linux-world, that unique identifier is based on the contents of the `/etc/machine-id` file.
- Next, `EXPOSE 7443/tcp` tells Docker that the RP will be using TCP port `7443` inside the running instance of the image - the container - and we're going to want it exposed to the outside world. As you probably know, this is the port that the RP listens on by default for incoming connections. By the way, you don't strictly need the `/tcp` bit here as that's the default protocol for Docker when exposing ports. But, later on, we'll be dealing with UDP ports where we will need to specify `/udp`. So, as a recommendation, just always whack in that protocol qualifier.
- Finally, `CMD ["/usr/sbin/rallypointd"]` tells Docker the command to execute once the image has been loaded into a container and is ready to run. In this case we're telling it that the command is `/usr/sbin/rallypointd`.
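As promised, a note on that local `machine-id` file: where it comes from is up to you. One minimal approach (an assumption on our part - your licensing arrangement may dictate something different) is to just copy the host's identifier, which lives at `/etc/machine-id` on pretty much every modern Linux box:

$ cp /etc/machine-id ./machine-id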
Building The Image
Now that we have our `Dockerfile`, we need to tell Docker to build it with a resulting image name of `rts-rallypointd`. Also pretty simple.
In the directory where `Dockerfile` is located, we're going to need the files it's referencing:

- `machine-id`
- `rallypointd`
- `rallypointd.certstore`
- `rallypointd_conf.json`
By the way, in this configuration `rallypointd_conf.json` is just the stock configuration file we ship with the code. Same goes for `rallypointd.certstore`. Bear this in mind for the discussion later on about Using Volumes.
A directory listing looks something like this:
$ ls -lsa
total 4196
4 drwxrwxr-x 2 builder builder 4096 Dec 23 18:42 .
4 drwxrwxr-x 8 builder builder 4096 Dec 23 17:07 ..
4 -rw-rw-r-- 1 builder builder 465 Dec 23 18:46 Dockerfile
4 -rw-rw-r-- 1 builder builder 37 Dec 23 18:49 machine-id
4168 -rwxrwxr-x 1 builder builder 4267744 Dec 23 18:49 rallypointd
4 -rw-r--r-- 1 builder builder 3314 Dec 23 18:42 rallypointd.certstore
4 -rw-r--r-- 1 builder builder 2353 Dec 23 18:48 rallypointd_conf.json
To build it, we'll issue the following:
$ sudo docker build -t rts-rallypointd .
If all goes well, our output will look as follows:
Sending build context to Docker daemon 4.28MB
Step 1/7 : FROM ubuntu:latest
latest: Pulling from library/ubuntu
6e3729cf69e0: Pull complete
Digest: sha256:27cb6e6ccef575a4698b66f5de06c7ecd61589132d5a91d098f7f3f9285415a9
Status: Downloaded newer image for ubuntu:latest
---> 6b7dfa7e8fdb
Step 2/7 : COPY rallypointd /usr/sbin/
---> f9837f3560d2
Step 3/7 : COPY rallypointd_conf.json /etc/rallypointd/
---> e6a861178466
Step 4/7 : COPY rallypointd.certstore /etc/rallypointd/
---> d0ee811e51a5
Step 5/7 : COPY machine-id /etc/
---> dbbfac566004
Step 6/7 : EXPOSE 7443
---> Running in 085ef742ac57
Removing intermediate container 085ef742ac57
---> ec2c324fadab
Step 7/7 : CMD ["/usr/sbin/rallypointd"]
---> Running in c2dad994b12d
Removing intermediate container c2dad994b12d
---> e7d30ad24a96
Successfully built e7d30ad24a96
Successfully tagged rts-rallypointd:latest
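To double-check that the image actually landed in Docker's local store, you can list it:

$ sudo docker images rts-rallypointd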
Running The Image
Yay! We have an image, now to run it.
If we do this we'll get a Rallypoint running in front of our very eyes:
$ sudo docker run rts-rallypointd
---------------------------------------------------------------------------------
Rallypoint version 1.233.9073 [RELEASE] for linux_x64
Copyright (c) 2019 Rally Tactical Systems, Inc.
Build time: Nov 11 2022 @ 18:00:01
---------------------------------------------------------------------------------
2022-12-24 03:08:06.044 [1/0x7fbed8216740- main] I/main: loading configuration from '/etc/rallypointd/rallypointd_conf.json'
2022-12-24 03:08:06.044 [1/0x7fbed8216740- main] W/main: configuration did not specify an id - generated '3242f9092660.{f1023cca-3cd6-54e3-482c-87c33fa5ac50}'
2022-12-24 03:08:06.044 [1/0x7fbed8216740- main] I/CertStore: Loading '/etc/rallypointd/rallypointd.certstore'
.
.
.
.
.
Yeah, that's not really what we want. We want this thing to run unseen in the background. We might say we want it to run as a daemon (you know, like it would if we'd installed it as previously described). We certainly don't want all that output scrolling by in our terminal window. Also, we want it to keep running when we close the terminal down.
So, we tell Docker to daemonize it with the `-d` command-line parameter:
$ sudo docker run -d rts-rallypointd
9973a63375b88322766f245844074010592d5f4d0a6dfc5c04dc01b0d9b1a34b
Now we have a container with the ID `9973a63375b88322766f245844074010592d5f4d0a6dfc5c04dc01b0d9b1a34b` which is an instance of the `rts-rallypointd` image we made. It's running away in the background all happy in its own little world.
By the way, the `-d` doesn't stand for `daemonize`, it stands for `detach`. But `daemonize` sounds cooler. It just means to run the thing in the background.
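While we're on the subject of daemon-like behavior: if you want Docker to bring the container back up automatically after a crash or a host reboot - much like `systemd` would for a natively-installed RP - a standard Docker restart policy handles that. For example:

$ sudo docker run -d --restart unless-stopped rts-rallypointd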
We can stop it from running by issuing:
$ sudo docker container stop 9973a63375b88322766f245844074010592d5f4d0a6dfc5c04dc01b0d9b1a34b
Phew! That container ID is naaaaasty! What a pain to try to type all that stuff in - right? There's an easier way - using the container's name.
To make your life just that teensy bit easier, Docker will automagically generate a name for your container. You can see this name by just listing the running container(s) as follows:
$ sudo docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS      NAMES
9973a63375b8   rts-rallypointd   "/usr/sbin/rallypoin…"   6 minutes ago   Up 6 minutes   7443/tcp   compassionate_mcnulty
See how that gigantic container ID has been truncated in the display? Also, see the name `compassionate_mcnulty` at the end of the line? That's the automagic name given to the container. If you stop the container and rerun the image as a new container, Docker will make a new name for you (they're kinda cute at times).
With this info in hand, you can stop the container with:
$ sudo docker stop compassionate_mcnulty
That's a little easier we think.
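If even the generated names are too whimsical for your taste, you can pick the name yourself when launching the container with Docker's `--name` option (the name here is just an example):

$ sudo docker run -d --name my-rallypoint rts-rallypointd
$ sudo docker stop my-rallypoint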
But Does It Actually Work?
Well ..... no! At least not in the sense that Engage applications will be able to actually connect to the RP.
While we've fired up an image with our RP, and the RP is running in the container, and the RP is listening inside the container on TCP port `7443`, we won't see anything related to port `7443` outside the container! Basically, the RP is firewalled away inside the container and nothing can get to it. (Those Engage clients are going to feel very lonely.)
We can verify this as follows - assuming our container is running:
$ netstat -an | grep 7443
Nada! That's because we need to tell Docker that the `EXPOSE`'d port we mentioned in `Dockerfile` needs to be mapped outside the container.
So, let's first stop our container:
$ sudo docker stop compassionate_mcnulty
Then, start a new container but add a port mapping for 7443:
$ sudo docker run -d -p 7443:7443 rts-rallypointd
And see if `7443` shows up:
$ netstat -an | grep 7443
tcp 0 0 0.0.0.0:7443 0.0.0.0:* LISTEN
tcp6 0 0 :::7443 :::* LISTEN
It does! That `-p 7443:7443` portion of the command line tells Docker to "map the external (host machine) port `7443` to the container port `7443`". Finally the Engage clients or other RP peers can connect to our guy!
If we wanted our RP to be accessed from the outside world on port `31442`, our instruction would be `-p 31442:7443`. That's a superb feature given that you can run multiple containers (instances of the same image). So, in the case of the RP, you can run multiple containers, all using `7443` on the inside but accessed by unique port numbers on the outside. For example, let's run 4 Rallypoints on the same machine - all accessed on different ports - maybe to serve 4 different organizations or business units that don't want to use the same RP instance but want to use the same physical machine:
$ sudo docker run -d -p 23000:7443 rts-rallypointd
$ sudo docker run -d -p 23001:7443 rts-rallypointd
$ sudo docker run -d -p 23002:7443 rts-rallypointd
$ sudo docker run -d -p 23003:7443 rts-rallypointd
We have 4 containers running:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f04eaf4a597b rts-rallypointd "/usr/sbin/rallypoin…" 11 seconds ago Up 10 seconds 0.0.0.0:23003->7443/tcp, :::23003->7443/tcp condescending_kirch
1dc65d2eea03 rts-rallypointd "/usr/sbin/rallypoin…" 12 seconds ago Up 11 seconds 0.0.0.0:23002->7443/tcp, :::23002->7443/tcp youthful_wiles
214930f594b4 rts-rallypointd "/usr/sbin/rallypoin…" 12 seconds ago Up 11 seconds 0.0.0.0:23001->7443/tcp, :::23001->7443/tcp silly_wilbur
c0d0859d4b60 rts-rallypointd "/usr/sbin/rallypoin…" 13 seconds ago Up 12 seconds 0.0.0.0:23000->7443/tcp, :::23000->7443/tcp elastic_snyder
If we look for port `7443`, we won't see any because our containers were told to map `23000` - `23003` as their outside ports:
$ netstat -an | grep 7443
But if we change that `grep` business to look for ports starting with `2300`:
$ netstat -an | grep 2300
tcp 0 0 0.0.0.0:23000 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:23001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:23002 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:23003 0.0.0.0:* LISTEN
tcp6 0 0 :::23000 :::* LISTEN
tcp6 0 0 :::23001 :::* LISTEN
tcp6 0 0 :::23002 :::* LISTEN
tcp6 0 0 :::23003 :::* LISTEN
How cool is that!? We have 4 isolated Rallypoints on the same machine serving on 4 separate TCP ports.
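When it's time to shut all of those down, there's no need to stop them one at a time by name. A handy one-liner (standard Docker, filtering on the image the containers were launched from) is:

$ sudo docker stop $(sudo docker ps -q --filter ancestor=rts-rallypointd)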
Getting The Image
Awesome! We have an image. But where the hell is it??? It certainly isn't in the directory we're in. That's because Docker maintains an internal database of images on the local machine. Our image is stored there. If we want to get it into a file so we can share it with others, drop it onto other machines, and so on, we're going to have to save it. [Well, it's obviously already saved. We want to save it to a file.]
This is done with Docker's `save` command. But this one is a little weird because it defaults to outputting the binary of the image to the terminal. However - at least on Ubuntu - it refuses to do that. Instead, you're instructed to give it a file name to use as output. But that doesn't work either. Oy vey.
The way to get it to work is to redirect the output to a file. Let's say we want a file named `rts-rallypointd-ubuntu-x64.img`. Our instruction is:
$ sudo docker save rts-rallypointd:latest > rts-rallypointd-ubuntu-x64.img
This will give us a file of around ... 84MB!! (That's simply crazy big for us minimalists here at RTS.) So, let's run it through `gzip` to compress it as follows:
$ sudo docker save rts-rallypointd:latest | gzip > rts-rallypointd-ubuntu-x64.img.tar.gz
The resulting file will be substantially smaller - coming in at around 31MB.
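(The flip side, by the way: on the machine where you want to use the image, you'd pull it back into Docker's local store with the `load` command - which happily accepts the gzipped file as-is - and then `docker run` it just like we've been doing.)

$ sudo docker load < rts-rallypointd-ubuntu-x64.img.tar.gz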
Here's what that directory listing looks like:
$ ls -lsa
total 117720
4 drwxrwxr-x 2 builder builder 4096 Dec 23 20:10 .
4 drwxrwxr-x 8 builder builder 4096 Dec 23 17:07 ..
4 -rw-rw-r-- 1 builder builder 465 Dec 23 18:46 Dockerfile
4 -rw-rw-r-- 1 builder builder 37 Dec 23 18:49 machine-id
4168 -rwxrwxr-x 1 builder builder 4267744 Dec 23 18:49 rallypointd
4 -rw-r--r-- 1 builder builder 3314 Dec 23 18:42 rallypointd.certstore
4 -rw-r--r-- 1 builder builder 2353 Dec 23 18:48 rallypointd_conf.json
82644 -rw-rw-r-- 1 builder builder 84626432 Dec 23 20:09 rts-rallypointd-ubuntu-x64.img
30880 -rw-rw-r-- 1 builder builder 31620785 Dec 23 20:10 rts-rallypointd-ubuntu-x64.img.tar.gz
31MB might be a whole lot better than 84MB but it still gives us the willies. Check out the size of the biggest thing we're embedding in the image - `rallypointd`. That thing is only a little over 4MB. And, by the way, when distributed as an RPM or DEB installation package, it comes in at a whopping 1.7MB! But ... whatever ... Docker is cool so we'll live with this blatant, horrifying gluttony.
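If the gluttony really bothers you, bear in mind that most of that weight is the `ubuntu:latest` base image, not our stuff. A slimmer base such as `alpine:latest` would shrink things considerably - assuming, of course, you have a build of `rallypointd` targeted for Alpine as mentioned earlier. A sketch of what that Dockerfile might look like:

FROM alpine:latest
COPY rallypointd /usr/sbin/
COPY rallypointd_conf.json /etc/rallypointd/
COPY rallypointd.certstore /etc/rallypointd/
COPY machine-id /etc/
EXPOSE 7443/tcp
CMD ["/usr/sbin/rallypointd"]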
Some Useful Stuff
Here are some things that are useful to us and may well be useful to you.
Capturing Log Output
All of our software uses logging extensively. Those logs typically go to the standard output device (`STDOUT`) which is generally your terminal window/screen. However, when the container is running in the background, all that logging output stays inside the container. We want to see it.
Now, when our software (or any software for that matter) runs as a Linux daemon, `STDOUT` is redirected to the journaling system of the OS. And you can view those journals either as historical logs or realtime output. Docker can redirect the container's output in a similar fashion with the `--log-driver=journald` argument:
$ sudo docker run --log-driver=journald -d -p 7443:7443 rts-rallypointd
Now you can see the log just as you always do using `journalctl` as follows:
$ sudo journalctl -f
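Even without the journald driver, Docker itself keeps each container's output around, and you can tail it per-container with the standard `docker logs` command:

$ sudo docker logs -f compassionate_mcnulty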
Getting Into The Container
If you actually want to get inside the container and use it as if you were using a regular terminal shell - akin to doing something like `ssh compassionate_mcnulty` - tell Docker to execute a command inside the container interactively. For a shell like `bash`, do the following:
$ sudo docker exec -it compassionate_mcnulty bash
There's actually some cool stuff to see in there - especially as far as Linux kernel process isolation goes. Check this out: we'll run a container with `docker run`, get the running Docker process/container list with `docker ps` to see the container name (`jolly_galois` in this case), and then open a `bash` shell inside the container with `docker exec`.
Once inside, we'll get a directory listing with `ls -lsa`, a quick `top` view, and, finally, a process list with `ps`. When we're done, we'll `exit` and be returned to the host OS prompt.
Notice some interesting stuff ...
- Your shell prompt is `root@<container_id>` - you'll always be `root` in the container.
- The file system inside the container pretty much looks like a regular Linux file system.
- We see that `top` shows zero users and only 3 tasks (the host was actually running 417 tasks). We confirmed this with `ps`.
$ sudo docker run --log-driver=journald -d -p 7443:7443 rts-rallypointd
e7c31df26f91a56ab9fd0f30912658f28d9f7f8e09e3f4ff45b08dc12ddf5575
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e7c31df26f91 rts-rallypointd "/usr/sbin/rallypoin…" 6 seconds ago Up 5 seconds 0.0.0.0:7443->7443/tcp, :::7443->7443/tcp jolly_galois
$ sudo docker exec -it jolly_galois bash
root@e7c31df26f91:/# ls -lsa
total 56
4 drwxr-xr-x 1 root root 4096 Dec 24 13:02 .
4 drwxr-xr-x 1 root root 4096 Dec 24 13:02 ..
0 -rwxr-xr-x 1 root root 0 Dec 24 13:02 .dockerenv
0 lrwxrwxrwx 1 root root 7 Nov 30 02:04 bin -> usr/bin
4 drwxr-xr-x 2 root root 4096 Apr 18 2022 boot
0 drwxr-xr-x 5 root root 340 Dec 24 13:02 dev
4 drwxr-xr-x 1 root root 4096 Dec 24 13:02 etc
4 drwxr-xr-x 2 root root 4096 Apr 18 2022 home
0 lrwxrwxrwx 1 root root 7 Nov 30 02:04 lib -> usr/lib
0 lrwxrwxrwx 1 root root 9 Nov 30 02:04 lib32 -> usr/lib32
0 lrwxrwxrwx 1 root root 9 Nov 30 02:04 lib64 -> usr/lib64
0 lrwxrwxrwx 1 root root 10 Nov 30 02:04 libx32 -> usr/libx32
4 drwxr-xr-x 2 root root 4096 Nov 30 02:04 media
4 drwxr-xr-x 2 root root 4096 Nov 30 02:04 mnt
4 drwxr-xr-x 2 root root 4096 Nov 30 02:04 opt
0 dr-xr-xr-x 492 root root 0 Dec 24 13:02 proc
4 drwx------ 2 root root 4096 Nov 30 02:07 root
4 drwxr-xr-x 5 root root 4096 Nov 30 02:07 run
0 lrwxrwxrwx 1 root root 8 Nov 30 02:04 sbin -> usr/sbin
4 drwxr-xr-x 2 root root 4096 Nov 30 02:04 srv
0 dr-xr-xr-x 13 root root 0 Dec 24 13:02 sys
4 drwxrwxrwt 1 root root 4096 Dec 24 13:02 tmp
4 drwxr-xr-x 1 root root 4096 Nov 30 02:04 usr
4 drwxr-xr-x 11 root root 4096 Nov 30 02:07 var
root@e7c31df26f91:/# top -b
top - 13:03:25 up 2 days, 9:36, 0 users, load average: 0.15, 0.18, 0.12
Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 1.7 sy, 0.0 ni, 98.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3889.8 total, 258.3 free, 2422.3 used, 1209.2 buff/cache
MiB Swap: 3898.0 total, 1941.1 free, 1956.9 used. 1120.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 755868 6732 6132 S 0.0 0.2 0:00.07 main
25 root 20 0 4624 3580 3016 S 0.0 0.1 0:00.02 bash
34 root 20 0 7180 2772 2444 R 0.0 0.1 0:00.00 top
root@e7c31df26f91:/# ps -efl
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S root 1 0 0 80 0 - 188967 hrtime 13:02 ? 00:00:00 /usr/sbin/rallypointd
4 S root 25 0 0 80 0 - 1156 do_wai 13:03 pts/0 00:00:00 bash
0 R root 35 25 0 80 0 - 1765 - 13:03 pts/0 00:00:00 ps -efl
root@e7c31df26f91:/# exit
exit
$
Using Volumes
The image we built in our article so far has `rallypointd_conf.json` and `rallypointd.certstore` located inside the image - and therefore inside the resulting containers launched from it. That presents a bit of a problem because if we need to change those files - maybe add some extra configuration or change certificates - we'd have to build a whole new image.
"Why?" you say? Well, remember that a container is a running instance of an image and when the container is stopped, all the data updated in that container is tossed away. So, even if you did think to do that cool bash shell thing we described earlier and made the changes you wanted; you'd still lose it all if/when the container is stopped.
The answer is pretty simple though - store the data outside the container and have the software inside the container reference the stuff on the outside. There are a few ways to go at this but by far the simplest is Docker Volumes.
A Volume is simply a storage location on the host operating system that is managed by Docker and mapped into the container as a mounted directory - making it part of the file system inside the container.
As a best-practice decision, we'll use `/rts-shared` as the root directory for data and such shared by RTS components. So, let's go ahead and create a volume named `rts-shared`:
$ sudo docker volume create rts-shared
To see a list of your volumes:
$ sudo docker volume ls
DRIVER VOLUME NAME
local rts-shared
To get more detailed information:
$ sudo docker volume inspect rts-shared
[
    {
        "CreatedAt": "2022-12-20T09:43:59-08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/rts-shared/_data",
        "Name": "rts-shared",
        "Options": {},
        "Scope": "local"
    }
]
Here, the `Mountpoint` is especially interesting - it tells us where this volume's data is stored on the host's file system. This means that if we want to exchange data with the container(s) from the host machine, we'd update files in that directory on the host.
Our first candidates for such data are `rallypointd_conf.json` and `rallypointd.certstore`. Instead of burning them into the image - and therefore every resulting container of that image - we'll put them somewhere under `/var/snap/docker/common/var-lib-docker/volumes/rts-shared/_data` on the host file system.
Now, we'd kinda like to preserve our traditional directory structure as best we can, so we'll make a directory named `etc/rallypointd` under `_data` and then copy those two files into it. Something like this:
$ sudo mkdir -p "/var/snap/docker/common/var-lib-docker/volumes/rts-shared/_data/etc/rallypointd"
$ sudo cp "rallypointd_conf.json" "/var/snap/docker/common/var-lib-docker/volumes/rts-shared/_data/etc/rallypointd/"
$ sudo cp "rallypointd.certstore" "/var/snap/docker/common/var-lib-docker/volumes/rts-shared/_data/etc/rallypointd/"
Of course, we're going to want to get that stuff out of our Dockerfile and rebuild it. Here's what it looks like now:
FROM ubuntu:latest
COPY rallypointd /usr/sbin/
COPY machine-id /etc/
EXPOSE 7443/tcp
CMD ["/usr/sbin/rallypointd", "-cfg:/rts-shared/etc/rallypointd/rallypointd_conf.json"]
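Rebuilding is the same drill as before:

$ sudo docker build -t rts-rallypointd .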
To run it we simply need to tell Docker to mount that volume into the container with the `-v` option:
$ sudo docker run --log-driver=journald -d -p 7443:7443 -v rts-shared:/rts-shared rts-rallypointd
The `-v` argument is pretty straightforward. It says to mount the volume named `rts-shared` into the container's file system at `/rts-shared` - creating a new directory off the container's root directory named `/rts-shared`.
Now, when our RP references this shared area, it would do so from `/rts-shared` as its root shared directory. For example, you'll recall we made `etc/rallypointd` on the `rts-shared` volume earlier and dropped in those files. Hence, from the outside, `rallypointd_conf.json` would be found at `/var/snap/docker/common/var-lib-docker/volumes/rts-shared/_data/etc/rallypointd/rallypointd_conf.json`; while inside the container it would be found at `/rts-shared/etc/rallypointd/rallypointd_conf.json`.
This means a couple of minor changes to that RP configuration file. Specifically, the file locations that reference `/etc` would now be `/rts-shared/etc`.
Something like this:
"certStoreFileName":"/etc/rallypointd/rallypointd.certstore"
...would change to:
"certStoreFileName":"/rts-shared/etc/rallypointd/rallypointd.certstore"
The same goes for anything else the RP would normally find at `/etc` - just add `/rts-shared` at the front of the path and you're golden.
Notice the `-cfg` parameter introduced in the `CMD` in `Dockerfile` above. Unless an RP is told otherwise, it reads its core configuration from `/etc/rallypointd/rallypointd_conf.json`. But, remember, we moved that file out of the container and onto a volume which, in turn, was mapped into a directory named `/rts-shared`. So, just like we need to change the paths in the configuration file, we also need to change the path to the configuration file itself. And we need to tell the RP that. So, instead of it looking in `/etc/rallypointd/rallypointd_conf.json` we tell it to find it at `/rts-shared/etc/rallypointd/rallypointd_conf.json`.
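By the way, if you'd rather not use a Docker-managed volume at all, a plain bind mount does the same job - mapping an ordinary host directory straight into the container. (The host path here is just an example; anything works as long as it holds the same `etc/rallypointd` structure.)

$ sudo docker run --log-driver=journald -d -p 7443:7443 -v /opt/rts-shared:/rts-shared rts-rallypointd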
Networking
Networking plays a large role in our software as you'd well imagine. So let's take a little look at networking with our RP.
TODO
Hey! This is a work in progress. Here's what's coming in the next exciting installments.
- Mesh between containers
- Bundle multiple RTS processes inside the same container