Running the bot using Docker

What is Docker

According to Wikipedia: Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.

You can run tf2autobot anywhere, on any operating system that supports Docker, without having to configure additional instances, install extra software, or set up networking yourself.

To download Docker, head over to Docker's download page. It supports macOS, Windows, and Linux. Please remember that on Windows, the Hyper-V and Containers features need to be enabled in Windows Features.
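
Once Docker is installed, you can do a quick sanity check from a terminal before moving on (hello-world is Docker's standard test image):

$ docker --version
$ docker run hello-world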

Use Cases

Minimal Ops Knowledge

With tf2autobot on Docker, you can quickly learn how to power up a bot with minimal sysadmin/ops knowledge. You just install Docker, run a command in the terminal, and your bot is live.

Running in Kubernetes

For advanced users, Kubernetes lets you orchestrate multiple bots dynamically, both in terms of networking and workload. A bot can be scheduled as a plain Pod or as a StatefulSet with 1 replica, as sketched below.
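
As a rough sketch only, a single-replica StatefulSet could look like the manifest below. The names tf2autobot and tf2autobot-env (a Secret holding the bot's environment variables), the account name 123456, and the storage size are all placeholders to adapt to your own cluster:

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tf2autobot
spec:
  serviceName: tf2autobot   # normally paired with a headless Service of the same name
  replicas: 1
  selector:
    matchLabels:
      app: tf2autobot
  template:
    metadata:
      labels:
        app: tf2autobot
    spec:
      containers:
        - name: bot
          image: tf2autobot/tf2autobot:5.11-18.17.0-alpine
          envFrom:
            - secretRef:
                name: tf2autobot-env        # Secret with the .env-style variables
          volumeMounts:
            - name: data
              mountPath: /app/files/123456  # replace 123456 with your account name
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
EOF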

Running a container

Starting with tf2autobot v3.5.0, Docker images are available and you can run the latest, lightweight version of tf2autobot (we recommend pinning a specific version to avoid breaking changes):

$ docker run tf2autobot/tf2autobot:latest-18.17.0-alpine

Congrats! You have created your first tf2autobot container. It runs, but it is not configured yet and might throw errors. Follow the instructions below to learn how to persist the data and how to configure the container.

Persisting the data

The only downside of tf2autobot on Docker is that you need to persist data between your operating system and the container. If you run the container and then stop it, the /app/files/<STEAM_ACCOUNT_NAME>/ folder created within the container will no longer exist and will be regenerated from scratch every time.

To overcome this, we tell Docker to mount the container's /app/files/<STEAM_ACCOUNT_NAME>/ folder from a given path in our operating system (on the Desktop, for example).

You can use the -v flag to map a local path (a path on your PC or on the server) to a path inside the container (which should be, obviously, /app/files/<STEAM_ACCOUNT_NAME>/).

The following example assumes you run the command from your Desktop, that the tf2autobot_data/123456/ folder exists there, and that the account name is 123456:

$ docker run \
    -v $(pwd)/tf2autobot_data/123456:/app/files/123456 \
    tf2autobot/tf2autobot:latest-18.17.0-alpine

Make sure to replace 123456 with your account name, as explained in the configuration docs.

The $(pwd) at the start of the path is expanded by your shell to the current working directory - the folder you run the command from - so Docker receives an absolute path for the mount.
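
For example, assuming your Desktop is at /home/user/Desktop (a placeholder path), the expansion would look like this:

$ echo $(pwd)/tf2autobot_data/123456
/home/user/Desktop/tf2autobot_data/123456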

Whenever the container (or you, manually) changes files within /app/files/123456/, the changes persist on your machine, so you can keep using the bot without fear of losing its configuration.

Configure the container

Environment variables

The recommended way of configuring the container is through environment variables. In Docker, you can pass environment variables at runtime using the -e flag:

$ docker run \
    -v $(pwd)/tf2autobot_data/123456:/app/files/123456 \
    -e STEAM_ACCOUNT_NAME=some-account-name \
    -e STEAM_PASSWORD=mysecurepassword \
    -e STEAM_SHARED_SECRET="agdgwegdgawfagxafagfkagusbuigiuefh==" \
    tf2autobot/tf2autobot:latest-18.17.0-alpine

The environment variables are identical to the ones specified in the example .env file.
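
As an alternative to repeating -e for every variable, Docker also accepts a whole file of KEY=VALUE pairs via the --env-file flag. The file name bot.env below is an arbitrary choice; note that this only injects the variables into the container's environment, it does not place a .env file inside the container:

$ docker run \
    -v $(pwd)/tf2autobot_data/123456:/app/files/123456 \
    --env-file $(pwd)/tf2autobot_data/bot.env \
    tf2autobot/tf2autobot:latest-18.17.0-alpine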

ecosystem.json

According to the configuration docs, you will need to place an ecosystem.json file in /app to configure multiple bots. Just as we persisted data with the tf2autobot_data folder, we can also persist individual files:

$ docker run \
    -v $(pwd)/tf2autobot_data/123456:/app/files/123456 \
    -v $(pwd)/tf2autobot_data/ecosystem.json:/app/ecosystem.json \
    tf2autobot/tf2autobot:latest-18.17.0-alpine

.env

Just like ecosystem.json, you can also mount an .env file:

$ docker run \
    -v $(pwd)/tf2autobot_data/123456:/app/files/123456 \
    -v $(pwd)/tf2autobot_data/.env:/app/.env \
    tf2autobot/tf2autobot:latest-18.17.0-alpine
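
For reference, a minimal .env file simply lists the same variables shown earlier, one KEY=VALUE pair per line (the values here are placeholders):

STEAM_ACCOUNT_NAME=some-account-name
STEAM_PASSWORD=mysecurepassword
STEAM_SHARED_SECRET=agdgwegdgawfagxafagfkagusbuigiuefh==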

Running multiple bots

It's highly recommended to run one bot per container, in line with Docker's guideline of keeping each container focused on a single task. You can, however, configure the ecosystem.json file to run more than one bot per container, but you will need to mount additional volumes:

$ docker run \
    -v $(pwd)/tf2autobot_data/11111:/app/files/11111 \
    -v $(pwd)/tf2autobot_data/22222:/app/files/22222 \
    -v $(pwd)/tf2autobot_data/ecosystem.json:/app/ecosystem.json \
    tf2autobot/tf2autobot:latest-18.17.0-alpine
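
If you would rather follow the one-bot-per-container recommendation, simply start one container per account, each with its own data folder and its own environment variables (the names and account folders below are placeholders):

# first bot (add the remaining -e variables for this account: password, secrets, ...)
$ docker run --name bot-11111 \
    -v $(pwd)/tf2autobot_data/11111:/app/files/11111 \
    -e STEAM_ACCOUNT_NAME=11111 \
    tf2autobot/tf2autobot:latest-18.17.0-alpine

# second bot
$ docker run --name bot-22222 \
    -v $(pwd)/tf2autobot_data/22222:/app/files/22222 \
    -e STEAM_ACCOUNT_NAME=22222 \
    tf2autobot/tf2autobot:latest-18.17.0-alpine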

Versioning

Each GitHub Release of tf2autobot automatically creates tags for Docker images. The tags follow this pattern:

[git_ref]-[node_version]
  • [git_ref] is the git reference: a tag such as 1.2.0 for releases, latest for master, and the git commit SHA for individual commits
  • [node_version] is the Node.js version, taken from the official Node.js Docker images.

>= 3.5.0

The following [node_version] values are used to build images for 3.5.0 and beyond:

  • 18.17.0-alpine
  • 18.17.0-buster
  • 18.17.0-buster-slim
  • 18.17.0-stretch
  • 18.17.0-stretch-slim

Examples

$ docker run tf2autobot/tf2autobot:5.11.0-18.17.0-alpine
# using `5.11` instead of a specific `5.11.x` will pull the latest `5.11.x` version
$ docker run tf2autobot/tf2autobot:5.11-18.17.0-alpine

I'm new to this, which one to pick?

If you do not know which one to pick, go with 18.17.0-alpine, which is the most lightweight: tf2autobot/tf2autobot:5.11-18.17.0-alpine (for example).
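
Putting the pieces together, a complete single-bot command might look like the sketch below. The -d, --name and --restart flags are standard Docker options (run detached, give the container a name, restart it automatically), not tf2autobot requirements, and the values are placeholders:

$ docker run -d \
    --name tf2autobot \
    --restart unless-stopped \
    -v $(pwd)/tf2autobot_data/123456:/app/files/123456 \
    -e STEAM_ACCOUNT_NAME=some-account-name \
    -e STEAM_PASSWORD=mysecurepassword \
    -e STEAM_SHARED_SECRET="agdgwegdgawfagxafagfkagusbuigiuefh==" \
    tf2autobot/tf2autobot:5.11-18.17.0-alpine

# follow the bot's logs
$ docker logs -f tf2autobot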