Using Caching Volumes

Caching volumes provide persistent storage for your GitHub Actions runners, allowing you to cache Docker images, APT packages, Python virtual environments, build artifacts, and other data between workflow runs. This can significantly speed up your CI/CD pipelines by reducing download times and build times, as well as improve pipeline stability by minimizing dependency on repeatedly accessing external resources during each workflow run.

โ— Warning: Volumes are not deleted automatically and can accumulate over time. Depending on your specific use case, the number of created volumes can be very large. Read this section carefully to understand how to properly configure and use caching volumes. Use the Hetzner console and the volumes list and volumes delete commands to manage them and control costs.

🧭 Overview

Caching volumes are Hetzner Cloud volumes that are automatically created and attached to your runners based on your specified runner labels. Each runner can have from 1 up to 16 caching volumes, with each volume supporting up to 10TB of storage. The volumes persist between workflow runs.

The volume label you specify in your runner configuration references a group of actual Hetzner volumes. When a new runner starts, the service automatically:

  • Attaches an existing matching volume if one is available
  • Creates a new volume if no matching volume exists or is available

This automatic volume management ensures that your runners always have the necessary storage available while maintaining persistence of your cached data between workflow runs. Multiple physical Hetzner volumes can share the same cache volume name, forming a distributed cache system. This system provides eventual consistency: when multiple runners use the same cache volume name, they will eventually store the same cached data, assuming the workloads are deterministic (same inputs produce same outputs).

The cache volumes are cumulative between runs unless explicitly cleaned up. You can perform cleanup either by:

  • Implementing an automatic cleanup procedure in your workflow (see the sketch below)
  • Manually deleting the volumes using the volumes delete command
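
As a minimal sketch of the first option, assuming a volume named cache and a hypothetical 30-day retention policy:

# Example of an automatic cache cleanup step (hypothetical retention policy)
- name: Cleanup old cache files
  if: always()
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
        # Delete cached files that have not been modified in 30 days
        sudo find /mnt/cache -type f -mtime +30 -delete
        # Remove any directories left empty by the cleanup
        sudo find /mnt/cache -mindepth 1 -type d -empty -delete
    fi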

Because volumes are assigned to jobs using runner labels, you have very flexible control over which volumes will be used by each job. This flexibility allows you to:

  • Share volumes between related jobs for efficient caching
  • Isolate volumes for sensitive or untrusted workflows
  • Control volume sizes per job requirements
  • Manage volume lifecycle independently for different job types

📌 Facts

  1. You can add from 1 to 16 volumes.
  2. Volume size ranges from 10GB to 10240GB (10TB). If the volume size is not specified, the default is 10GB (the minimum).
  3. Volumes are added using a runner label in the format: volume-<name_without_dashes>[-<size>GB] (size is optional).
  4. If the cache volume name starts with the cache prefix (e.g. volume-cache, volume-cache_artifacts), it is automatically used as the default caching volume.
  5. Volumes can be resized:
    • Automatically, by updating a volume label (e.g. changing -10GB to -100GB)
    • Manually, using the volumes resize command
  6. Volume size is not considered during the attach process:
    • Smaller volumes can be attached to jobs requesting larger sizes and are automatically resized
    • Larger volumes can be attached to jobs requesting smaller sizes
  7. New volumes are created automatically when no existing volume is available for the given label.
  8. Volumes are attached only when a new server is created. Recyclable servers keep their volumes attached between jobs.
  9. The volume name defines a group of physical Hetzner volumes.
  10. Volumes are location-, architecture-, OS flavor-, and OS version-dependent, based on the attributes of the server image. For example, a volume created for servers in the in-nbg1 location cannot be used by servers in in-fsn1.
  11. Volumes are never deleted automatically. You must manually delete unused volumes using the volumes delete command.
  12. The maximum number of volumes per volume-name group depends on the maximum number of concurrent jobs requesting the same volume label.
  13. All volumes are mounted to the /mnt/<volume_name> directory, with default ownership set to ubuntu:ubuntu.
  14. The file /etc/hetzner-volumes contains metadata about all volumes mounted on the runner.

๐Ÿ“ Naming conventions

When defining caching volumes, follow these naming rules:

  1. No dashes in volume names: Volume names must not contain dashes (-) except in the required volume- prefix. For example, this is valid:

    • ✅ volume-cache

    But the following is not allowed:

    • ❌ volume-my-cache

    Instead, use underscores (_) to separate words:

    • ✅ volume-cache_artifacts
  2. Prefix must be `volume-`: All volume labels must begin with the volume- prefix. This prefix identifies the label as a volume declaration and is required.

  3. Optional size suffix: You can optionally specify a volume size in gigabytes by appending -<size>GB to the label:

    • ✅ volume-cache_apt-100GB
    • ✅ volume-python_wheels (defaults to 10GB)
  4. Special handling for cache-prefixed names:

    A volume whose name starts with cache is treated as the special default caching volume and is automatically used for caching. See: Volumes starting with the cache prefix.

🔄 Volume lifecycle

Caching volumes follow a defined lifecycle tied to job execution and runner provisioning:

Creation:

New volumes are created automatically when no existing volume with the requested label is available. The creation process is triggered when a new runner starts and no matching volume is found.

Attachment:

Volumes are attached only when a new runner server is created. If a server is recyclable, it retains its attached volumes between jobs. The attachment process happens after volume creation or when an existing volume is found.

Detachment:

Volumes are detached automatically when a server is deleted. After detachment, they become available to be automatically attached to a new server if they were not deactivated using the volumes deactivate command.

Deactivation and activation:

Volumes can be deactivated to prevent them from being attached to new servers, and reactivated when needed. This is useful for manual volume management. Use the volumes deactivate command (see Volume deactivation) and volumes activate command (see Volume activation).

Resizing:

Volumes can be resized in two ways:

  • Automatically by updating the size in the volume label (e.g., from -10GB to -50GB)
  • Manually using the volumes resize command

Note that you can only increase volume size, not decrease it. See Volume resizing for more details.

Reuse:

Volumes are reused across jobs and runners, grouped by:

  • Cache volume name (the name part in the volume-<name> label)
  • Architecture (x86, arm)
  • Image (OS flavor and version)
  • Location (e.g., in-fsn1, in-nbg1)

This grouping ensures compatibility and optimal performance.
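
For example, two jobs pinned to different locations (using the in-<location> labels mentioned above) will always use separate physical volumes, even though they share the cache volume name:

# Example of location-dependent volume groups (illustrative job names)
jobs:
  build_fsn1:
    runs-on: [self-hosted, type-cpx31, in-fsn1, volume-cache]

  build_nbg1:
    runs-on: [self-hosted, type-cpx31, in-nbg1, volume-cache]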

Deletion:

Volumes are never deleted automatically. To remove a volume, use the volumes delete command. Always ensure you have a backup of important data before deletion.

🎯 Volume Assignment

The volume assignment system provides granular control over how caching volumes are used in your workflows:

  1. Job-Level Control: Each job can specify exactly which volumes it needs through runner labels
  2. Size Management: Volume sizes can be tailored to specific job requirements
  3. Isolation: Different jobs can use different volumes to prevent cache contamination or thrashing (when jobs overwrite each other's cache)
  4. Sharing: Related jobs can share volumes to maximize cache efficiency
  5. Lifecycle: Volumes can be managed independently for different job types

For example, you might have:

  • A build job using a large volume for Docker images
  • A test job using a smaller volume for test artifacts
  • A deployment job using a separate volume for deployment artifacts
  • All build, test, and deployment jobs using the same volume

This flexibility ensures you can optimize your caching strategy for each specific use case.
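
For instance, a sketch of such a layout (job and volume names and sizes are illustrative):

# Example of per-job volume assignment (illustrative names and sizes)
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-docker_images-100GB]

  test:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-test_artifacts-20GB]

  deploy:
    runs-on: [self-hosted, type-cpx31, volume-deploy_artifacts]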

🎨 Design Patterns

When designing your volume assignment strategy, consider these common patterns:

  1. Shared Volume Pattern
    • Use the same volume label between jobs
    • Cache accumulates data for all jobs
    • Best for related jobs that benefit from shared dependencies
    • Example: Multiple build jobs sharing Docker images
  2. Isolated Volume Pattern
    • Use different volume names for different jobs
    • Makes volumes more specific and easier to manage
    • Best for jobs with unique caching needs, though it results in more physical volumes
    • Example: Separate volumes for build and test artifacts
  3. Hierarchical Cache Pattern
    • Use the same volume with structured subdirectories (see the sketch after this list)
    • Organize cache by version, PR, or job name
    • Example: /mnt/cache/<version>/<PR> or /mnt/cache/<job_name>
    • Best for complex workflows needing both isolation and sharing
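
As a minimal sketch of the hierarchical pattern, the following step derives a per-job subdirectory on the cache volume using the built-in GITHUB_JOB variable (the directory layout itself is an assumption):

# Example of preparing a per-job cache subdirectory (hypothetical layout)
- name: Prepare hierarchical cache directory
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
        # Use the job name as the cache subdirectory
        JOB_CACHE_DIR="/mnt/cache/$GITHUB_JOB"
        sudo mkdir -p "$JOB_CACHE_DIR"
        sudo chown ubuntu:ubuntu "$JOB_CACHE_DIR"
        # Expose the directory to later steps
        echo "JOB_CACHE_DIR=$JOB_CACHE_DIR" >> "$GITHUB_ENV"
    fi
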
✋ Note:

The volume assignment strategy is highly use-case specific. Choose the pattern that best fits your project's needs:

  • Consider job relationships and dependencies
  • Evaluate cache reuse opportunities
  • Plan for cache cleanup and volume management
  • Account for security and isolation requirements

โœ๏ธ Adding volume

To use caching volumes in your workflow job, add the volume label to your runner specification. The label format is volume-<name_without_dashes>[-<size>GB], where:

  • <name_without_dashes> is the name of your cache volume (defines the volume group). No dashes are allowed; use underscores instead.
    • ✅ volume-this_is_my_custom_volume_name-20GB (correct)
    • ❌ volume-this-is-my-custom-volume-name-20GB (incorrect)
  • <size>GB (optional) is the size in GB (e.g., 20GB); if not specified, the default is 10GB
# Example of a job using a caching volume
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-20GB]
✋ Note: Jobs that use the same volume name will share the same volume group irrespective of the volume size. See Volume lifecycle for more details.
✋ Note: If volume size is not specified, the default size is 10GB. See Volume resizing for more information about volume sizes.

In the example below, both the build and archive jobs use the same volume group, cache. If the build and archive jobs run in parallel, and there are enough runners available, then a different physical volume will be used for each job.

# Example of jobs sharing the same volume group
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]

  archive:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-20GB]

๐Ÿ“ Volume mount points

All volumes are mounted to the /mnt/<volume_name> directory. By default, the ownership is set to ubuntu:ubuntu.

For example:

  • volume-cache → /mnt/cache
  • volume-artifacts → /mnt/artifacts
  • volume-cache_builds → /mnt/cache_builds
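
To verify which volumes are mounted and how much free space they have, a simple sketch (assuming at least one volume label was used):

# Example of inspecting volume mount points
- name: Show volume mounts
  if: always()
  shell: bash
  run: |
    # Report disk usage for each mounted volume directory
    for dir in /mnt/*; do
        [ -d "$dir" ] || continue
        df -h "$dir"
    done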

🧩 Multiple volumes

You can specify multiple caching volumes for a single job runner by adding multiple volume labels:

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-10GB, volume-artifacts-20GB]

โš™๏ธ The /etc/hetzner-volumes

The /etc/hetzner-volumes file is created automatically and describes all volumes mounted on the runner. You can cat this file in your workflow to record this information.

For example, if a volume ends up containing invalid data, you can use this file to find the volume's name and ID so that you can delete it.

# Example of checking mounted volumes
run: |
     if [ -f "/etc/hetzner-volumes" ]; then
         echo "Hetzner volumes"
         cat /etc/hetzner-volumes
     fi
# Example output showing volume details
name,id,size,mount,device,used,free,usage
cache-x86-ubuntu-unknown-1747486837257882,102587536,20GB,/mnt/cache,/dev/disk/by-id/scsi-0HC_Volume_102587536,16G,2.9G,85%

๐Ÿท๏ธ Volumes starting with the cache prefix

The first volume in the list (sorted using Python's sorted() function) whose name starts with the cache prefix is used for default caching.

Default caching will cache all resources needed for startup and setup.

✋ Note: If you don't want any caching to be enabled by default, avoid using volume names starting with the cache prefix.
❗ Warning: To avoid confusion, it is recommended to have only one volume with the cache prefix per job.

For example:

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]

or:

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache_custom_name]

The default caching on the cache volume includes the following:

  1. The /var/cache/apt/archives and /var/lib/apt/lists directories, to cache all APT packages:

    f"sudo mkdir -p /mnt/{volume_name}/apt-archives /mnt/{volume_name}/apt-lists /var/cache/apt/archives /var/lib/apt/lists"
    f"sudo mount --bind /mnt/{volume_name}/apt-archives /var/cache/apt/archives"
    f"sudo mount --bind /mnt/{volume_name}/apt-lists /var/lib/apt/lists"
  2. GitHub Actions runner binary

    See the startup script for x64: scripts/startup-x64.sh
    See the startup script for arm64: scripts/startup-arm64.sh
  3. Docker GPG key, repository list, and APT packages.

    See docker setup script: scripts/docker.sh

📦 Caching APT packages

To speed up package installation, you can cache downloaded .deb files and APT metadata by bind-mounting directories from a cache volume. See Volume mount points for information about where volumes are mounted.

Caching of APT packages is done automatically during runner setup if one of the volumes starts with the cache prefix.

โ— Warning: If you use a caching volume that starts with the cache prefix, this setup is done automatically โ€” you do not need to perform these steps manually. See Volumes starting with the cache prefix for more information.

Here is an example of how you can do it manually if you don't have any cache prefix volumes:

# Example of setting up APT package caching
- name: Setup APT cache
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
        # Define cache directories
        APT_ARCHIVES="/mnt/cache/apt-archives"
        APT_LISTS="/mnt/cache/apt-lists"

        # Create local and volume cache directories
        sudo mkdir -p "$APT_ARCHIVES" "$APT_LISTS" \
                     /var/cache/apt/archives /var/lib/apt/lists

        # Bind mount volume-backed directories
        sudo mount --bind "$APT_ARCHIVES" /var/cache/apt/archives
        sudo mount --bind "$APT_LISTS" /var/lib/apt/lists

        echo "APT cache directories mounted from volume:"
        echo "  - $APT_ARCHIVES โ†’ /var/cache/apt/archives"
        echo "  - $APT_LISTS โ†’ /var/lib/apt/lists"
    else
        echo "No APT cache volume available, proceeding without caching"
    fi
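
After the bind mounts are in place, subsequent package installs read and write the volume-backed archives and lists; for example (the package name is arbitrary):

# Example of installing packages that will use the APT cache
- name: Install packages
  shell: bash
  run: |
    sudo apt-get update
    # .deb files are downloaded into the volume-backed /var/cache/apt/archives
    sudo apt-get install -y build-essential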

๐Ÿ Caching Python modules

The easiest way to cache Python modules is by using a virtual environment (venv) and binding a cache volume folder to the venv directory. See Volume mount points for information about where volumes are mounted.

For example:

# Example of setting up Python virtual environment caching
- name: Setup Python cache
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
        # Define Python cache directory
        PYTHON_CACHE_DIR="/mnt/cache/python3.12-venv"
        mkdir -p "$PYTHON_CACHE_DIR" "$PWD/venv"
        sudo mount --bind "$PYTHON_CACHE_DIR" "$PWD/venv"
        echo "Using cached Python venv directory: $PYTHON_CACHE_DIR"
    else
        echo "No Python venv cache directory available, proceeding without caching"
    fi

To create and activate the virtual environment:

- name: Create and activate Python virtual environment
  shell: bash
  run: |
    sudo apt-get install -y python3.12-venv

    echo "Creating and activating Python virtual environment..."
    if [ ! -f venv/bin/activate ]; then
        python3 -m venv venv
    fi

    source venv/bin/activate
    echo "PATH=$PATH" >> "$GITHUB_ENV"

๐Ÿณ Caching Docker

Caching Docker images and build layers can be tricky, depending on your use case. See Volume mount points for information about where volumes are mounted.

Here are some caching techniques that do not work:

  1. Creating symlinks from cache volume directories to /var/lib/docker directories.
  2. Using overlayfs for /var/lib/docker itself, as Docker relies on its own use of overlayfs.
  3. Using a plain cp command without the -a argument, which can break Docker.
  4. Failing to synchronize the /var/lib/docker/image and /var/lib/docker/overlay2 folders: the image folder contains metadata referencing contents of overlay2 and expects the overlay2 folder contents to be correct.

Here are some caching techniques that work but are not optimal:

  1. Using cp -a for setup and rsync -aH --delete for syncing back; the initial cp -a copy can be slow depending on the cache size.

๐Ÿฐ Simple Docker caching

The simple use case is when you can get away with just mounting a directory located on the caching volume directly to /var/lib/docker.

This works under the following conditions:

  1. During runtime, you don't create very large volumes that live in /var/lib/docker/volumes.
  2. During runtime, you don't run containers that write excessively to the container filesystem, which Docker implements as a writable layer stored in the /var/lib/docker/overlay2 folder.

If one of the above conditions is not met, the size of your caching volume will have to accommodate the peak image, volume, and filesystem usage during job runtime. For heavy test regression jobs, this could mean very large caching volumes (e.g., 100GB or more), which could be very expensive, especially with high concurrent job counts.

Here is an example of how to set up Docker caching for the simple case:

- name: Setup Docker cache
  shell: bash
  run: |
    if ! systemctl is-active --quiet docker; then
        echo "Docker is not running, skipping Docker cache setup"
        exit 0
    fi

    if [ -d "/mnt/cache" ]; then
        DOCKER_CACHE_DIR="/mnt/cache/docker"
        echo "Using docker cache directory: $DOCKER_CACHE_DIR"

        echo "Stopping Docker to prepare cache"
        sudo systemctl stop docker
        sudo sync

        # Create cache directory if it doesn't exist
        sudo mkdir -p "$DOCKER_CACHE_DIR"

        # Mount the cache directory to /var/lib/docker
        sudo mount --bind "$DOCKER_CACHE_DIR" /var/lib/docker

        sudo sync
        sudo systemctl start docker
    else
        echo "No docker cache directory available, proceeding without caching"
    fi

and here is an example of a step that syncs the Docker cache:

- name: Sync Docker cache
  shell: bash
  run: |
    if ! command -v docker >/dev/null; then
        echo "Docker is not installed, skipping cache sync"
        exit 0
    fi

    if [ -d "/mnt/cache" ]; then
        echo "Stopping containers and cleaning up..."
        sudo docker stop $(sudo docker ps -q) || true
        sudo docker rm -fv $(sudo docker ps -a -q) || true

        echo "Removing all Docker volumes..."
        sudo docker volume rm $(sudo docker volume ls -q) || true

        echo "Stopping Docker daemon"
        sudo systemctl stop docker
        sudo sync

        # Since we're using a direct bind mount, no sync is needed
        # The cache is automatically updated as Docker writes to /var/lib/docker

    else
        echo "/mnt/cache not available โ€” skipping Docker cache sync"
    fi

🧠 Advanced Docker caching

Advanced Docker caching is required when the runtime storage footprint is too large to accommodate simply by increasing the caching volume size.

This means you have jobs that hit one of the following conditions, which prevent you from simply mounting a cache directory to /var/lib/docker:

  1. During runtime, you create very large volumes that live in /var/lib/docker/volumes.
  2. During runtime, you run containers that write excessively to the container filesystem, which Docker implements as a writable layer stored in the /var/lib/docker/overlay2 folder.

Therefore, you must selectively cache only parts of the contents of /var/lib/docker.

To achieve image and build layer caching, we must cache the following:

  • /var/lib/docker/image - contains layer metadata (fully)
  • /var/lib/docker/buildkit - contains BuildKit data (fully)
  • /var/lib/docker/overlay2 - contains image and container layers (partially)

It is the requirement to partially cache the contents of /var/lib/docker/overlay2 that is tricky.

However, it is possible to achieve efficient caching using the following technique:

  1. The image and buildkit folders can be copied from the cache with cp -a and rsynced back directly, as they do not consume much space.
  2. Before caching the contents of the overlay2 folder, all containers must be stopped and removed. Then we selectively rsync only the new folders while skipping the folders that were bind-mounted during setup. This has to be done at the overlay2 subfolder level.

Here is an example of how to set up advanced Docker caching:

- name: Setup Docker cache
  shell: bash
  run: |
    if ! systemctl is-active --quiet docker; then
        echo "Docker is not running, skipping Docker cache setup"
        exit 0
    fi

    if [ -f "/etc/hetzner-volumes" ]; then
        echo "Hetzner volumes"
        cat /etc/hetzner-volumes
    fi

    if [ -d "/mnt/cache" ]; then
        DOCKER_CACHE_DIR="/mnt/cache/docker"
        echo "Using docker cache directory: $DOCKER_CACHE_DIR"

        echo "Stopping Docker to prepare cache"
        sudo systemctl stop docker
        sudo sync

        if [ -d "$DOCKER_CACHE_DIR/overlay2" ]; then
            echo "Restoring overlay2 from cache"
            sudo rm -rf "/var/lib/docker/overlay2"
            targets=$(sudo find "$DOCKER_CACHE_DIR/overlay2" -mindepth 1 -maxdepth 1)
            if [ -z "$targets" ]; then
                echo "โš ๏ธ  No entries found in $DOCKER_CACHE_DIR/overlay2 โ€” skipping"
            else
                for target in $targets; do
                    id=$(basename "$target")
                    echo "Mounting $target to /var/lib/docker/overlay2/$id"
                    sudo mkdir -p "/var/lib/docker/overlay2/$id"
                    sudo mount --bind "$target" "/var/lib/docker/overlay2/$id"
                    echo "/var/lib/docker/overlay2/$id" | sudo tee -a /etc/docker-cache-mounts > /dev/null
                done
            fi
        fi

        for DIR in image buildkit; do
            if [ -d "$DOCKER_CACHE_DIR/$DIR" ]; then
                echo "Restoring $DIR from cache"
                sudo rm -rf "/var/lib/docker/$DIR"
                sudo cp -a "$DOCKER_CACHE_DIR/$DIR" "/var/lib/docker/$DIR"
            fi
        done

        sudo sync
        sudo systemctl start docker
    else
        echo "No docker cache directory available, proceeding without caching"
    fi

Here is an example of syncing the Docker cache:

- name: Sync Docker cache
  shell: bash
  run: |
    if ! command -v docker >/dev/null; then
        echo "Docker is not installed, skipping cache sync"
        exit 0
    fi

    if [ -d "/mnt/cache" ]; then
        echo "Stopping containers and cleaning up..."
        sudo docker stop $(sudo docker ps -q) || true
        sudo docker rm -fv $(sudo docker ps -a -q) || true

        echo "Stopping Docker daemon"
        sudo systemctl stop docker
        sudo sync

        echo "Syncing docker folders to cache"
        sudo mkdir -p /mnt/cache/docker

        if sudo test -d "/var/lib/docker/overlay2"; then
            sudo mkdir -p /mnt/cache/docker/overlay2
            targets=$(sudo find "/var/lib/docker/overlay2" -mindepth 1 -maxdepth 1)
            if [ -z "$targets" ]; then
                echo "โš ๏ธ  No entries found in /var/lib/docker/overlay2 โ€” skipping"
            else
                for target in $targets; do
                    id=$(basename "$target")
                    if [ ! -f /etc/docker-cache-mounts ] || ! grep -Fxq "$target" /etc/docker-cache-mounts; then
                        sudo rsync -aH --delete "$target/" /mnt/cache/docker/overlay2/$id/
                    fi
                done
            fi
        fi

        for DIR in image buildkit; do
            sudo rsync -aH --delete /var/lib/docker/$DIR/ /mnt/cache/docker/$DIR/
        done
        sudo sync
    else
        echo "/mnt/cache not available โ€” skipping Docker cache sync"
    fi

📊 Monitoring

To monitor your cache volumes inside your workflow, here are some useful commands. See Volume mount points for information about where volumes are mounted.

# Example of monitoring Docker cache usage
- name: 📊 Show Docker disk usage
  if: always()
  run: docker system df

- name: 📂 Show cache directory size
  if: always()
  run: du -sh /mnt/cache/docker

- name: 📦 Show Docker images
  if: always()
  run: docker images

- name: 🧹 Clean up unused images
  if: always()
  run: docker image prune -a -f

- name: 📋 Show Hetzner volumes
  if: always()
  run: |
    if [ -f "/etc/hetzner-volumes" ]; then
        echo "Hetzner volumes"
        cat /etc/hetzner-volumes
    fi

- name: 🧼 Clean up containers and volumes
  if: always()
  run: |
    # Stop and remove all containers
    echo "Stopping containers and cleaning up..."
    docker stop $(docker ps -q) || true
    docker rm -fv $(docker ps -a -q) || true

    # Remove all Docker volumes
    echo "Removing all Docker volumes..."
    docker volume rm $(docker volume ls -q) || true
โ— Warning: Regularly monitor your cache size and implement cleanup strategies to prevent the cache from growing too large. Always check Hetzner console periodically.

To monitor your existing cache volumes:

github-hetzner-runners volumes list

๐Ÿ“ Volume resizing

Volumes can be resized either by updating the volume label or by using the volumes resize command. See Volume lifecycle for more information about volume management.

✋ Note: You can only increase volume size. Decreasing volume size is not supported.
❗ Warning: If you need a smaller volume, you will have to delete the existing volume and create a new one.

For example,

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]

To resize the volume cache from the default 10GB to 20GB, update the label from volume-cache to volume-cache-20GB and rerun your workflow.

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-20GB]

🔴 Volume deactivation

By default, all caching volumes are selected using the github-hetzner-runner-volume=active label, which is added to each volume during creation. See Volume lifecycle for more information.

You can use the volumes deactivate command to change the volume label to github-hetzner-runner-volume=inactive, which will prevent the volume from being selected for attachment to a new server.

✋ Note: Deactivating a volume does not detach it from any currently bound server.

The recommended procedure to deactivate a volume:

  1. Run the volumes deactivate command.
  2. If the volume is currently bound (check using the volumes list command), delete the server to which the volume is bound to detach it.
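
For example, to deactivate all volumes matching a group name (the cache group name here is illustrative) and then check their state:

# deactivate the example 'cache' volume group, then list volumes
github-hetzner-runners volumes deactivate -n cache
github-hetzner-runners volumes list
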
usage: github-hetzner-runners volumes deactivate [-h] [-n name] [-v name] [--id id] [--all]

Deactivate volumes. This will prevent the volume to be attached to a new server.

options:
-h, --help            show this help message and exit
-n name, --name name  deactivate all volumes matching name
-v name, --volume name
                        deactivate by volume name
--id id               deactivate by volume id
--all                 deactivate all volumes

🟢 Volume activation

You can use the volumes activate command to change the volume label back to github-hetzner-runner-volume=active so that the volume becomes available for attachment during the next scale-up cycle after it was deactivated using the volumes deactivate command. See Volume deactivation and Volume lifecycle for more information.

usage: github-hetzner-runners volumes activate [-h] [-n name] [-v name] [--id id] [--all]

Activate volumes. This will allow the volume to be attached to a new server.

options:
-h, --help            show this help message and exit
-n name, --name name  activate all volumes matching name
-v name, --volume name
                        activate by volume name
--id id               activate by volume id
--all                 activate all volumes

📃 Listing volumes

Use the volumes list command to list current volumes.

usage: github-hetzner-runners volumes list [-h] [-n name] [-v name] [--id id] [--all]

List volumes.

options:
-h, --help            show this help message and exit
-n name, --name name  list all volumes matching name
-v name, --volume name
                        list by volume name
--id id               list by volume id
--all                 list all volumes (default if no other options are provided)
Example:

github-hetzner-runners volumes list
20:34:13 🍀 Logging in to Hetzner Cloud
20:34:13 🍀 Getting a list of volumes
status     state,   name, actual name, id, size, location, server, created, format
🟢 available  active,  cache_pull_100_docker_images, cache_pull_100_docker_images-x86-ubuntu-22.04-1747506174049519, 102588112, 200GB, nbg1, none, 2025-05-17 18:22:57, ext4

๐Ÿ—‘๏ธ Deleting volumes

Use the volumes delete command to delete volumes.

usage: github-hetzner-runners volumes delete [-h] [-n name] [-v name] [--id id] [--all] [-f]

Delete volumes.

options:
-h, --help            show this help message and exit
-n name, --name name  delete all volumes matching name
-v name, --volume name
                        delete by volume name
--id id               delete by volume id
--all                 delete all volumes
-f, --force           force delete volumes even if they are attached to a server
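
For example, to delete a specific volume by its ID (taken from the volumes list output) or all volumes in a group (the cache group name here is illustrative):

# delete one volume by id (id taken from 'volumes list')
github-hetzner-runners volumes delete --id 102588112
# delete all volumes matching the 'cache' group name
github-hetzner-runners volumes delete -n cache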

🔒 Security considerations

When using caching volumes with self-hosted runners, it's important to understand the security implications:

  1. Volumes are shared between runners and jobs

    Caching volumes are persistent and may be reused by different runners across multiple jobs and workflows. As such, cached data is not isolated: one job may read from or write to a cache created by another.

  2. Avoid using shared volumes with untrusted pull requests

    If your repository accepts contributions from external contributors, do not use shared caching volumes when running jobs triggered by untrusted pull requests. These jobs could potentially read or overwrite cached files, leading to information disclosure or cache poisoning.

    For guidance, refer to GitHub's official documentation: Security hardening for GitHub Actions - Self-hosted runners.

Best practices:
  • Use separate cache volumes for trusted and untrusted workflows.
  • Avoid mixing trusted and untrusted jobs in the same workflow.
  • Apply strict job labels or filters to prevent cross-contamination between environments.
  • Regularly audit volume contents and permissions if caching sensitive dependencies.
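
For example, a sketch of keeping trusted and untrusted jobs on separate volume groups (the label names are illustrative):

# Example of separate volume groups for trusted and untrusted jobs
jobs:
  trusted_build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-trusted_cache]

  untrusted_pr_check:
    runs-on: [self-hosted, type-cpx31, volume-untrusted_scratch]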

💰 Estimating costs

To estimate the monthly costs for maximum volume usage (in GB) for your caching setup, use the following formula:

max_volume_usage =
    peak_concurrent_jobs *
    total_volume_size_per_job_gb *
    num_locations *
    num_architectures *
    num_os_flavors

Where:

  • peak_concurrent_jobs: the highest number of jobs that may run at the same time
  • total_volume_size_per_job_gb: sum of all volume sizes (in GB) that a single job may require
  • num_locations: number of Hetzner regions used (e.g., fsn1, nbg1)
  • num_architectures: number of CPU architectures (e.g., x86, arm)
  • num_os_flavors: number of OS flavor + version combinations (e.g., Ubuntu 22.04, Debian 12)

For example, if you expect:

  • peak_concurrent_jobs = 10
  • total_volume_size_per_job_gb = 50
  • num_locations = 2
  • num_architectures = 2
  • num_os_flavors = 1

Then,

max_volume_usage = 10 x 50 x 2 x 2 x 1 = 2000 (GB).

This means the service may provision up to 2000 GB of volume storage to cover the worst case.

With the current cost of €4.40 per month per 100 GB:

2000 / 100 x 4.40 = €88.00 per month
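
The same estimate can be scripted; a minimal sketch using the example numbers above (the €4.40 rate is the figure quoted here and should be checked against current pricing):

# Example of estimating maximum monthly volume cost
peak_concurrent_jobs=10
total_volume_size_per_job_gb=50
num_locations=2
num_architectures=2
num_os_flavors=1

max_volume_usage=$((peak_concurrent_jobs * total_volume_size_per_job_gb * num_locations * num_architectures * num_os_flavors))
# 4.40 EUR per 100 GB per month (verify against current Hetzner pricing)
awk -v gb="$max_volume_usage" 'BEGIN { printf "%d GB -> EUR %.2f per month\n", gb, gb / 100 * 4.40 }'
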
✋ Note: Refer to the official pricing page for the most up-to-date storage rates: https://www.hetzner.com/cloud/.

💸 Reminder: monitor costs and clean up

While caching volumes greatly improve performance and pipeline stability, they also consume persistent storage that accumulates over time and directly contributes to your Hetzner Cloud costs.

To avoid unnecessary expenses:

  • 🧹 Regularly clean up unused or outdated volumes using the volumes delete command
  • 🧠 Design workflows with optional cache cleanup steps (e.g., deleting large temporary files)
  • 📊 Monitor usage and volume count with volumes list and through the Hetzner Console
  • 📏 Set appropriate volume sizes instead of relying on the default (10GB)

You are responsible for volume lifecycle management. Unused volumes will remain active until manually removed.

โš ๏ธ **GitHub.com Fallback** โš ๏ธ