Running GitHub CI locally with nektos act (Docker) in LXC - lmmx/devnotes GitHub Wiki
To debug your CI locally, it can help to containerise.
When you run an environment (usually for tests and builds) on GitHub Actions CI, checking the effect of a change in some code (either the code being tested/built itself, or the Action YAML that runs said code on a "runner" in the cloud) means pushing changes to GitHub (remote) and sitting and watching the logs.
The alternative is to recreate that runner on your local machine, and rather than push changes, just test them out and then find what works without waiting for the web-based relay race to take place.
Basically, it means faster iteration, so it's worth the hassle of setting up. It was undeniably a hassle, so here are my notes on getting it working.
To keep everything nicely self-contained, I chose to put the `act` tool in `lxc` (the Linux Containers tool), which means there are two layers of containerisation at play: the entire testing setup goes in LXC, and within the LXC container each CI job spawns Docker containers. If anything goes wrong, you can just tear down the entire `lxc` container, or roll it back, and you know you're starting with a clean 'guest' environment (unlike if you just started on your 'host' machine), whose environment variables etc. are more predictable.
- Note: the benefit of doing this is solely for iterating on the YAML config for GitHub Actions, stored in `.github/workflows/` in your repo. It's an essential part of the routine here that the containers themselves, which are spawned by this GitHub Actions YAML config, first pull in a fresh copy of the repo using git.
- To be clear: the purpose of using `act` is not to iterate on your code base (any edits to the repo that aren't pushed to GitHub will not be run in the Docker containers in the "local runner"), but to iterate on the Actions YAML config (which is the only part of the repo that `act` uses).
lxc launch images:ubuntu/20.04/cloud ubu20ci
lxc config set ubu20ci security.privileged true
lxc config set ubu20ci security.nesting true
lxc restart ubu20ci
lxc exec ubu20ci -- sudo --user ubuntu --login
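The five commands above can be wrapped in a small script for repeatability. Below is a sketch under my own assumptions: the `run` helper and the `CONTAINER`/`DRY_RUN` variables are illustrative additions, not part of `lxc` itself; with `DRY_RUN=1` (the default here) each command is only printed for review, not executed.

```shell
#!/bin/sh
# Sketch: parameterised version of the LXC container setup above.
# CONTAINER and DRY_RUN are hypothetical knobs for this script only.
CONTAINER="${CONTAINER:-ubu20ci}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"        # preview the command
  else
    "$@"             # actually execute it
  fi
}

run lxc launch images:ubuntu/20.04/cloud "$CONTAINER"
run lxc config set "$CONTAINER" security.privileged true
run lxc config set "$CONTAINER" security.nesting true
run lxc restart "$CONTAINER"
run lxc exec "$CONTAINER" -- sudo --user ubuntu --login
```

Set `DRY_RUN=0` once the previewed commands look right.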
To run the `.github/workflows/` folder of your repo, you'll need `act`, which relies on Docker, so this'll involve putting Docker inside LXC (which is possible!)
- Any time you don't want to run a bunch of installations (especially for testing and experimenting), it's fine to just throw it all inside an LXC container, which (being developed by Canonical) is very un-troublesome in my experience
First get Docker Engine for the Ubuntu container (hint: you can copy all of the following block into a script and run it with `sh setup.sh`, then delete with `rm setup.sh`)
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
#VERSION_STRING=$(echo $(apt-cache madison docker-ce | head -1 | cut -d "|" -f 2))
VERSION_STRING="5:20.10.7~3-0~ubuntu-focal"
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io
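The commented-out line derives `VERSION_STRING` from the first row of `apt-cache madison docker-ce` output; the unquoted `echo $(...)` trick is what strips the padding spaces around the `|`-delimited field. Here's a sketch of that parsing against a hypothetical sample line (the real input would come from your apt cache):

```shell
# Hypothetical sample of one `apt-cache madison docker-ce` output line
SAMPLE='docker-ce | 5:20.10.7~3-0~ubuntu-focal | https://download.docker.com/linux/ubuntu focal/stable amd64 Packages'

# Field 2 of the pipe-delimited line; word-splitting of the unquoted
# $(...) inside echo trims the surrounding whitespace
VERSION_STRING=$(echo $(echo "$SAMPLE" | head -1 | cut -d "|" -f 2))
echo "$VERSION_STRING"
```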
At this point:
sudo docker info
should succeed.
- If you omitted the `security.nesting`/`security.privileged` config at the beginning, then `sudo docker run hello-world` will fail with the following error:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "proc" to rootfs at "/proc" caused: mount through procfd: permission denied: unknown.
ERRO[0003] error waiting for container: context canceled
- Ongoing thread on this bug (still updating in recent days) here
The current resolution is the one I gave above: enable nesting and privileged containers (as described here, but note that adding the "raw.lxc" options suggested there will break it, so don't follow that advice).
Optionally, snapshot the LXC container at this point (from a different terminal window, not the one running inside the container!)
- Note: "publish" refers to your local (on-disk) image index, not anywhere online
lxc snapshot ubu20ci dockerubu20ci
lxc publish ubu20ci/dockerubu20ci --alias ubu20_docker
- To roll back to this snapshot, run the following before proceeding from the restore point in the guide above:
lxc stop ubu20ci
lxc restore ubu20ci dockerubu20ci
lxc start ubu20ci
lxc exec ubu20ci -- sudo --user ubuntu --login
- Note: if you ever snapshot and try to roll back but get the error:
Error: Snapshot "dockerubu20ci" cannot be restored due to subsequent snapshot(s). Set zfs.remove_snapshots to override
then run `lxc storage set default volume.zfs.remove_snapshots=true` before restoring
Now that the LXC container is snapshotted, install the GitHub Actions workflow "local task runner", `act`:
curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
Now that the LXC container is all set up, you can put your repo in:
# change to your info
USER_NAME="lmmx"
REPO_NAME="range-streams"
sudo apt-get install -y git
GIT_URL="https://github.com/$USER_NAME/$REPO_NAME" # SSH will not work without key setup
git clone "$GIT_URL"
cd "$REPO_NAME"
sudo act
- Note: I selected the default option "Medium" on the first run, no other configuration was required after this
This gave me various errors, which leads back to the following advice:
This action was written to run on the default Github containers which already have conda installed, and it uses this installation + the CONDA environment variable it set to activate conda.
Your workflow on the other hand is using a different container which doesn't have conda installed and the environment variable set, thus it fails to set the right path.
You might want to use setup-miniconda with
miniconda-version: "latest"
instead since this will download and install miniconda in your container.
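If you take that suggestion instead, the `with:` input on the existing step would look like the following sketch (I went the manual-install route described below instead):

```yaml
- name: "Setup miniconda"
  uses: "conda-incubator/setup-miniconda@v2"
  with:
    miniconda-version: "latest"
```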
You need to put miniconda in the Action, since it doesn't exist in the Docker image that `act` is running. The alternative being suggested above, `setup-miniconda`, despite the name doesn't actually install miniconda: it only finds one on the system and uses it. Since the Docker container being used locally to mimic the GitHub Actions runner's environment doesn't come with miniconda, you'll need to install miniconda in it "manually" (automated in code, of course).
- Note: your Docker container doesn't see the LXC container, so installing conda there makes no difference
The `setup-miniconda` docs note:
This action sets up a Miniconda installation to use the Conda package and environment manager by either locating the Miniconda installation bundled with the available runners or by installing a specific Miniconda3 version. By default this action will also create a test environment.
Despite this claim, I never spotted the docs on how to "install a specific Miniconda3 version" (let me know if you did!) and just installed it with the standard Linux installer script.
The aforementioned list of pre-installed software for the Actions runners lists:
- Miniconda 4.9.2
- Python 3.8
- this is marked "latest" while 3.9 is marked "3.9" on the download page
Specifying the path to the installation directory requires setting the `CONDA` environment variable, and passing it through in the `--env` flag (this can be passed multiple times to create a `stringArray`, i.e. a list of strings). The code underneath this process is TypeScript (NodeJS with types).
`--env INSTALL_MINICONDA=$CONDA` is going to be the "gate" to a pre-`setup-miniconda` step (so the Action can find the miniconda installation), which I've called "Install miniconda". It goes in the YAML between the `actions/checkout@v2` and `conda-incubator/setup-miniconda@v2` tasks:
- uses: "actions/checkout@v2"
- name: "Setup miniconda"
  uses: "conda-incubator/setup-miniconda@v2"

⇣

- uses: "actions/checkout@v2"
- name: "Install miniconda"
  run: |
    if [ ! -z $INSTALL_MINICONDA ]; then source tools/github/install_miniconda.sh; fi
- name: "Setup miniconda"
  uses: "conda-incubator/setup-miniconda@v2"
The `! -z` means "is not empty", i.e. it checks whether the `INSTALL_MINICONDA` environment variable was set (which will be done by the `act --env` flag and will not be set in the GitHub runner). If it is set, it calls the script at `tools/github/install_miniconda.sh`, which downloads and installs miniconda.
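This gate can be sanity-checked in a plain shell, no `act` needed. A minimal sketch (the `gate` function and `RESULT` variable are just for illustration):

```shell
# Succeeds only when its argument is non-empty, mirroring the
# `[ ! -z $INSTALL_MINICONDA ]` test in the workflow step
gate() { [ ! -z "$1" ]; }

unset INSTALL_MINICONDA
if gate "$INSTALL_MINICONDA"; then RESULT="install"; else RESULT="skip"; fi
echo "$RESULT"    # unset variable (GitHub runner): the install step is skipped

INSTALL_MINICONDA="/usr/share/miniconda"
if gate "$INSTALL_MINICONDA"; then RESULT="install"; else RESULT="skip"; fi
echo "$RESULT"    # set (as `act --env` would): the install step runs
```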
The install_miniconda.sh
script is the following:
#!/usr/bin/env bash
if [ -z "$CONDA" ]; then
  echo "Error: no CONDA environment variable is set" 1>&2
  exit 5
fi
MINICONDA_URL="https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh"
wget $MINICONDA_URL
MINICONDA_SH="Miniconda3-latest-Linux-x86_64.sh"
sudo bash $MINICONDA_SH -b -p $CONDA
rm $MINICONDA_SH # Clean up after installing
- Note: the `conda init` is handled automatically (as long as you change the default shell, see below)
- Note: I got inspiration for this approach from scikit-image
In all, the approach after setting up Docker was:
export CONDA=/usr/share/miniconda
curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
# change to your info
USER_NAME="lmmx"
REPO_NAME="range-streams"
sudo apt-get install -y git
GIT_URL="https://github.com/$USER_NAME/$REPO_NAME" # SSH will not work without key setup
git clone "$GIT_URL"
cd "$REPO_NAME"
sudo act --env CONDA=$CONDA --env INSTALL_MINICONDA=$CONDA
With this, given the right Actions YAML in the .github/workflows/
directory,
the environment will recreate what takes place on the runner.
Activating the environment
The installed conda environment will not activate properly by default, since the shell used in GitHub Actions is `bash --noprofile`.
Bash shells do not use `~/.profile` or `~/.bashrc`, so these shells need to be explicitly declared as `shell: bash -l {0}` on steps that need to be properly activated (or use a default shell). This is because bash shells are executed with `bash --noprofile --norc -eo pipefail {0}`, thus ignoring updates to bash profile files made by `conda init bash`. See the GitHub Actions documentation and thread.
Instead, you want to change the default shell to bash -l {0}
, which will
avoid specifying it in every job.
- See the docs
The following should be added to the job(s) that you want to use miniconda:
defaults:
run:
shell: bash -l {0}
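For context, here's a sketch of where that block sits within a job (the job name `install-dev` and the runner label are placeholders from my own setup):

```yaml
jobs:
  install-dev:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}   # login shell, so conda init's profile edits apply
    steps:
      - uses: "actions/checkout@v2"
      - name: "Setup miniconda"
        uses: "conda-incubator/setup-miniconda@v2"
```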
Further act tips
To run a particular job, add the -j
flag, e.g. for me -j "install-dev"
let me focus on just the dev packaging, but you may also have "tests".
- Hint: the titles are the first layer of indented entries below the "jobs:" level in the workflow YAML file.
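You can also recover those job names with a quick `sed` over the workflow file. A sketch against an inline sample (two-space indentation assumed; `act -l` will list the jobs for you as well):

```shell
# Inline stand-in for a workflow file; in practice you'd read
# .github/workflows/<your-workflow>.yml instead
SAMPLE_WORKFLOW='jobs:
  install-dev:
    runs-on: ubuntu-latest
  tests:
    runs-on: ubuntu-latest'

# Top-level keys under "jobs:" sit at exactly two spaces of indentation
JOB_NAMES=$(echo "$SAMPLE_WORKFLOW" | sed -n 's/^  \([A-Za-z0-9_-]*\):$/\1/p')
echo "$JOB_NAMES"
```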
See also
- Python in GitHub Actions by Hynek Schlawack (2020)