Linux local development platform
Assumptions:
- the chiller repository is checked out into `~/chiller`
- the environment is Linux
A Python virtual environment using the venv module is created for each group. This remaps the commands python3 and pip (among others) to a wrapper in the virtual environment, and any Python packages installed are installed under the virtual environment. The virtual environments are stored in the directory `~/run/<envname>`, e.g., `~/run/chiller_api` for the chiller_api component. The examples below use chiller_api; substitute the environment name for each.
To create the venv:
```
mkdir ~/run/chiller_api
cd ~/run/chiller_api
python3 -m venv .venv
```
To activate the venv:
```
. ~/run/chiller_api/.venv/bin/activate
```
To deactivate, type `deactivate` in the shell.
When I do this again I should use a descriptive name instead of ".venv" so that the way it modifies my shell prompt is more useful.
After activating an environment, install the dependent packages. These are listed in the requirements.txt and test-requirements.txt files in the relevant subdirectory. E.g., for the chiller API:
```
. ~/run/chiller_api/.venv/bin/activate
pip3 install -r ~/chiller/api/requirements.txt
pip3 install -r ~/chiller/api/test-requirements.txt
```
Note that it is important to install pytest and tox through pip3 while in a virtual environment so that they use the proper python3 path. Type `which python3` while in a virtual environment to see what I mean. pytest can be flaky, and may require multiple activate/deactivate cycles, plus possibly installing it via pip under the venv, to get it to pick up all the packages installed in the venv.
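Equivalently, a quick check from inside Python confirms which interpreter is active:
```python
# Run inside the activated venv; both paths should point under
# ~/run/chiller_api/.venv when the environment is active.
import sys

print(sys.executable)  # the python3 binary actually being used
print(sys.prefix)      # the root of the active virtual environment
```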
Set up github remote authentication. We can do this with access tokens or ssh keys. This will describe how to use fine-grained access tokens.
- Install the GitHub CLI with `apt-get install -y gh`
- Create a fine-grained access token and copy it to the clipboard
- In a Linux shell, type `gh auth login`. It will take you through an interactive process to add the token
- In a Linux shell, type `gh auth setup-git`

Now you can use the token during HTTPS authentication (e.g., when typing `git push -u origin main`).
To run Swagger Editor on the local Windows machine:
```
docker run -p 80:8080 -e URL="https://raw.githubusercontent.com/lago-morph/chiller/main/chiller_api.yaml" swaggerapi/swagger-editor
```
It is then accessible at http://localhost. You can also run this detached with the `-d` argument. To stop a detached container, run `docker ps` to find the container ID, then `docker kill <container ID>`.
Code is generated from the OpenAPI specification `chiller_api.yaml` in the root directory of the project. There are two scripts to do the generation, also in the root directory; they run the code generator from a docker container. Commit all changes to the affected part of the project before running the code generation, then `git add` only the parts you want to keep, commit them, and finally run `git checkout --force` to restore everything else in the working directory from the last commit.
Generating files will destructively overwrite files in the working directory. Use caution and make sure everything valuable is committed via git before running.
To generate the backend service files:
```
cd ~/chiller
. api_gen_cmd
```
And for the SDK:
```
cd ~/chiller
. sdk_gen_cmd
```
There are two unit test environments:
- chiller_api - service development, local sqlite3 database
- chiller_frontend - front end pages, SDK calls patched out

The chiller_sdk is not unit tested because all code is generated.
There are two integration test environments:
- chiller_sdk - integration between generated SDK and chiller_api
- chiller_frontend - test of the front end with a live api back-end, both scripted with pytest and accessible via a browser
There is a single dockerized test environment.
The database is PostgreSQL. It runs in a docker container.
Current procedures:
- `make start-postgres` starts the postgres container with a self-contained database (the files go away when the container is stopped)
- `make init-postgres` creates the required tables
- `make stop-postgres` stops the postgres container (it will be deleted automatically)
To run the database on the local Linux host in the current directory, do the following. We will mount the directory ./pgdata into the container to hold the database. These instructions assume that the password is stored in an environment variable and is never seen by the user. When deployed into k8s, it will generate a k8s secret. For testing, just remember to use a fresh database each time (or store the test database password somewhere). When pgdata is an empty directory, this will create a user postgres and a database postgres. If pgdata is not empty, it is not clear whether the POSTGRES_PASSWORD argument does anything; I believe it only applies when a new database is being initialized.
```
mkdir pgdata
export PGPASSWORD=`openssl rand -hex 24`
echo $PGPASSWORD > ~/pgpass
docker run --rm --name chiller-postgres -e POSTGRES_PASSWORD=$PGPASSWORD -e POSTGRES_DB=chiller -v $PWD/pgdata:/var/lib/postgresql/data -d -p 5432:5432 postgres:16.3-alpine
psql -h localhost -U postgres -d postgres
```
If `psql` is not found, on Ubuntu fix that with `sudo apt install postgresql-client`.
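As an alternative to psql, a minimal connectivity check can be done from Python. This is only a sketch and assumes psycopg2 is installed in the active venv (it is not necessarily part of the project's requirements files):
```python
# Hedged sketch: verify the containerized Postgres is reachable from the host.
# Assumes psycopg2 (or psycopg2-binary) has been pip-installed in the venv.
import os
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="postgres",
    dbname="postgres",
    password=os.environ["PGPASSWORD"],  # same value written to ~/pgpass above
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
```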
When done, remember to remove the pgdata directory if just doing testing. If you want to connect to an existing database, don't pass in the POSTGRES_PASSWORD argument (but remember to save the value generated when the database being used was first created).
Current procedure:
- `make api-unit` runs the unit test on chiller_api
The chiller_api stub code was generated by swagger-codegen. It uses Flask as a base, with Connexion (2.x) as an extension layer on top of Flask. How the application is created is somewhat confusing and depends on the invocation method:
- `flask --app chiller_api run` uses the create_app method in chiller_api/__init__.py
- `python3 -m chiller_api` uses the create_app method in chiller_api/__main__.py
- `pytest` in the test directory uses the create_app method in class BaseTestCase in test/__init__.py
Since I have not messed with integration with a WSGI server (e.g., gunicorn), I will have to revisit this for later tests/deployment.
The chiller_api service uses a PostgreSQL database running in a docker container (see above for the procedure to run this container). When running via either of the first two methods above, it uses a database named "chiller". When running under pytest, a temporary database is created for each test case, and at the end of each test the temporary database is dropped. There is a procedure (see below) to re-initialize the "chiller" database. The init-db command is defined in the chiller_api/db/db.py file, and is registered as a flask command during the Flask/Connexion app creation process in chiller_api/__init__.py.
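For reference, here is a hypothetical sketch of the pattern described above (a Connexion 2.x app factory that also registers the init-db Flask CLI command). This is not the project's actual code; the spec file name, specification directory, and db module layout are assumptions:
```python
# chiller_api/__init__.py -- illustrative only, not the real source.
import connexion
import click
from flask.cli import with_appcontext


@click.command("init-db")
@with_appcontext
def init_db_command():
    """Create the database tables (the real logic lives in chiller_api/db/db.py)."""
    from chiller_api.db import db  # assumed module layout
    db.init_db()                   # assumed function name


def create_app():
    # Connexion 2.x wraps a Flask app and wires routes from the OpenAPI spec.
    connexion_app = connexion.App(__name__, specification_dir="./openapi/")
    connexion_app.add_api("chiller_api.yaml")   # spec location is an assumption
    flask_app = connexion_app.app               # underlying Flask instance
    flask_app.cli.add_command(init_db_command)  # enables `flask --app chiller_api init-db`
    return flask_app
```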
```mermaid
flowchart LR
classDef containerClass fill:#bfb
classDef dbClass fill:#8df
classDef userClass fill:#fcc
classDef ingressClass fill:#ffa
subgraph devpc["Developer PC"]
subgraph vmware["VMWare Workstation Pro"]
subgraph ubuntu["Ubuntu 24.04 LTS"]
direction LR
db[("`Postgres files
Docker volume mount`")]:::dbClass
subgraph docker["Docker"]
pgcontainer["`PostgreSQL
container`"]:::containerClass
end
subgraph shell1["Shell"]
subgraph python["pytest"]
pytest["runner"] -- find tests --> capi_unit["`test directory
unit test cases`"] -- call --> chiller_api["`Chiller API
code under test`"]
end
end
end
end
end
class ubuntu userClass
pgcontainer -- read/write --> db
chiller_api -- SQL port 5432 --> pgcontainer
```
```
. ~/run/chiller_api/.venv/bin/activate
export CHILLER_DB_PASSWORD=`cat ~/pgpass`
pip3 install -r ~/chiller/api/requirements.txt
pip3 install -r ~/chiller/api/test-requirements.txt
cd ~/chiller/api/chiller_api/test
pytest > pytest.out
```
To re-initialize the database:
```
cd ~/chiller/api
flask --app chiller_api init-db
```
Current procedure:
- `make api-package` builds the api package
```
cd ~/chiller/api
python3 -m build
```
Note: you must also start up the postgres container and set the password environment variable.
```
cd ~/test
python3 -m venv testpkg
. testpkg/bin/activate
export CHILLER_DB_PASSWORD=`cat ~/pgpass`
pip install ~/chiller/api/dist/chiller_api-0.0.1-py2.py3-none-any.whl
cd testpkg/lib/python3.12/site-packages/chiller_api/test
pytest
deactivate
cd ~/test
rm -rf testpkg
```
The chiller_frontend application is a copy of the Flask examples/tutorial directory, then heavily modified. It is a pure Flask app: no Connexion, and no dependence on the OpenAPI specification for routing as used by chiller_api (through Connexion).
Unit testing is performed by patching out the calls to the chiller_sdk functions login_user, create_user, list_movies, and add_movie.
The unit tests are modified versions of the tests defined in the Flask tutorial section on testing.
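A hedged sketch of what that patching can look like with pytest's monkeypatch. The module path chiller_frontend.chiller, the return shapes, and the route are assumptions, not the project's actual names:
```python
# Illustrative only -- patch the SDK helpers the frontend calls so unit tests
# never touch the network or a running chiller_api.
import pytest


@pytest.fixture
def fake_sdk(monkeypatch):
    recorded = {}

    def fake_login_user(username, password):
        recorded["login"] = (username, password)
        return {"token": "unit-test-token"}   # return shape is a guess

    def fake_list_movies(token):
        return [{"title": "Jaws"}]            # canned data for page rendering

    # "chiller_frontend.chiller" is a hypothetical module path.
    monkeypatch.setattr("chiller_frontend.chiller.login_user", fake_login_user)
    monkeypatch.setattr("chiller_frontend.chiller.list_movies", fake_list_movies)
    return recorded


def test_login_calls_sdk(client, fake_sdk):   # `client` fixture as in the Flask tutorial tests
    client.post("/auth/login", data={"username": "u", "password": "p"})
    assert fake_sdk["login"] == ("u", "p")
```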
```mermaid
flowchart LR
classDef containerClass fill:#bfb
classDef dbClass fill:#8df
classDef userClass fill:#fcc
classDef ingressClass fill:#ffa
subgraph devpc["Developer PC"]
subgraph vmware["VMWare Workstation Pro"]
subgraph ubuntu["Ubuntu 24.04 LTS"]
direction LR
subgraph shell1["Shell"]
subgraph python1["pytest"]
pytest["runner"] -- find test cases --> frontend_int["`tests directory
unit test cases`"] --> chiller_fe["`Chiller frontend
code under test`"]
chiller_fe --> mt_sdk["`mock Chiller SDK
monkey-patched`"]
end
end
end
end
end
class ubuntu userClass
```
```
. ~/run/chiller_frontend/.venv/bin/activate
pip3 install -r ~/chiller/frontend/requirements.txt
pip3 install -r ~/chiller/frontend/test-requirements.txt
pip3 install -e ~/chiller/sdk
```
Current procedure:
- `make frontend-unit` runs the frontend unit test (venv does NOT have to be active)
```
cd ~/chiller/frontend/tests
pytest
```
Current procedure:
- `make frontend-package` builds the frontend package
```
cd ~/chiller/frontend
python3 -m build
```
Note, we run both unit and integration tests here as part of the test. First, repeat the [section](https://github.com/lago-morph/chiller/wiki/Linux-local-development-platform#test-install-package-2) below on getting the api server running from an install package. Make sure the postgres container is running.
```
cd ~/test
python3 -m venv fe-run
. fe-run/bin/activate
pip install ~/chiller/sdk/dist/chiller_api_client-0.0.1-py2.py3-none-any.whl
pip install ~/chiller/frontend/dist/chiller_frontend-0.0.1-py2.py3-none-any.whl
tar -xzf ~/chiller/frontend/dist/chiller_frontend-0.0.1.tar.gz
pip install pytest
deactivate
. fe-run/bin/activate
pytest chiller_frontend-0.0.1/tests
CHILLER_HOST=127.0.0.1:5000 pytest chiller_frontend-0.0.1/integration_tests
```
The chiller_sdk code was generated by swagger-codegen. It also conveniently generates documentation in the form of .md files which provide sample code. The code does not use Flask. The source code has been left unmodified (so far). The tests in sdk/test consist of methods that contain only the `pass` statement, so they are not useful.
The directory sdk/integration_test has been added. It contains basic integration tests to ensure that the API is working (at least for expected behavior) when communicating with a server over HTTP.
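A hedged sketch of the style of test that lives there. The class names below follow the usual swagger-codegen Python client layout (Configuration, ApiClient, DefaultApi), and the operation names echo the SDK calls mentioned elsewhere on this page; treat all of them, and the signatures, as assumptions rather than the project's verified API:
```python
# Illustrative only -- exercise the generated client against a live server.
import os
import chiller_api_client as sdk


def test_create_and_login_user():
    host = os.environ.get("CHILLER_HOST", "127.0.0.1:8080")
    config = sdk.Configuration()
    config.host = f"http://{host}"        # base path may differ in the real spec

    api_client = sdk.ApiClient(config)
    api = sdk.DefaultApi(api_client)      # API class name is a guess
    # Operation names come from chiller_api.yaml; signatures here are hypothetical.
    api.create_user(user={"name": "alice", "password": "pw"})
    token = api.login_user(user={"name": "alice", "password": "pw"})
    assert token is not None
```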
```mermaid
flowchart LR
classDef containerClass fill:#bfb
classDef dbClass fill:#8df
classDef userClass fill:#fcc
classDef ingressClass fill:#ffa
subgraph devpc["Developer PC"]
subgraph vmware["VMWare Workstation Pro"]
subgraph ubuntu["Ubuntu 24.04 LTS"]
direction LR
db[("`Postgres files
Docker volume mount`")]:::dbClass
subgraph docker["Docker"]
pgcontainer["`PostgreSQL
container`"]:::containerClass
end
subgraph shell1["Shell 1"]
subgraph python1["pytest"]
pytest["runner"] -- find test cases --> capi_unit["`integration_test directory
test cases`"] --> chiller_sdk["`Chiller SDK
generated code`"]
end
end
subgraph shell2["Shell 2"]
subgraph flask["flask dev server"]
chiller_api["`chiller_api
code under test`"]
end
end
end
end
end
class ubuntu userClass
pgcontainer -- read/write --> db
chiller_sdk -- HTTP --> flask
chiller_api -- SQL port 5432 --> pgcontainer
```
Ensure the chiller_api virtual environment has been used previously so it has all dependencies installed. Make sure the postgres container is running.
```
. ~/run/chiller_api/.venv/bin/activate
cd ~/chiller/api
export CHILLER_DB_PASSWORD=`cat ~/pgpass`
python3 -m chiller_api
```
Note the IP address/port (it should be `127.0.0.1:8080`).
Current procedure:
```
make api-package
make api-image
make start-postgres
make init-postgres
make start-api
```
- `make api-integration` runs the api integration test (venv does NOT have to be active)
Set up the chiller_sdk virtual environment dependencies. Note that the part about installing the chiller_api package needs to be changed to install an actual package, rather than using the development code. In the past the venv has "lost" the chiller_sdk install, and I have had to re-do `pip3 install -e ~/chiller/sdk` when activating from a new shell. I have not explored further why this is or under what circumstances it happens.
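A quick way to see which copy (if any) the interpreter is picking up, assuming the SDK installs a package named chiller_api_client (the wheel name suggests so):
```python
# Shows whether the editable install is visible in the current interpreter
# and where it is being imported from.
import chiller_api_client

print(chiller_api_client.__file__)  # expect a path under ~/chiller/sdk for an editable install
```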
Activate the virtual environment and install dependencies:
```
. ~/run/chiller_sdk/.venv/bin/activate
pip3 install -r ~/chiller/sdk/requirements.txt
pip3 install -r ~/chiller/sdk/test-requirements.txt
pip3 install -e ~/chiller/sdk
```
Run the integration tests (with the sdk venv active). Replace the address in the CHILLER_HOST environment variable if it is different from the default.
```
cd ~/chiller/sdk/chiller_api/integration_test
CHILLER_HOST=127.0.0.1:8080 pytest
```
Current procedure:
- `make sdk-package`

```
cd ~/chiller/sdk
python3 -m build
```
Testing consists of two parts, run in different shells. First we have to get an instance of the api server running, then we have to run the integration tests. Unlike the api server, the tests are not installed as part of the package.
First, get postgres container running.
Get the api server running (shell 1):
```
cd ~/test
python3 -m venv run-api
. run-api/bin/activate
pip install gunicorn
pip install ~/chiller/api/dist/chiller_api-0.0.1-py2.py3-none-any.whl
export CHILLER_DB_PASSWORD=`cat ~/pgpass`
flask --app chiller_api init-db
gunicorn -w 4 'chiller_api:create_app()'
```
Note the output from the gunicorn command. It should say it is being served on 127.0.0.1:8000.

Install chiller_api_client (shell 2):
```
cd ~/test
python3 -m venv run-sdk
. run-sdk/bin/activate
pip install ~/chiller/sdk/dist/chiller_api_client-0.0.1-py2.py3-none-any.whl
```
Extract and run the sdk integration test (in shell 2 with the run-sdk venv):
```
tar -xzf ~/chiller/sdk/dist/chiller_api_client-0.0.1.tar.gz
CHILLER_HOST=127.0.0.1:8000 pytest chiller_api_client/integration_test
```
Getting pytest to "pick up" the package install in the venv can be flaky. Things that inconsistently work (or don't) include doing a `pip install pytest` in the venv after the package is installed, then deactivating and re-activating the environment. I haven't figured out a reproducible sequence for this yet.
```mermaid
flowchart LR
classDef containerClass fill:#bfb
classDef dbClass fill:#8df
classDef userClass fill:#fcc
classDef ingressClass fill:#ffa
subgraph devpc["Developer PC"]
subgraph vmware["VMWare Workstation Pro"]
subgraph ubuntu["Ubuntu 24.04 LTS"]
direction LR
db[("`Postgres files
Docker volume mount`")]:::dbClass
subgraph docker["Docker"]
pgcontainer["`PostgreSQL
container`"]:::containerClass
end
subgraph shell1["Shell 1"]
subgraph python1["pytest"]
pytest["runner"] -- find test cases -->
frontend_int["`integration_tests directory
test cases`"] --> chiller_fe["`Chiller frontend
code under test`"] -->
chiller_sdk["`Chiller SDK
generated code`"]
end
end
subgraph shell2["Shell 2"]
subgraph flask["flask dev server"]
chiller_api["chiller_api"]
end
end
end
end
end
class ubuntu userClass
pgcontainer -- read/write --> db
chiller_sdk -- HTTP --> flask
chiller_api -- SQL port 5432 --> pgcontainer
```
Current procedure:
```
make packages
make images
make start-images
make init-postgres
```
- `make frontend-integration` runs the frontend integration test (venv does NOT have to be active)
```
make stop-images
```
First, get postgres container running.
Get the api server running. See the procedure under the SDK integration test above.
Note the output from the gunicorn command. It should say it is being served on 127.0.0.1:8000.
Set up the venv as you would for the chiller_frontend unit tests.
Then (with the venv from the chiller_frontend unit tests):
```
CHILLER_HOST=127.0.0.1:8000 pytest ~/chiller/chiller_frontend/integration_tests
```
Current procedure:
```
make packages
make images
make start-images
make init-postgres
```
To run so that it is accessible via a web browser (assuming the package test venvs have been set up and have the packages installed):
Shell 1:
```
# make sure that pgdata does NOT exist already!
mkdir pgdata
openssl rand -hex 24 > ~/pgpass
export PGPASSWORD=`cat ~/pgpass`
docker run --rm --name chiller-postgres -e POSTGRES_PASSWORD=$PGPASSWORD -e POSTGRES_DB=chiller -v $PWD/pgdata:/var/lib/postgresql/data -d -p 5432:5432 postgres:16.3-alpine
```
Shell 2:
```
. test/api-run/bin/activate
export CHILLER_DB_PASSWORD=`cat ~/pgpass`
flask --app chiller_api init-db
gunicorn -w 4 --bind 127.0.0.1:8111 'chiller_api:create_app()'
```
Shell 3:
```
. test/fe-run/bin/activate
export CHILLER_HOST=127.0.0.1:8111
gunicorn -w 4 --bind 127.0.0.1:8222 'chiller_frontend:create_app()'
```
The application will be available at http://127.0.0.1:8222/
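An optional sanity check (not part of the repository) to confirm the stack answers before opening a browser:
```python
# Minimal smoke test against the frontend started in Shell 3 above.
from urllib.request import urlopen

with urlopen("http://127.0.0.1:8222/", timeout=5) as resp:
    print(resp.status)  # expect 200 once all three shells are running
```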
We can test all three components in docker images.
```mermaid
flowchart LR
classDef containerClass fill:#bfb
classDef dbClass fill:#8df
classDef userClass fill:#fcc
subgraph devpc["Developer PC"]
browser["web browser"]
subgraph vmware["VMWare Workstation Pro"]
subgraph ubuntu["Ubuntu 24.04 LTS"]
direction LR
db[("`Postgres files
Docker volume mount`")]:::dbClass
subgraph docker["Docker"]
pgcontainer["`PostgreSQL
container`"]:::containerClass
subgraph api_container["gunicorn container"]
chiller_api["chiller_api"]
end
subgraph fe_container["gunicorn container"]
chiller_fe["chiller_frontend"] --> chiller_sdk["`Chiller SDK
generated code`"]
end
end
subgraph shell1["Shell 1"]
subgraph python1["pytest"]
pytest["runner"] -- find test cases --> frontend_br["`browser_tests directory
selenium test cases`"] --> chrome["chromium driver"]
end
end
end
end
end
class ubuntu userClass
class fe_container containerClass
class api_container containerClass
pgcontainer -- read/write --> db
browser -- HTTP port 8222 --> fe_container
chrome -- HTTP port 8222 --> fe_container
chiller_sdk -- HTTP port 80 --> api_container
chiller_api -- SQL port 5432 --> pgcontainer
```
Note, for this to work the .dockerignore files have to be edited to remove the "dist" directories under each component. First, ensure that the latest version of the install packages has been built.
Then build each of the docker images.

api:
```
cd ~/chiller/api
docker build -t chiller_api .
```
frontend:
```
cd ~/chiller/frontend
docker build -t chiller_frontend .
```
Create the network:
```
docker network create chiller-net
```
Start the containers:
```
# make sure that pgdata does NOT exist already!
sudo rm -rf pgdata ~/pgpass
mkdir pgdata
openssl rand -hex 24 > ~/pgpass
export PGPASSWORD=`cat ~/pgpass`
# we expose port 5432 to allow use of psql from the host
docker run --rm --name chiller-postgres -e POSTGRES_PASSWORD=$PGPASSWORD -e POSTGRES_DB=chiller -v $PWD/pgdata:/var/lib/postgresql/data -d -p 5432:5432 --network chiller-net postgres:16.3-alpine
# initialize the database (will move this into an option in chiller_api container later)
# this assumes that this venv was used to test the most recent package install so it has the chiller_api package
. ~/test/api-run/bin/activate
export CHILLER_DB_PASSWORD=`cat ~/pgpass`
flask --app chiller_api init-db
deactivate
# we do not need to expose any ports locally for the api services - we can add "-p 8080:80" if we want to diagnose with curl from the local host
docker run --rm --name chiller-api -e CHILLER_DB_PASSWORD=`cat ~/pgpass` -e CHILLER_DB_HOST=chiller-postgres --network chiller-net -d chiller_api
# we expose port 8222 locally to view from a web browser (or curl)
docker run --rm --name chiller-frontend -e CHILLER_HOST=chiller-api --network chiller-net -p 8222:80 -d chiller_frontend
```
This is run using selenium.
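A hedged sketch of what a test in the browser_tests directory might look like; the URL, page check, and driver setup below are placeholders, not the project's actual test code:
```python
# Illustrative Selenium check against the dockerized frontend on port 8222.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")       # no display needed inside the VM
driver = webdriver.Chrome(options=options)   # assumes a chromium driver is available
try:
    driver.get("http://127.0.0.1:8222/")
    assert "chiller" in driver.page_source.lower()   # placeholder assertion
finally:
    driver.quit()
```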
Current procedure:
```
make packages
make images
make start-all
make init-postgres
```
- `make browser-test` runs the browser test
```
make stop-images
```
```mermaid
flowchart LR
classDef containerClass fill:#bfb
classDef dbClass fill:#8df
classDef userClass fill:#fcc
classDef ingressClass fill:#ffa
subgraph devpc["Developer PC"]
subgraph vmware["VMWare Workstation Pro"]
subgraph ubuntu["Ubuntu 24.04 LTS"]
direction LR
db[("`Postgres files
Docker volume mount`")]:::dbClass
subgraph docker["Docker"]
pgcontainer["`PostgreSQL
container`"]:::containerClass
end
subgraph shell1["Shell 1"]
subgraph python1["pytest"]
pytest["runner"] -- find test cases --> frontend_br["`browser_tests directory
selenium test cases`"] --> chrome["chromium driver"]
end
end
subgraph shell2["Shell 2"]
subgraph flask_fe["flask dev server"]
chiller_fe["`chiller_frontend
code under test`"] --> chiller_sdk["`Chiller SDK
generated code`"]
end
end
subgraph shell3["Shell 3"]
subgraph flask_api["flask dev server"]
chiller_api["chiller_api"]
end
end
end
end
end
class ubuntu userClass
pgcontainer -- read/write --> db
chrome -- HTTP --> flask_fe
chiller_sdk -- HTTP --> flask_api
chiller_api -- SQL port 5432 --> pgcontainer
```