Testing and Execution Environments
This is old and should be incorporated into the other, more applicable wiki pages.
Types of testing and the environments required
Each component should have a documented contract with each other component.
Definitions
- Artifact states
  - Source - textual (typically) specification of the artifact that must be further processed before it can be executed
  - Build - a ready-to-use chunk of functionality that can be deployed and run as a single unit. These are packages or images - an individual executable program is in an intermediate state that is not tracked or stored
  - Executing - a Build that has been deployed into an execution environment, e.g., a running program or a running virtual machine
- Artifacts
  - API - the rules for interacting with the application's services - in our case the API is a single YAML file in OpenAPI 3 format
  - Service - software that exposes functions that can be invoked using the rules defined by the API
  - SDK - language-specific library code to interface with services via the API. Specifically handles data conversion and communications
  - Front-end - end-user application (in this case web pages) that uses the SDK library to access services
  - Database - source includes format definition (DDL), data load files, and other configuration information for databases
  - Software - externally-developed software with locally-developed configuration and setup processes
  - (Databases are handled as a special case due to the typically tight coupling between the data structure and the application)
- Execution environment (EE) - the hardware, operating system, storage, network infrastructure, and security configuration in which the application executes. Source for this might include scripts, configuration file source information, standards contents, procedures, training material content, etc. Builds might include VM images or non-application container images, PDF files with training or documentation, and configuration files in the format needed to give to a cloud provider
- Test types (primary)
  - Unit test - exercises only the component in question. May include one-way dependencies on other components that are assumed to be already tested (e.g., using the API, running under a deployed execution environment). Finding errors in outside dependencies during unit or integration testing is not an expected situation
  - Integration test - tests the interaction between two or more components. The focus is testing the communication between them, not re-doing unit tests. Tests are at the level of the defined contract between the components. Preferably done with as few components as possible at a time to get coverage on all contracts
  - Deployment test - tests the process of going from build artifacts to executing artifacts using a given execution environment
  - Functional test - tests application functionality as seen by users external to the application
- Other tests
  - Load test - tests the ability of the application to function under simulated high load
  - Chaos test - tests the resilience of the application as different parts fail and need to be restarted
  - Man-in-the-middle test - tests the ability of the application to defend against data security and availability threats from within - compromised services, operating systems, networks, storage, source code repository, and artifact storage locations
API unit testing
The API should be run through a format checker to ensure it is syntactically valid.
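A minimal sketch of such a check, using pyunit and assuming the spec lives at api/chiller.yaml (the path is an assumption); it only confirms that the file parses as YAML and has the top-level keys OpenAPI 3 requires. A fuller check could run the spec through a dedicated validator such as openapi-spec-validator.

```python
# Unit test: the API spec parses as YAML and has the required OpenAPI 3
# top-level structure. The spec path is an assumption for illustration.
import unittest
import yaml

SPEC_PATH = "api/chiller.yaml"  # hypothetical location of the OpenAPI 3 file

class TestApiSpec(unittest.TestCase):
    def setUp(self):
        with open(SPEC_PATH) as f:
            self.spec = yaml.safe_load(f)

    def test_spec_is_a_mapping(self):
        self.assertIsInstance(self.spec, dict)

    def test_required_top_level_keys_present(self):
        for key in ("openapi", "info", "paths"):
            self.assertIn(key, self.spec)

    def test_declares_openapi_3(self):
        self.assertTrue(str(self.spec["openapi"]).startswith("3."))

if __name__ == "__main__":
    unittest.main()
```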
Service unit testing
Uses a unit testing framework to directly test service functions.
Optional: in the case of services that require a runtime environment (e.g., services hosted in a web server, WSGI services), ensure that the development service runner can start up and expose a network interface for the service.
As large a subset as is practical of the unit tests should require only externally-visible interfaces, so they can be reused for SDK integration testing.
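A sketch of a direct service unit test with pyunit, assuming the service is a Flask app built by a create_app() factory in a chiller.service module and that it exposes a /health endpoint; the module, factory, and endpoint names are all assumptions.

```python
# Unit test of the service through the Flask test client (no network needed).
# chiller.service, create_app, and /health are assumed names.
import unittest

from chiller.service import create_app  # hypothetical app factory

class TestServiceHealth(unittest.TestCase):
    def setUp(self):
        self.app = create_app(testing=True)
        self.client = self.app.test_client()

    def test_health_returns_ok(self):
        resp = self.client.get("/health")
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(resp.get_json().get("status"), "ok")

if __name__ == "__main__":
    unittest.main()
```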
SDK unit testing
Test with simulated/mock communications calls (e.g., mock out the actual HTTP requests/responses over the network). I am unsure how to do this; one possible approach is sketched below.
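One possible approach uses unittest.mock: patch the HTTP library the SDK uses (requests here is an assumption) so no real traffic is sent, then assert both the outgoing call and the data conversion. The chiller_sdk module and its get_user function are hypothetical.

```python
# SDK unit test with the HTTP layer mocked out; chiller_sdk.get_user and its
# use of requests are assumptions for illustration.
import unittest
from unittest.mock import Mock, patch

import chiller_sdk  # hypothetical SDK module

class TestSdkGetUser(unittest.TestCase):
    @patch("chiller_sdk.requests.get")
    def test_get_user_converts_response(self, mock_get):
        mock_get.return_value = Mock(
            status_code=200,
            json=lambda: {"id": 42, "name": "alice"},
        )

        user = chiller_sdk.get_user(42)

        # the SDK should have issued exactly one HTTP call
        mock_get.assert_called_once()
        # and converted the JSON payload into its return value
        self.assertEqual(user["id"], 42)
        self.assertEqual(user["name"], "alice")

if __name__ == "__main__":
    unittest.main()
```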
Front-end unit testing
The rest of my notes - I haven't processed these yet
dev or GitHub single-VM back-end unit testing (local DB) - unit tests use pyunit with testing simulation
dev or GitHub single-VM front-end unit testing (API mocked) - unit tests use pyunit with testing simulation
dev or GitHub single-VM integration testing (front-end, back-end, separate SQL, on one VM) - end-to-end integration testing uses the built-in execution environment and a separate process for the DB (Postgres)
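A sketch of what one such end-to-end check might look like, assuming the back-end is already running on localhost:5000 and Postgres is the separate local process; the URL, DSN, endpoint, and table name are all assumptions.

```python
# Integration test: create a record through the running back-end, then confirm
# it was persisted by the separate Postgres process. URL/DSN/table are assumed.
import unittest

import psycopg2
import requests

API_URL = "http://localhost:5000"                       # assumed back-end address
DB_DSN = "dbname=chiller user=chiller host=localhost"   # assumed Postgres DSN

class TestCreateUserRoundTrip(unittest.TestCase):
    def test_created_user_is_persisted(self):
        resp = requests.post(f"{API_URL}/users", json={"name": "alice"})
        self.assertEqual(resp.status_code, 201)
        user_id = resp.json()["id"]

        with psycopg2.connect(DB_DSN) as conn, conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()

        self.assertIsNotNone(row)
        self.assertEqual(row[0], "alice")

if __name__ == "__main__":
    unittest.main()
```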
single-VM GitHub-hosted minikube k8s integration testing - k8s deployment (Helm charts, application secrets); gunicorn for Flask, nginx as reverse proxy; GitHub Actions variables for application secrets; Kapitan for configuration management (what alternatives are there? kpt?)
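A possible smoke test for the minikube stage, driven from Python: install or upgrade the Helm release, wait for the rollout, then hit the service through the URL minikube exposes. The chart path, release, deployment, service, and /health endpoint names are assumptions.

```python
# Minikube smoke test: helm install/upgrade, wait for rollout, probe the service.
# Chart path, release/deployment/service names, and /health are assumptions.
import subprocess
import urllib.request

RELEASE = "chiller"               # hypothetical Helm release name
CHART = "deploy/helm/chiller"     # hypothetical chart path
DEPLOYMENT = "chiller-backend"    # hypothetical k8s deployment name
SERVICE = "chiller-backend"       # hypothetical k8s service name (NodePort)

def main():
    subprocess.run(["helm", "upgrade", "--install", RELEASE, CHART], check=True)
    subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}", "--timeout=120s"],
        check=True,
    )
    # minikube prints an externally reachable URL for a NodePort service
    url = subprocess.run(
        ["minikube", "service", SERVICE, "--url"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    with urllib.request.urlopen(f"{url}/health", timeout=10) as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
    print("smoke test passed")

if __name__ == "__main__":
    main()
```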
container images, VM images - Packer, Vagrant, Docker
EC2 unmanaged k8s deployment (Terraform, external secrets (e.g., AWS), dynamic secrets - tokens/certs for authn)
EKS - load testing - horizontal scaling
EKS - chaos testing - resilience
EKS - man-in-the-middle security testing (network policies, authz policies, TLS, service mesh) - Open Policy Agent, Istio (maybe), Authnz (maybe)
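For the load-testing stage, one common Python option (not mentioned in these notes) is Locust; a minimal sketch that exercises a couple of read endpoints, with the endpoint paths assumed.

```python
# Minimal Locust load test; endpoint paths are assumptions. Run with, e.g.:
#   locust -f loadtest.py --host https://<eks-ingress-host>
from locust import HttpUser, between, task

class ChillerUser(HttpUser):
    wait_time = between(1, 3)  # seconds of simulated think time

    @task(3)
    def health(self):
        self.client.get("/health")

    @task(1)
    def list_users(self):
        self.client.get("/users")
```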
EKS - production support testing - observability, logging, OAuth providers; fluentd for logging, Prometheus for observability
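A sketch of how the Flask service could expose metrics for Prometheus to scrape, using the prometheus_client library; the metric name and the standalone Flask wiring are assumptions.

```python
# Expose a /metrics endpoint for Prometheus plus a simple request counter.
# Metric name and Flask wiring are assumptions for illustration.
from flask import Flask, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

REQUESTS_TOTAL = Counter("chiller_requests_total", "Total HTTP requests served")

app = Flask(__name__)

@app.before_request
def count_request():
    # counts every request, including /metrics scrapes
    REQUESTS_TOTAL.inc()

@app.route("/metrics")
def metrics():
    return Response(generate_latest(), mimetype=CONTENT_TYPE_LATEST)

@app.route("/health")
def health():
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=5000)
```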
EKS - deployment testing - upgrade to a live system, backups, recovery, rollback, batch maintenance (e.g., dealing with inactive users, archiving data)
EKS - production (not necessarily "the" production, just one set up that way)