The Twelve Factor Application - klagan/learning GitHub Wiki
The twelve-factor application is a methodology for building software-as-a-service applications. The methodology was extended in 2016 by Kevin Hoffman, who added three more tenets, but the original title remained.

The idea here is that one team can manage and change one codebase and affect only the one service.
The service should be self-contained and not rely on the environment for its dependent packages. A good example of this is a Docker container, which embeds all packages internally and relies on the host only for the Docker hosting technology.
Configuration and credentials should reside in the environment. The environment should dictate, in a transparent way, how the code interacts.
Externalise your configuration and credentials from the service in a secure way, for example as environment variables.
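As a minimal sketch, configuration can be read entirely from the environment at startup; the variable names DB_PASSWORD and DB_HOST below are assumptions for illustration, not prescribed names:

```python
import os

# DB_PASSWORD and DB_HOST are hypothetical names for illustration only.
class Config:
    def __init__(self, environ=os.environ):
        # Required credential: fail fast at startup if it is missing.
        self.db_password = environ["DB_PASSWORD"]
        # Optional value with a sensible default.
        self.db_host = environ.get("DB_HOST", "localhost")

config = Config({"DB_PASSWORD": "s3cret"})
```

Because the environment is injected, tests can pass a plain dict while production supplies real variables, and no credential ever lives in the codebase.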
These are resources that the service requires, but they should be controlled by configuration. A database is an example of a backing service, as is another service. The location of the backing service should be held in configuration of some sort. This allows the service to respond to changes in the location of the dependency in real time.
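For example, a backing service's location can be held in a single environment variable (the name DATABASE_URL and the default value below are hypothetical), so relocating the database becomes a configuration change rather than a code change:

```python
import os
from urllib.parse import urlparse

def database_location(environ=os.environ):
    # DATABASE_URL is a hypothetical variable name; the default is a placeholder.
    url = urlparse(environ.get("DATABASE_URL", "postgres://localhost:5432/app"))
    return url.hostname, url.port

# Pointing the service at a different database requires only new configuration.
host, port = database_location({"DATABASE_URL": "postgres://db.internal:6432/app"})
```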
The build, release and run process should be treated as distinct stages. A typical issue arising from incorrect application of this tenet at the build stage is the "works on my machine" problem.
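The separation can be sketched as immutable stages, where a release pairs a fixed build artifact with configuration and the run stage only ever executes a numbered release; the class and field names below are illustrative, not prescribed by the methodology:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Build:
    commit: str
    artifact: str          # the immutable output of the build stage

@dataclass(frozen=True)
class Release:
    build: Build           # a release pairs one build...
    config: dict           # ...with one set of configuration
    version: int           # and is itself immutable and numbered

build = Build(commit="abc123", artifact="app-1.0.tar.gz")
release = Release(build=build, config={"DB_HOST": "db.internal"}, version=1)
```

Because both classes are frozen, a release can never be patched in place; a change to code or configuration produces a new numbered release.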
Applications should run as a single stateless process. This is to help enable horizontal scalability and elasticity.
Be careful with concepts such as a distributed cache. While it is ideal for reducing back pressure, it can affect disposability if the initial loading of the cache takes a long time. It can also use inordinately large amounts of memory and increase the cost of the setup.
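One way to keep the process stateless is to push per-request state into a backing store; in this sketch an in-memory dict stands in for an external cache such as Redis, so any instance behind the load balancer can serve any request:

```python
class SessionStore:
    """Stand-in for an external cache; the process itself holds no state."""
    def __init__(self):
        self._data = {}

    def get(self, session_id):
        return self._data.get(session_id, {})

    def put(self, session_id, state):
        self._data[session_id] = state

def handle_request(store, session_id):
    state = store.get(session_id)             # load state from the backing store
    state["hits"] = state.get("hits", 0) + 1  # do the work
    store.put(session_id, state)              # persist before returning
    return state["hits"]

store = SessionStore()
handle_request(store, "abc")
second = handle_request(store, "abc")  # a different instance could handle this call
```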
Have the host manage the port binding that is exposed to the consumer, and expose the service's interfaces and functions over that specific port.
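A minimal sketch in Python, assuming the host injects the port through a conventional PORT environment variable:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_server(environ=os.environ):
    # The host injects the port; the service just binds to it.
    port = int(environ.get("PORT", "8080"))

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    return HTTPServer(("", port), Handler)

server = make_server({"PORT": "0"})    # 0 asks the OS for any free port
bound_port = server.server_address[1]  # the port actually bound
server.server_close()
```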
The motivation here is to support horizontal scalability, which is cheaper and more versatile than vertical scalability, e.g. horizontal scaling can span nodes where vertical scaling cannot.
The container should be quick to start up and graceful in its shutdown. This is to ensure that scale-up can be achieved rapidly on demand and that scale-down can be achieved in a consistent way.
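A graceful shutdown can be sketched as a SIGTERM handler that flags the main loop to drain in-flight work before exiting; here the signal is simulated directly since no orchestrator is present:

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Flag the main loop to stop accepting new work, finish in-flight
    # requests, release resources, and then exit cleanly.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the signal an orchestrator sends when scaling the service down.
handle_sigterm(signal.SIGTERM, None)
```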
As production is the environment in which the service will run, all environments should resemble production as closely as possible.
In the same way that every commit should be considered a release candidate, every environment should be able to behave as if it were production.
Logs should be treated as an event stream - a sequence of events emitted from an application in time order. A cloud native application writes all of its logs to stdout and stderr. This enables the cloud provider to redirect this stream to any resource without affecting the service.
In this way, we can direct the stream to multiple different resources for different purposes, e.g. analytics, monitoring, alerting and archival.
Another advantage of this model is that we can elastically scale the service. Orchestration has the responsibility of managing the containers, but the containers themselves have no idea where they are being hosted. If a container were aware of this, it would bear the responsibility of directing its output depending on its location.
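A sketch of this using Python's standard logging module, writing the event stream to stdout and leaving all routing decisions to the platform:

```python
import logging
import sys

# All log events go to stdout as a single time-ordered stream; the platform,
# not the service, decides whether that stream feeds files, analytics,
# monitoring, alerting or an archive.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("service")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order accepted")  # no file paths or destinations appear in the code
```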
When faced with needing to add an administrative process one should consider whether the process could be avoided by re-architecting the service.
An example of this is an administrative task that is scheduled on a timer. This could be embedded in the service to run every day at a specific time. The issue with this is that if we scale the service up to ten instances, we now have the same administrative task running ten times which is not desirable.
We could instead host an administrative endpoint which could be secured and called on by an external timer managed by a separate operator. This would enable the service to sit behind a load balancer and still be run once.
Alternatively, we may be able to interact with the backing service directly e.g. housekeeping tasks directly on the database.
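The secured-endpoint approach might be sketched as follows; the token value and the task are hypothetical, and a real service would take the token from configuration rather than hard-coding it:

```python
import hmac

# Hard-coded here only for illustration; a real service would read this
# from configuration (see the config tenet).
ADMIN_TOKEN = "t0ken"

def run_housekeeping(presented_token):
    # compare_digest avoids timing side channels when checking the token.
    if not hmac.compare_digest(presented_token, ADMIN_TOKEN):
        return 403, "forbidden"
    # ... perform the scheduled task exactly once ...
    return 200, "housekeeping complete"

status, body = run_housekeeping(ADMIN_TOKEN)
```

An external scheduler calls this once, so the task runs a single time no matter how many instances sit behind the load balancer.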
We should support the concept of autonomous services and teams by providing documented interfaces through which other services can interact with ours.
API design and development is a disciplined process and promotes standards or conventions that can help standardise integration.
Telemetry supports the goal of observability and covers:
- application performance monitoring (apm)
- domain specific telemetry
- health and system logs
Application performance monitoring may tell an operator how many HTTP requests were made in the last minute. Domain specific telemetry may tell an operator how many items have been purchased in the past day. Health and system logs may tell an operator how many restarts a service has made, how many times it scaled up or down etc.
Logs are statements about the lifecycle of the domain in a code context, whereas telemetry is about the operational metrics of the service.
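A toy illustration of the three kinds of telemetry as named counters (the metric names are invented for illustration; a real service would use a metrics library):

```python
from collections import Counter

class Telemetry:
    """A toy metrics registry: each metric is just a named counter."""
    def __init__(self):
        self.counters = Counter()

    def increment(self, name, amount=1):
        self.counters[name] += amount

metrics = Telemetry()
metrics.increment("http_requests")        # application performance monitoring
metrics.increment("items_purchased", 3)   # domain specific telemetry
metrics.increment("restarts")             # health and system events
```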
We should ensure that all endpoints may only be accessed by authorised operators.