DEPRECATED: Packaging via compose templating - sonchang/cattle GitHub Wiki
We currently have 'Add Service'. I'm thinking that, moving forward, this can become 'Create Service' or 'Author Service'.
While configuring the service, for certain fields, the user can insert a macro instead of an actual value. These macros can be used in fields such as:
- service's labels
- command arguments
- port mappings
- docker image
- shared volumes
- health checks
- etc...
Also, a macro can be scoped either to the service or per instance. (We'll still need to figure out exactly what this will look like, especially for multi-value fields like port mappings or shared volumes.)
We can have special macros whose values are supplied by Cattle rather than entered by the user. This is very useful for things like IP addresses for clustered applications. (Once again, the specifics and syntax for this will have to be fleshed out.)
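As a rough sketch, a service definition with user-entered macros might look like the following (the `${...}` syntax and the field names here are illustrative, not a committed design):

```yaml
web:
  image: ${app_image}
  ports:
    - "${public_port}:8080"
  labels:
    tier: ${tier_label}
  command: serve --workers ${worker_count}
```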
Our existing .zip of docker+rancher compose .yml files can essentially be the "package" for an app.
In the UI, when a user wants to start an app, the user can either manually enter all the values to be substituted for the macros in the compose .yml template files, or specify an external source of the config for the compose templates (we could potentially supply an XML or YML file format for the template values). This file could be pulled down from a number of different file servers (AWS S3, saltstack file server, SFTP, etc.).
Using rancher-compose, the user can manually enter the template values or potentially specify a source for the values. This allows many different options for scripting and automation.
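To illustrate the scripting angle, here is a minimal sketch of substituting template values from a config source, using only stdlib Python. The `${...}` macro syntax and the key=value values-file format are assumptions for illustration, not a committed design:

```python
# Sketch: fill ${...} macros in a compose template from key=value config.
# In practice the values text could be pulled from S3, SFTP, etc.
from string import Template

template_text = """\
zk:
  image: ${zookeeper_image}
  ports:
    - "${zookeeper_client_port}:2181"
"""

values_text = """\
zookeeper_image=sonchang/zookeeper
zookeeper_client_port=2181
"""

# Parse the key=value lines into a substitution map
values = dict(line.split("=", 1) for line in values_text.splitlines() if line)

rendered = Template(template_text).substitute(values)
print(rendered)
```

The same substitution step would run identically whether the values came from the UI, a local file, or a remote file server, which is what makes the external-source option attractive for automation.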
One common configuration for clustered apps is to have all the hosts in that cluster listed out in the configuration file for each node.
For example, in a zookeeper cluster (quorum), each server may have a config file that looks like the following:
```
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
```
Each server would also have a separate config file called "myid" to identify which server it is.
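For instance, the node whose address appears as server.1 would carry a myid file containing just "1" (using /tmp below purely for illustration; the real dataDir would be /var/lib/zookeeper):

```shell
# Each ZooKeeper node identifies itself by a one-line "myid" file in dataDir,
# matching the server.N entry that points at it.
mkdir -p /tmp/zookeeper
echo "1" > /tmp/zookeeper/myid
cat /tmp/zookeeper/myid
```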
Cattle, however, schedules the containers within a service serially: the first server container won't yet know which hosts the 2nd and 3rd server containers will be started on. We might be able to solve this problem via DNS.
Here is a potential compose.yml.ftl file (no user variables yet; however, we could potentially replace 2181 with something like ${zookeeper_client_port} and have a separate file provide the metadata describing ${zookeeper_client_port}). Below, nodes and node_id are auto-populated:
```yaml
zk_cluster:
  labels:
    service: zookeeper
    io.rancher.scheduler.affinity:container_label_ne: service=zookeeper
  environment:
    ZOO_KEEPER_CLIENT_PORT: 2181
    ZOO_KEEPER_SERVERS: <#list nodes as node>${node.server_name}:2888:3888<#if node_has_next>,</#if></#list>
    ZOO_KEEPER_MYID: ${node_id}
  image: sonchang/zookeeper
```
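For concreteness, assuming three auto-populated nodes whose server names come out as zoo1 through zoo3 (names assumed for illustration), the template would render for the node with node_id 1 roughly as:

```yaml
zk_cluster:
  labels:
    service: zookeeper
    io.rancher.scheduler.affinity:container_label_ne: service=zookeeper
  environment:
    ZOO_KEEPER_CLIENT_PORT: 2181
    ZOO_KEEPER_SERVERS: zoo1:2888:3888,zoo2:2888:3888,zoo3:2888:3888
    ZOO_KEEPER_MYID: 1
  image: sonchang/zookeeper
```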
Docker images for services will have to take their configuration from labels. One possible mechanism is for a label to specify where/how to pull down the configuration. I wonder whether we want to build a config server? Most likely not, since there are already plenty of existing solutions available.
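As a sketch of the label mechanism (not Cattle's actual design), an entrypoint could parse a label pointing at an external config location and dispatch on its scheme. The label name `config.source` and the URL scheme handling below are hypothetical:

```python
# Sketch: a container label tells the entrypoint where to pull configuration.
# "config.source" is a hypothetical label name, not an existing convention.
from urllib.parse import urlparse

def resolve_config_source(labels):
    """Return (scheme, path) parsed from the hypothetical config.source label,
    or None when the label is absent."""
    source = labels.get("config.source")
    if source is None:
        return None
    parsed = urlparse(source)
    return parsed.scheme, parsed.netloc + parsed.path

labels = {"service": "zookeeper",
          "config.source": "s3://my-bucket/zookeeper/zoo.cfg"}
print(resolve_config_source(labels))  # ('s3', 'my-bucket/zookeeper/zoo.cfg')
```

An entrypoint like this would keep the image itself generic; swapping the fetch backend (S3, SFTP, an existing config service) would only mean handling another scheme.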