Infrequently Asked Questions
ocd-environment-webhook couldn't release my stuff, how do I debug?
See debugging on the wiki.
How many git repos should we use?
Teams migrating from other source control management systems often go with the "mega repo" model: one repo with many folders, one per deployable. This isn't natural for git. If you apply release tags to what you release, and independently deploy a dozen different microservices from one repo, all your microservices will carry all the tags. This makes no sense and creates a lot of noise. In general, you should set up a git organisation that contains one git repo per deployable, and one git repo per shared library. This also works better with the git security model and access controls.
Next is the question of where you store the configuration that manages your environment. Distributing this across all the deployable repos makes for complex release management. The approach recommended with OCD is to set up one git repo per environment, which is mapped to one Origin project, which is one Kubernetes namespace. We can then think of an environment as "a deployable", which aligns with the spirit of "infrastructure-as-code".
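As a purely illustrative sketch (the organisation and repo names below are made up, not part of OCD), such a layout might look like:

```sh
# One repo per deployable, one per shared library, one per environment
# (all names below are hypothetical)
git clone git@github.com:my-org/payments-service.git    # deployable microservice
git clone git@github.com:my-org/orders-service.git      # deployable microservice
git clone git@github.com:my-org/shared-http-client.git  # shared library
git clone git@github.com:my-org/env-test.git            # "test" environment -> one project/namespace
git clone git@github.com:my-org/env-production.git      # "production" environment -> one project/namespace
```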
Why not one repo for config and a branch per environment? Because configuration such as database passwords doesn't move between environments, so branches don't help with configuration management. Git security is per repo, not per branch. OCD uses git secret to encrypt sensitive configuration, and that applies per-repo security. This means that OCD goes with a repo per environment, which favours security over the convenience of being able to easily compare two environments with a single git command.
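As a minimal sketch of the per-repo security side (the repo and file names are hypothetical), sensitive configuration in an environment repo is encrypted with git secret so that only the gpg keys added to that repo can read it:

```sh
# Inside an environment repo, e.g. env-production (hypothetical name)
git secret tell ops@example.com     # grant this gpg identity access to the repo's secrets
git secret add db-credentials.env   # hypothetical sensitive config file
git secret hide                     # writes the encrypted copy that gets committed
```

If you do still want to compare two environments, checking both repos out side by side and running something like `git diff --no-index env-test env-production` gets you most of the way there.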
Where’s the OCD Jenkins/pipeline/alternative?
It’s the philosophy of OCD that there isn’t a single “CI+CD” that fits everyone. Using a “CI” that best integrates with your agile tools to track your work is optimal. Using a “CD” that runs from git and automates syncing k8s from git is optimal. You can mix and match, and not have to compromise on a “one stop” solution that doesn’t quite fit your ways of working.
OCD is about continuous delivery of code you tag in git. You are free to use whatever continuous build and test system you want, with any branch and merge strategy you want. When you want to deploy, push a tag and OCD will do a release build.
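For example, cutting a release can be as simple as pushing a tag (the version number below is illustrative; use whatever tagging scheme suits you):

```sh
# Build, test and merge however you like, then release by tagging
git tag -a 1.2.3 -m "release 1.2.3"
git push origin 1.2.3   # OCD picks up the new tag and performs the release build
```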
Why no Dockerfile in your demo repositories?
Using a Dockerfile is an antipattern. It makes security patching of framework code harder. With s2i there is a clean separation of "the application code" and "the frameworks" that run it. Better yet, you can bump the version of the frameworks that are running the code without making any changes to the application git repo.
By way of example, at uniqkey.eu there is a slack reminder to check for new patched versions of the latest php and node.js s2i images. A slackbot sees the reminder, checks what's available against what's in the openshift cluster, and outputs the upgrade commands to the slack channel. Once the update is applied, all the applications built from that image are automatically rebuilt. No coordination is needed to code review a Dockerfile change in each repo.
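As a hedged sketch of what such an upgrade command might look like on OpenShift (the image stream name and registry path are illustrative), re-importing the patched builder image is enough to kick off the rebuilds:

```sh
# Pull the latest patched node.js builder image into the cluster's image stream.
# Any BuildConfig with an ImageChange trigger on this stream rebuilds automatically.
oc import-image nodejs:12 --from=registry.access.redhat.com/ubi8/nodejs-12 --confirm
```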
For a full list of the advantages of s2i builds over Dockerfile builds see the upstream documentation.
Given that using a Dockerfile is an anti-pattern, why do some of your repos have them?
Wow, you caught me. This is because I am building my images using a free build service and not in my own Origin cluster. It is on my to-do list to upgrade to s2i everywhere once OCD has a first stable release.
Our continuous build service doesn’t support s2i, what can I do?
This might not matter depending on:
- Whether your continuous build service supports containers.
- Which languages and frameworks you use and how much they abstract from low-level details.
If your build system can use containers then you can run the official s2i images that have the compile tools. Here is an example of running s2i on circleci.com. It is hard to imagine a modern build tool that won't let you use docker to build your code.
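For instance, a build job that can run containers could invoke the s2i CLI directly; a minimal sketch, assuming s2i and docker are available on the build agent and using illustrative image names:

```sh
# Build the current checkout into an application image using an official builder image
s2i build . registry.access.redhat.com/ubi8/nodejs-12 my-app:latest

# Smoke-test the resulting image before pushing it to your registry
docker run --rm -p 8080:8080 my-app:latest
```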
If your build service really doesn’t do containers but you use a high-level language and cross-platform runtime APIs, you can simply run your unit tests on the frameworks and VMs provided by your service. You are still going to want to debug locally in the final container, particularly to debug how your code picks up env vars. To do this you can:
- Run s2i locally to build and debug in the correct container on your workstation. Annoyingly, it only wants to build a git remote branch by default. You can push to a personal repo or branch and run s2i with --ref naming your personal branch. As a hack, I rename .git and build “.” and it will build my local uncommitted files. A sketch of this option follows this list.
- Expose your openshift registry to pull the latest release image and use docker volumes to mount your local build into it.
- Run minishift locally and build and test there. Personally, I think that every developer having to run minishift locally isn’t a great option as it’s a bit of a monster and easily broken due to conflicts with other tooling. Probably someone has to run it to bootstrap OCD but IMHO only that justifies the headache.
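Here is a minimal sketch of the first option, local s2i builds (the branch, repo and image names are hypothetical):

```sh
# Option A: push your work-in-progress to a personal branch and build that ref
git push origin HEAD:my-debug-branch
s2i build https://github.com/my-org/payments-service --ref my-debug-branch \
    registry.access.redhat.com/ubi8/nodejs-12 payments-service:debug

# Option B (the rename hack): hide .git so s2i copies the working directory,
# uncommitted files included, instead of cloning the remote branch
mv .git .git.hidden
s2i build . registry.access.redhat.com/ubi8/nodejs-12 payments-service:debug
mv .git.hidden .git

# Run the image with the env vars your code expects, to debug how they are read
docker run --rm -e DATABASE_URL=postgres://localhost/dev payments-service:debug
```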