containers support - raeker/ARC-Wiki-Test GitHub Wiki
***Currently, Help Desk does not provide user support for containers. All tickets requiring support must go to their corresponding Unit Support.***
A container is somewhat similar to a compatibility layer, but a container also has limited access to system resources. A containerization system can run many containers on one computer and prevent them from impacting each other's performance, similar to a virtual private server.
In short, containerization is a way to make applications portable, like virtual machines. Containers are gaining popularity because they have better performance than virtual machines: a VM emulates an entire computer, whereas a container shares the kernel of the host system.
None of our clusters support Docker because Docker requires root access to add and remove containers.
If a user wants to use Docker on GL/LH/A2, you can send them this response:
Docker is not suitable for high-performance computing systems because it does not meet our security requirements. We offer a Singularity module instead. To load it, run the following command:
```
module load singularity/X.X.X
```

where `X.X.X` is the version you need. You can view the versions available on the cluster by running `module spider singularity`.
If you already have existing Docker containers you can use them in Singularity:
https://singularity.lbl.gov/docs-docker
That page will help guide you through using Docker containers with Singularity.
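As a quick sketch of that workflow, the commands below pull a Docker Hub image and run a program inside it with Singularity. These only work on a system where Singularity is installed (e.g. after `module load singularity/X.X.X` on the cluster), and the `python:3.9-slim` image is just an illustrative example:

```shell
# Pull a Docker image from Docker Hub and convert it to a
# Singularity image file (.sif) in the current directory.
singularity pull docker://python:3.9-slim

# Run a command inside the converted container.
singularity exec python_3.9-slim.sif python3 --version

# Or open an interactive shell inside it.
singularity shell python_3.9-slim.sif
```

Note that `singularity pull` does not require root, which is why this conversion route works on the cluster while `docker pull` does not.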
There are two options for creating a container to use on Great Lakes; both require building on your local machine:
1) Create your own Docker image, copy it over to Great Lakes, and then run it via Singularity
2) Build a Singularity container directly, move it to Great Lakes, and then run it
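For option 2, a container is built from a Singularity definition file. Below is a minimal illustrative example (the base image and installed packages are placeholders, not a recommendation):

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Commands run inside the container at build time.
    apt-get update && apt-get install -y python3

%runscript
    # Command executed when the container is run.
    python3 --version
```

Building from a definition file requires root, which is why this step happens on your local machine, e.g. `sudo singularity build mycontainer.sif mycontainer.def`. The resulting `.sif` file can then be copied to Great Lakes and run there without root.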
Singularity is containerization software that does not require root access to run containers. Flux supports Singularity.
Docker containers can be converted to Singularity, but in most cases you'd be better off building a Singularity container from scratch.
This section sketches two workflows for upgrading Singularity: one for regular upgrades and another for security-related updates. These actions will be codified in a Targetprocess template.
Steps common to both workflows:
- The team recognizes there is an upgrade that is desirable. This starts a new TP feature, which will be based on this outline.
- The systems team builds and installs the new Singularity version.
- The software team adds Lmod support for the new Singularity version.
- Determine whether documentation updates are required.

Security-related update:
- Announce the Singularity update to Unit Support.
- The software team sets the new default version.
- The software team immediately removes the Lmod modules for all at-risk Singularity versions.
- After 2 weeks (or another guarantee that no code is still using them), the systems team removes all at-risk Singularity binaries.

Regular upgrade:
- Announce the Singularity update to Unit Support with a 1-week deadline to verify/test.
- The software team sets the new default version after the 1-week window.
- The software team updates Lmod to keep only versions N through N-2.
- After 2 weeks, the systems team removes versions N-3 and older.