CIN Assessment - Team-07-Looney/looney-general GitHub Wiki
Grading Criteria
1. Branching
Achieve manageability of your software project releases by choosing a branching model and corresponding workflow.
2. Pipelines/Actions
Design a deployment pipeline that runs a[n existing open source] software application and generates an automatic build.
3. Releasing
Prove your solution by performing a complete release: a change in code that generates corresponding executables, executing all the steps of a release management cycle.
4. Testing
Guarantee software quality by enabling quality tools and executing unit tests.
1. Branching
1.1 Pull-requests are used to add new software to the main repository
2.1 A branching model exists that supports collaboration between team members
We're using a slightly modified version of GitFlow - the same model, but without a release branch.
A new merge into main acts as a release.
Documentation on Branching model
Part of the tree
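As a minimal sketch of this model, the whole cycle can be replayed in a throwaway repository. Branch and tag names below (feature/map, v1.0.0) are illustrative placeholders, not our real ones:

```shell
#!/bin/sh
# Sketch of our modified GitFlow: no release branch, a tagged merge
# into main IS the release. All names here are illustrative.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo app > app.txt; git add app.txt; git commit -qm "initial commit"
git branch -M main

# development is the integration branch; features branch off it
git checkout -qb development
git checkout -qb feature/map
echo map >> app.txt; git commit -qam "feat: add map"

# a feature returns to development via a pull request (plain merge here)
git checkout -q development
git merge -q --no-ff feature/map -m "Merge feature/map"

# merging development into main IS the release: tag the merge commit
git checkout -q main
git merge -q --no-ff development -m "Merge development (release)"
git tag v1.0.0
```

The `--no-ff` merges keep the feature and release history visible as distinct merge commits in the tree.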
2.2 The branching model supports multiple feature development
Refer to point 2.1 above.
2.3 The branching model supports multiple releases
Since main acts as a release, every time development is merged into main, the merge commit is tagged and marked as a release.
Link to releases
2.4 The branching model supports a hotfix scenario (on released software)
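Under this model a hotfix branches off the released tag on main, not off development, so the fix can ship without pulling in unreleased work. A minimal sketch in a throwaway repository (v1.0.0, hotfix/crash and the file contents are illustrative):

```shell
#!/bin/sh
# Hotfix sketch on released software: branch from the release tag,
# merge back into main as a patch release, then into development.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo ok > app.txt; git add app.txt; git commit -qm "release"
git branch -M main
git tag v1.0.0
git branch development

# the hotfix branches directly off the released main, not off development
git checkout -qb hotfix/crash v1.0.0
echo fix >> app.txt; git commit -qam "fix: crash on startup"

# merge the fix into main and tag it as a patch release
git checkout -q main
git merge -q --no-ff hotfix/crash -m "Merge hotfix/crash"
git tag v1.0.1

# propagate the fix into development so it is not lost in the next release
git checkout -q development
git merge -q main -m "Merge hotfix into development"
```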
3.1 Other branching models/workflows are useful as well but did not fit the team's way of working (elaborate on the differences)
Classic GitFlow
This model is less suitable for our team, as it introduces an additional layer of complexity with a dedicated release branch, which our current approach avoids.
GitHub Flow
Designed around the main branch: all changes start from main as feature branches and are merged back into main after code review. This means there is no distinct, clear indication of when a particular version of the software is officially released.
Trunk-Based Development
Developers commit to short-lived branches (or directly) on a single branch called trunk, main, or master. The main advantage is rapid integration, but this model relies heavily on continuous integration, does not inherently define clear release points, and can cause a lot of merge issues.
2. Pipelines/Actions
1.1 A suitable development pipeline is chosen for the application
1.2 The build process is automatically triggered on push
See above - 1.1 A suitable development pipeline is chosen for the application
2.1 Developers are informed when a build fails
2.2 The development pipeline produces working software
Build pipeline - runs every time there is a push.
Release pipeline - publishes to Docker Hub; runs every time there is a release (i.e. a merge into main).
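The trigger logic can be sketched as follows. This is an illustrative shell condition, not our actual workflow .yml, and the repository created below only simulates what CI would have checked out: a push counts as a release exactly when the checked-out commit on main carries a tag.

```shell
#!/bin/sh
# Illustrative trigger check: is this push a release (tagged commit on main)?
# The repo created here only stands in for a CI checkout.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "ci@example.com"; git config user.name "CI"
echo app > app.txt; git add app.txt
git commit -qm "Merge development (release)"
git branch -M main
git tag v2.0.0

if [ -n "$(git tag --points-at HEAD)" ]; then
  # the real release pipeline would run docker build && docker push here
  echo "release build"
else
  echo "ordinary build"
fi
```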
2.3 The operation of the pipeline is made clear by means of a UML sequence diagram
2.4 (Containerization is applied) The software code is packaged, tested and deployed with all the required dependencies e.g. using Docker
Refer to the .yml and the documentation
3.1 Environmental parameters are managed separately
Each container has a .env example, and unit testing sets the Node environment to test, for example:
Environment secrets
Environment settings
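A hypothetical .env example for one container; the variable names and values below are placeholders, not our real settings. The real secrets stay outside version control, while an example file like this is committed:

```shell
# Illustrative .env example (placeholder names and values only).
# Real credentials are supplied via environment secrets, never committed.
NODE_ENV=development
PORT=3000
DB_HOST=localhost
DB_PASSWORD=changeme
```

Unit tests then override NODE_ENV to test before running, so test and development settings stay separate.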
3. Releasing
1.1 The software is released to the end user
1.2 Every release is based on well formatted user stories
We have a list of issues for every sprint:
For every issue, we have a complete user story in the description:
1.3 A sprint planning is being held, and a sprint plan is made
At the beginning of every sprint (usually on Mondays), we held a sprint planning meeting where we decided which user stories would be developed in the upcoming sprint and assigned them to each of us equally:
At the end of every sprint planning meeting, we would come up with a roadmap that looks like this (our plan for the 2nd sprint):
1.4 The new software is smoke tested
At the end of each sprint, we made a release (see 1.1) and held a user testing session with different potential users, mainly HZ students from other studies. Every time we prepared a test plan, which we followed throughout testing. During these sessions, users would sometimes encounter minor bugs or issues that we thoroughly documented and fixed in the following sprint:
2.1 A feature release has been realized
2.2 An overview exists of all released user stories (and in which release)
Every issue contains a user story in its description. Issues are linked to a milestone for every sprint, and at the end of every sprint a release is made. For example, here is the list of done issues for sprint 3:
That means these issues were included in the sprint 3 release, and their user stories can be checked just by looking at the description of a particular issue:
2.3 The backlog contains more than 90 items
2.4 The new software is tested locally (system test)
On every pull request we specify a testing path that should be followed by the person who reviews the PR. For instance, here is a PR that introduces a new feature - Map. A testing path in the form of a task list was added and followed by the reviewer:
2.5 A release has been rolled-back to an earlier stable version
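Since every release is a tagged merge commit on main, a roll-back can be realized with `git revert`, which restores the previous stable state while keeping history intact. This is an illustrative sketch in a throwaway repository; the tags and file contents are placeholders:

```shell
#!/bin/sh
# Roll-back sketch: revert the broken release commit instead of
# force-pushing an old tag, so main's history stays linear and auditable.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo v1 > app.txt; git add app.txt; git commit -qm "release v1.0.0"
git branch -M main
git tag v1.0.0
echo v2 > app.txt; git commit -qam "release v1.1.0 (broken)"
git tag v1.1.0

# revert the bad release; the repository content matches v1.0.0 again
git revert --no-edit v1.1.0 >/dev/null
git tag v1.1.1   # the roll-back itself ships as a new patch release
```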
2.6 A hotfix release has been realized
2.7 Project documentation is updated for every release (to support onboarding a new developer/tester)
Every sprint we updated our GitHub Wiki with new information about important functionality and documentation about the project itself (e.g. the branching model).
2.8 The new software is tested with the end user (acceptance test)
As mentioned in 1.4, we held a testing session every sprint with our end users. Besides finding new bugs, we would ask testers whether the application was appealing to them (design, functionality, overall impression, etc.; check the image from 1.4 for the detailed feedback that we got).
3.1 A (release schedule) planning is made for the long run, co-ordinated with the end user and developers
The picture of the full roadmap is too big, so it is split up by sprint.
Sprint 1:
Sprint 2:
Sprint 3:
Sprint 3 release specifically:
4. Testing
1.1 Unit tests are executed locally inside of IDE
Example of executing unit tests of Users MS: