Development Concepts
The following sections describe the concepts applied to development of this software application. Many of these concepts have been adapted from Agile best practices. Each section briefly describes the development concept followed by how it was adjusted for this application's quality system.
Contents
- Use Cases & Requirements
- Test Driven Development
- Backlog & Sprint Planning
- Branching & Continuous Delivery
- Verification & Validation
Use Cases & Requirements
As presented in 21 CFR 820.30(f-g), this application was developed according to a specification of input software requirements. These requirements are listed under Requirements and are grouped into interface, functional, and non-functional (performance, compatibility, documentation, etc.) requirements. To define these requirements, a series of use cases were developed to summarize the high-level goals of the application in a manner that is clear to the non-developer members of the design team.
To prevent scope creep and encourage code simplicity, each software requirement must be linked to one use case and to a specific step within that use case's course of events. Requirements that cannot be traced to a user-centric use case are not allowed.
Development of use cases and requirements is the first step in the software development cycle.
Test Driven Development
To facilitate a short development cycle, the next step following requirements specification was to develop a series of unit tests. Each unit test is designed to test one requirement or a small set of requirements (often one interface and one functional requirement). The Traceability Matrix documents this relationship. Additional unit tests are added until all requirements are covered.
Next, for each unit test, one or more positive and/or negative conditions were established to test the application in development and assert that it meets the referenced software requirements. Positive conditions represent tests that pass valid input data and verify that a valid result is returned, while negative conditions represent tests that either pass invalid data and verify that a valid result is not returned, or pass valid data and verify that an invalid result is not returned. Where applicable, reference data may be established to allow comparison of the code unit's results to a priori expected results that are independently validated outside of the application.
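As a minimal sketch of this pattern, the MATLAB snippet below shows one positive and one negative condition evaluated against reference data. The anonymous analysis function, variable names, reference value, and tolerance are illustrative assumptions only and do not represent the actual contents of UnitTest().

```matlab
% Sketch of one unit test with a positive and a negative condition. The
% analysis function is a trivial stand-in so the example is self-contained.
analyze = @(d) struct('flatness', 100 * (max(d) - min(d)) / (max(d) + min(d)));

% A priori expected result, assumed to be validated outside the application
reference.flatness = 0.99;

% Positive condition: valid input data returns a valid result that matches
% the reference data within a stated tolerance
validDose = 1 + 0.02 * cos(linspace(-pi/2, pi/2, 201));  % synthetic profile
result = analyze(validDose);
assert(abs(result.flatness - reference.flatness) < 0.01, ...
    'Positive condition failed: result deviates from reference data');

% Negative condition: invalid input data must not return a valid result
passed = false;
try
    result = analyze([]);                      % empty (invalid) input
    passed = isempty(result.flatness) || ~isfinite(result.flatness);
catch
    passed = true;  % an error on invalid input is also an acceptable outcome
end
assert(passed, 'Negative condition failed: valid result returned for invalid input');
```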
Each unit test, along with its documentation, is written in the UnitTest() function found within the software repository. Unit tests were written to be as independent as possible; however, to manage the execution time of the tests, some tests use data loaded by prior tests. In these situations, the unit test was designed to revert the data to a standard state prior to starting the next test.
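A minimal sketch of this state management approach is shown below; the structure name and fields are assumptions and do not reflect the actual shared data used by UnitTest().

```matlab
% Shared data loaded once by an earlier unit test and reused by later tests
data = struct('position', -10:0.1:10, 'dose', ones(1, 201));

% Keep an unmodified copy before any dependent test alters the shared data
standardData = data;

% ... a unit test executes and modifies the shared structure ...
data.dose = data.dose * 1.05;

% Revert to the standard state so the next unit test starts from known inputs
data = standardData;
```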
Once the unit tests were established, the application was divided into code units, each designed (where possible) to be exercised by a single unit test. Each code unit was typically organized as a function. In this manner, the unit tests largely drove the organization of the application code; they also helped specify the internal data structures and internal function interfaces.
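The sketch below illustrates how a code unit might be organized as a single function whose inputs and outputs were specified by its unit test first. The function name, structure fields, and formulas are hypothetical and are not the application's actual interfaces.

```matlab
function stats = ComputeUniformityStats(profile)
% ComputeUniformityStats is a hypothetical code unit whose interface
% (a profile structure in, a statistics structure out) was dictated by the
% assertions of its corresponding unit test.
%   profile: structure with fields 'position' (cm) and 'dose' (relative)
%   stats:   structure with fields 'flatness' and 'symmetry' (percent)

% Restrict the analysis to the central 80% of the scanned positions
central = abs(profile.position) <= 0.8 * max(abs(profile.position));
d = profile.dose(central);
d = d(:);                                   % force a column vector

% Flatness: (max - min) / (max + min), expressed as a percentage
stats.flatness = 100 * (max(d) - min(d)) / (max(d) + min(d));

% Symmetry: maximum difference between mirrored points, as a percentage
stats.symmetry = 100 * max(abs(d - flipud(d))) / mean(d);
end
```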
Backlog & Sprint Planning
Once the application framework was outlined according to the unit tests (discussed above), the next step in the development cycle was to establish a backlog of development tasks to fill in each code unit. The GitHub Issues feature was used for this purpose. Each task was given a summary and a type (enhancement, bug, question, etc.) and was assigned to a developer. Each task was also assigned a relative size, an estimate of the amount of time necessary to complete it.
Next, tasks were assigned to a milestone, where each milestone represented a development sprint. Sprint durations were typically two to three weeks, although this depended largely on the availability of developers to contribute to this application. Sprint assignment was based on task priority, size, and the duration of the sprint.
Bugs identified during development that could not be fixed prior to the next merge (and unit test execution) were added as new tasks on the backlog and assigned to sprints. Users of this application are encouraged to place questions and/or bugs directly onto the Issues backlog.
Branching & Continuous Delivery
Due to the small size of this application, each sprint typically resulted in a new software release. This helped facilitate rapid software delivery to end users but required developers to minimize merging and testing overhead. To minimize merging, a continuous delivery approach was applied in which code was checked in directly to the master branch. Branching was discouraged, with the exception of tasks considered sufficiently separate from the rest of the application that merging back would be trivial. Code was checked in (or committed) at least daily, sometimes several times per day.
With each code commit, unit tests were run using the automated test harness UnitTestHarness(). This function recursively executes all unit tests on the latest commit and then on each previous software version, for each test suite data set. The test results from the latest commit are compared to each prior version and, along with the performance results, cyclomatic complexity, and code coverage statistics, enable the developer to evaluate not only how the latest code compares to reference data, but also whether (and, often just as important, where) it deviates from prior versions.
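The loop structure of such a harness might resemble the sketch below; the version and data set names, the runTests stand-in, and the comparison logic are assumptions rather than the actual UnitTestHarness() implementation.

```matlab
% Versions under test: the latest commit plus archived prior releases
versions = {'current', 'version_1.0', 'version_1.1'};
dataSets = {'dataset_A', 'dataset_B'};

% Stand-in for executing the full unit test suite against one version and
% one data set (returns a pass/fail flag for each unit test)
runTests = @(version, dataSet) true(1, 10);

% Execute every unit test for each version and test suite data set
results = cell(numel(versions), numel(dataSets));
for v = 1:numel(versions)
    for d = 1:numel(dataSets)
        results{v, d} = runTests(versions{v}, dataSets{d});
    end
end

% Compare the latest commit (row 1) to each prior version and report where
% the results deviate from previously validated behavior
for v = 2:numel(versions)
    for d = 1:numel(dataSets)
        if ~isequal(results{1, d}, results{v, d})
            fprintf('Data set %s: latest commit differs from %s\n', ...
                dataSets{d}, versions{v});
        end
    end
end
```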
To help reduce bug introduction, code refactoring was separated (where possible) from the initial development commits. This allowed the unit test results of the initial feature and of the refactored code to be evaluated and compared separately. Also, if bugs were identified during refactoring, it was easier to revert to a prior functional commit.
Verification & Validation
In addition to the periodic automated unit testing of committed code, at the end of each sprint the application went through verification and validation phases. During verification, the automated unit tests were repeated across multiple host operating systems and MATLAB versions in addition to the primary test platform. Results from these permutations were compared to determine software compatibility. The periodic unit testing and compatibility testing together comprise the verification aspect of testing.
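A sketch of how results from these permutations could be compared is shown below; the platform names and pass/fail vectors are placeholder assumptions defined inline so the example is self-contained.

```matlab
% Pass/fail results for each unit test, as reported on two test platforms
results.R2014a_windows = logical([1 1 1 1 1]);
results.R2014a_macosx  = logical([1 1 0 1 1]);

% Flag any unit test whose outcome differs between the two platforms
mismatches = find(results.R2014a_windows ~= results.R2014a_macosx);
if isempty(mismatches)
    disp('All unit tests agree across platforms');
else
    fprintf('Unit test %d differs between platforms\n', mismatches);
end
```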
Validation testing was performed by packaging the software and testing it in a production environment. Each use case was evaluated by the referenced end users. The key distinction between this and verification testing was that verification tests were generally performed with a pre-defined set of input data by the developers (or automated test harness), whereas validation tests were performed using independent data sets by independent testers.