A Use Case is defined as a written description of a series of tasks performed by a user. It begins with the user’s goal and ends with said goal being met. It outlines, from the user’s perspective, how the system should respond or react to certain requests and interactions.
Use cases add value by identifying how an application should behave and what could go wrong. They also provide a better understanding of the users' goals, helping define the complexity of the application and enabling a better identification of requirements.
This document is the result of a series of conversations conducted with ARIA-AT contributors and stakeholders. It will serve as the foundation for defining the requirements and user interface for the project.
ARIA-AT Test Runner Use Cases
Initiating a test run
Actor
Test admin
Use Case Overview
After a test plan has been designed, reviewed, and added to the system, the test admin can initiate a test run by selecting a test design pattern, an assistive technology and browser combination, and two testers to execute the test run.
Trigger
A test developer has designed new tests and test plans that have been reviewed, approved by the assistive technology developers, and added to the system.
Precondition
Test plans and tests have gone through a review process.
Basic flow
Create a test run suite
Description
This is the main success scenario. It describes the situation where only creating a test run suite and adding it to the testers' queue is required.
1
Test admin navigates to the "Initiate Test Run" page from the main menu.
2
Test admin selects a test design pattern from a list.
3
Test admin selects one assistive technology and a browser combination from two dropdown menus.
4
Test admin submits the test run to the testers' queue by clicking the "Submit test run" button.
Alternative Flow 3A
Test admin selects more than one assistive technology and browser combination
Description
This scenario describes the situation where the test admin needs to add more than one assistive technology and browser combination.
3A1
Test admin selects the first assistive technology and browser combination from two dropdown menus.
3A2
Test admin clicks the "Add AT/Browser combination" button, below the first AT/browser combination selected, which displays two more dropdown menus to make another selection.
3A3
Test admin submits the test run to the testers' queue by clicking the "Submit test run" button.
Alternative Flow 3B
Test admin assigns a test run to testers
Description
This scenario describes the situation where the test admin assigns a test run to two testers.
3B1
Test admin selects an assistive technology and browser combination from two dropdown menus.
3B2
Test admin selects two testers from a list of people who meet the requirements for the previously selected assistive technology and browser combination.
3B3
Test admin submits the test run to the testers' queue by clicking the "Submit test run" button.
Initiating a Test Run: Test Design Pattern list
The "Initiate Test Run" page contains a list of test design patterns available for the test admin to initiate test runs. These design patterns display different details depending on the state they are in.
States
New. When a new test design pattern has been added to the system. The label "New" is displayed next to its name.
Initiated. After a test admin has initiated a test run and submitted it to testers. The label "Initiated" is displayed next to its name.
Draft Ready. When a complete or a partial test run has been successfully executed by the testers, the run is ready for review. The label "Draft Ready" is displayed next to its name.
Conflicting Results. Once a test run has been executed, all results with discrepancies are sent back to the test admin and the test design pattern displays only the tests that need to be executed again. The label "Conflicting Results" is displayed next to its name.
Details
Depending on its state, a test design pattern could contain the following details:
Test Design Pattern name
List of the individual tests contained by the design pattern
Name of testers expected to execute the test run
Assistive Technology and Browser Combination required
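The states and details above describe a small data model. As a point of reference for implementers, the list could be modeled roughly as follows; all type and field names here are illustrative assumptions, not part of an actual ARIA-AT codebase.

```typescript
// Hypothetical sketch of the test design pattern model described above.
// Names and shapes are assumptions for illustration only.
type PatternState = "New" | "Initiated" | "Draft Ready" | "Conflicting Results";

interface TestDesignPattern {
  name: string;
  state: PatternState;
  tests: string[];               // individual tests contained by the pattern
  assignedTesters?: string[];    // present once a test run has been initiated
  atBrowserCombination?: string; // e.g. "JAWS + Chrome"
}

// The label shown next to the pattern name reflects its state.
function patternLabel(p: TestDesignPattern): string {
  return `${p.name} (${p.state})`;
}

const example: TestDesignPattern = {
  name: "Checkbox",
  state: "New",
  tests: ["Navigate to checkbox", "Toggle checkbox"],
};
console.log(patternLabel(example)); // "Checkbox (New)"
```

Optional fields capture that details such as assigned testers only exist in some states.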
Executing a Test Run
Actor
Tester
Use Case Overview
Once a test run has been added to the tester's queue, the tester can execute it.
Trigger
The test admin has added a test run to the queue.
Precondition 1
Test runs have been prioritized.
Precondition 2
Test runs have been added to the tester's queue.
Basic flow
Execute test run
Description
This is the main success scenario. It describes the situation where only executing and submitting the tests in a test run is required.
1
Tester navigates to the "Execute Test Run" page from the main menu.
2
Tester selects a test run from the queue.
3
Tester verifies that the version of the assistive technology being used exactly matches the version required and that it is running with a default configuration as specified on the "Test setup requirements" page of the wiki.
4
Tester is presented with the first incomplete test in the sequence of tests in the test run.
5
Tester reads the instructions.
6
Tester follows the steps to execute the test.
7
Tester reviews the results of the test by clicking the "review results" button.
8
Tester goes to the next test in the test run by clicking the "next test" button.
9
Tester submits test run results once they have reviewed the last test in the run by clicking the "submit results" button.
Alternative Flow 6A
Skipping a test
Description
This scenario describes the situation where, for whatever reason, a tester decides to skip a test while executing a test run.
6A1
Tester follows the steps to execute the test.
6A2
Tester clicks the "Skip test" button and a modal is displayed. The modal contains information about the test being skipped.
6A3
Tester clicks the "Open GitHub Issue" button.
6A4
Tester is taken to GitHub so they can create an issue.
6A5
Tester goes back to the application.
6A6
Tester clicks the "Continue" button in the modal.
6A7
Tester goes to the next test in the test run.
6A8
Tester submits test run results once they have reviewed the last test in the run by clicking the "submit results" button.
Alternative Flow 6B
Tester doesn’t have enough time to finish
Description
This scenario describes the situation where the tester, for whatever reason, doesn’t have enough time to finish executing a test run and pauses.
6B1
Tester follows the steps to execute the test.
6B2
Tester needs to pause for whatever reason.
6B3
Tester clicks the "save" button.
6B4
Tester leaves the application.
Alternative Flow 6C
Tester returns to the application to finish the execution of a test run
Description
This scenario describes the situation where the tester has returned to the application to finish the execution of a test run that is in progress.
6C1
Tester opens a test run that has been partially executed.
6C2
Tester is presented with the first incomplete test in the sequence of tests in the test run.
6C3
Tester submits test run results once they have reviewed the last test in the run by clicking the "submit results" button.
Executing a Test Run: Test Run list
The "Execute Test Run" page contains a list of test runs available for the tester to execute. These test runs display different details depending on the state they are in.
States
New. When a new test run has been added to the tester's queue and is ready to be executed. The label "New" is displayed next to its name.
In Progress. When a test run has been paused. The label "In Progress x of z" is displayed next to its name.
Submitted. When a complete or a partial test run has been successfully executed and submitted and the run is ready for review by the test admin. The label "Submitted" is displayed next to its name.
Conflicting Results. When a test run has been executed and discrepancies have been found, the tests with discrepancies are sent back to the testers' queue. The now partial test run displays only the tests that need to be executed again. The label "Conflicting Results" is displayed next to its name.
Details
Depending on its state, a test run could contain the following details:
Test run name
Name of testers expected to execute the test run
Assistive Technology and Browser Combination required
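The tester-facing states above, including the "In Progress x of z" progress label, could be sketched like this; the names and shapes are illustrative assumptions, not an actual ARIA-AT implementation.

```typescript
// Hypothetical sketch of the tester-facing test run states described above.
// Names and shapes are assumptions for illustration only.
type RunState = "New" | "In Progress" | "Submitted" | "Conflicting Results";

interface TestRun {
  name: string;
  state: RunState;
  completedTests: number;
  totalTests: number;
}

// "In Progress" runs show their progress as "x of z", per the description above;
// all other states show the state name alone.
function runLabel(run: TestRun): string {
  if (run.state === "In Progress") {
    return `In Progress ${run.completedTests} of ${run.totalTests}`;
  }
  return run.state;
}

const paused: TestRun = {
  name: "Menu bar",
  state: "In Progress",
  completedTests: 3,
  totalTests: 8,
};
console.log(runLabel(paused)); // "In Progress 3 of 8"
```

Tracking completed versus total tests also supports Alternative Flows 6B and 6C, where a paused run must resume at the first incomplete test.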
Reviewing and Publishing Test Run results
Actor
Test admin
Use Case Overview
Once at least two testers have executed a test run, its results go into draft mode, where the test admin reviews them and later publishes them.
Trigger
At least two testers have executed a given test run and its results are ready to be reviewed.
Precondition 1
At least two testers have executed the test run.
Precondition 2
The test results are in draft mode.
Basic flow
Review and publish test run results
Description
This is the main success scenario. It describes the situation where only reviewing and publishing the results of a test run is required.
1
Test admin navigates to the "Draft Test Run Results" page from the main menu.
2
Test admin selects a draft test run from a list.
3
Test admin goes over the details and reviews the results.
4
Test admin publishes the results by clicking the "Publish Test Run" button.
Alternative Flow 3A
Test Admin wants to know more details about an executed test
Description
This scenario describes the situation where a test admin wants to know more details about an executed test.
3A1
Test admin goes over the details and reviews the results.
3A2
Test admin clicks a test to display more details about its execution. A table is displayed right below the test.
3A3
Test admin publishes the results by clicking the "Publish Test Run" button.
Draft Test Run Results: Test Run list
The "Draft Test Run Results" page contains a list of test runs grouped by test design pattern. These test runs can be either complete or partial runs. A corresponding label is displayed for each test run.
States
Complete Run. When two testers have successfully executed all the tests within a test run. The label "Complete Run" is displayed next to its name.
Partial Run. When two testers have executed a test run and discrepancies have been found. This means one or more tests were sent back to the testers' queue to be executed again, and the test run then becomes partial. The label "Partial Run" is displayed next to its name.
Details
A test run contains the following details:
Test run name
Number of tests
Assistive technology and browser combination used for the execution
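The Complete Run versus Partial Run distinction hinges on whether the two testers' results agree. One possible way to express that classification is sketched below; the names, result representation, and comparison here are assumptions for illustration, not the project's actual discrepancy-detection logic.

```typescript
// Hypothetical sketch of complete/partial run classification based on
// discrepancies between two testers' results, as described above.
interface TestResult {
  testName: string;
  resultsByTester: [string, string]; // serialized results from the two testers
}

// A test has a discrepancy when the two testers' results differ.
function hasDiscrepancy(r: TestResult): boolean {
  return r.resultsByTester[0] !== r.resultsByTester[1];
}

// Tests with discrepancies go back to the testers' queue; if any exist,
// the run is shown as a "Partial Run", otherwise as a "Complete Run".
function runKind(results: TestResult[]): "Complete Run" | "Partial Run" {
  return results.some(hasDiscrepancy) ? "Partial Run" : "Complete Run";
}

const results: TestResult[] = [
  { testName: "Navigate to checkbox", resultsByTester: ["pass", "pass"] },
  { testName: "Toggle checkbox", resultsByTester: ["pass", "fail"] },
];
console.log(runKind(results)); // "Partial Run"
```

Filtering the same array with hasDiscrepancy would also yield the subset of tests to re-queue for the "Conflicting Results" state.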
Draft Test Run Results: Test Run selected
Once a test run has been selected, the user is taken to the detail page of that test run, where, in addition to the details about the test run, they can see the results of the executed run.
Details
The draft test run page contains the following details:
A link to the test case
Name of the testers that executed the test run
Assistive technology and browser combination used for the execution