QA Testing Summary and Results - JeanCarloLondo/SpectRA GitHub Wiki
Testing and Quality Assurance
The SpectRA mobile application underwent multiple testing stages to ensure stability, usability, and accurate AR-based building recognition. Testing was carried out in both manual and automated environments, complemented by usability studies and bug reporting procedures.
1. Manual Functional Tests
Manual testing was performed throughout development to validate that each User Story met its acceptance criteria. These tests focused on verifying the correct behavior of recognition, UI overlays, navigation, and content display, as specified in the User Stories backlog.
Scope:
- Building recognition accuracy and performance in real-time camera input
- Display and responsiveness of building information overlays
- Role-based content filtering (Student, Staff, Visitor)
- Correct functioning of guest access and session expiration
- Smooth media loading (photos, videos) and proper linking to external resources
Devices Used:
- Android devices ranging from mid- to high-end (Android 10–13)
- Occasional iOS simulation for layout validation
Testing Approach:
- Exploratory testing to identify layout or logic inconsistencies
- Black-box testing based on the functional requirements
- Regression testing after UI or model integration changes
2. Automated Software Testing
To maintain consistency and reduce manual workload, a set of automated tests was implemented directly in the Unity Test Framework (based on NUnit). These tests focused on validating non-visual logic, ML model integration, and data integrity.
Frameworks & Tools Used:
- Unity Test Framework (UTF) for Edit Mode (unit) and Play Mode tests
- NUnit Assertions for validation of expected outputs
- Mock data testing for simulating camera input and ML recognition results
- CI integration with GitHub Actions for automated build and test execution
Examples of Automated Tests:
Model Recognition Test
- Validates that the ML model returns a label with a confidence score above the expected threshold (>0.9).
Data Binding Test
- Confirms that building information retrieved from the dataset correctly updates the UI components.
Session Expiry Test
- Verifies that guest sessions are cleared when the application restarts.
Link Validation Test
- Ensures external URLs stored in the dataset are valid and reachable.
Performance Benchmark Test
- Automatically measures recognition and overlay loading time to ensure they remain below target thresholds.
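As a concrete illustration, a test like the Model Recognition check above can be sketched as a plain NUnit test driven by mock data. The type and member names here (`BuildingClassifier`, `RecognitionResult`, `Classify`) are hypothetical stand-ins, not the project's actual API; a real Unity test would live in a test assembly and exercise the actual ML wrapper.

```csharp
using NUnit.Framework;

// Hypothetical stand-ins for the project's ML wrapper types (assumed names).
public class RecognitionResult
{
    public string Label;
    public float Confidence;
}

public static class BuildingClassifier
{
    // Mock classifier returning a fixed result, standing in for the real
    // model fed with simulated camera input, as described above.
    public static RecognitionResult Classify(float[] mockFrame)
    {
        return new RecognitionResult { Label = "Library", Confidence = 0.95f };
    }
}

public class ModelRecognitionTests
{
    [Test]
    public void Classify_ReturnsLabelAboveConfidenceThreshold()
    {
        RecognitionResult result = BuildingClassifier.Classify(new float[] { 0f });

        Assert.IsNotNull(result.Label);
        // Acceptance criterion from the QA plan: confidence > 0.9
        Assert.Greater(result.Confidence, 0.9f);
    }
}
```

The same pattern (mock input, assert on output) extends to the Data Binding and Session Expiry tests, with the Unity-dependent parts isolated behind mockable interfaces.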
Continuous Testing Pipeline:
- Every push to the main branch triggers Unity’s automated tests.
- Build logs and coverage reports are stored in /QA/auto-tests/results/.
- Failing tests automatically notify the team via GitHub Actions.
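A pipeline like the one described could look roughly like the following GitHub Actions workflow. This is a sketch based on a common community setup (the `game-ci/unity-test-runner` action); the exact action, secrets, and artifact paths are assumptions, not the project's actual configuration.

```yaml
name: unity-tests
on:
  push:
    branches: [ main ]   # every push to main triggers the test run

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # game-ci/unity-test-runner is a widely used community action for
      # running Unity Test Framework suites in CI (illustrative choice).
      - uses: game-ci/unity-test-runner@v4
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
        with:
          testMode: All
          artifactsPath: test-results

      # Persist logs and reports, mirroring the /QA/auto-tests/results/ layout.
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: test-results
```

Failure notifications then come for free from GitHub Actions' default behavior of alerting on failed workflow runs.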
3. Bug Report
All issues discovered during testing were logged and tracked. Each bug entry includes the date, reporter, related User Story, severity, priority, and reproduction steps, following the standard QA format. Bugs were grouped by their related User Story, giving clear traceability between functional goals and reported issues. See the full list here: SpectRA Bug Report Log
4. Usability Tests
After completing the MVP build, a usability test session was conducted with selected participants representing different roles (students, visitors, staff). The goal was to assess overall user satisfaction, ease of navigation, and visual clarity of the AR overlays.
Methodology:
Participants were given a guided scenario to follow (locate buildings, access information, use camera recognition). Observers recorded task completion time, interaction errors, and satisfaction ratings. A structured usability questionnaire (Google Form) was used at the end of the test to gather feedback.
Main Findings:
- Users appreciated the AR overlay design and simplicity.
- Early versions had minor visual misalignment and small UI elements, later corrected.
- Most participants described the experience as “smooth” and “intuitive.”
Result:
The usability evaluation confirmed that SpectRA achieved a functional and user-friendly MVP for campus navigation and information access.