Acceptance Testing Strategy for Social Event Mapper
1. Purpose
The purpose of acceptance testing is to validate that Social Event Mapper satisfies the finalized project requirements and is ready for final delivery.
This strategy focuses on end-to-end scenario testing, not unit testing. Unit tests verify isolated functions or modules, while acceptance tests verify complete user-facing workflows across frontend, backend, database, authentication, notifications, and system behavior.
Acceptance testing answers:
Did we build the right system for the users?
This is primarily a validation activity: it checks whether the delivered system meets user needs and the finalized requirements.
2. Final Scope Verdict
2.1 Implemented and Stable
The following areas are considered implemented and stable, and will be covered by acceptance/regression testing:
- Event creation and management
- Event lifecycle: draft, published, updated, cancelled, ended
- Event visibility: public/private
- Guest access control
- Authentication and roles
- Comments
- Ratings
- Bookmarks
- Going status
- Capacity enforcement
- Host profiles
- Notifications
- Multi-location storage
- Equipment requirements
- Categories
- Past events and archiving
- Backend discovery features on the `feat/advanced-discovery-filters` branch
- HTTPS enforcement
- Private event backend enforcement
- Mobile-friendly UI
- No permanent GPS coordinate persistence
2.2 Partial or Pending Final Verification
The following items are partially complete or implemented but still require final acceptance verification:
| Requirement / Feature | Current Status | Acceptance Testing Focus |
|---|---|---|
| FRS-8.2 Ordered itinerary segments | Open issue #149 | Verify backend contract and UI behavior after implementation |
| Discovery UI consumption on web | In progress, issue #156 | Verify advanced filters are usable from the web UI |
| Discovery UI consumption on mobile | In progress, issue #159 | Verify advanced filters are usable from the mobile UI |
| NFR-01 Search response time | Implemented but not measured at scale | Verify search results return within 2 seconds |
| NFR-05 Low server error rate | Implemented but not measured | Verify the HTTP 5xx rate stays below 1% |
| NFR-06 10,000-event dataset support | Implemented but not measured | Verify search still meets the performance target |
| NFR-07 Minimal logging | Pending hardening issue #152 | Verify event actions and errors are logged |
2.3 Suspended and Out of Final Scope
The following features are removed from final implementation scope and will not be acceptance-tested as delivered functionality:
- Reporting events and host profiles
- Admin tooling
- Admin moderation workflows
- Automated CI testing on web and Android
- AI voice agent
- Online meeting events
- Peer-to-peer messaging
These items should be documented in the final report as scope cuts.
3. Testing Approach
Acceptance testing will combine the following approaches:
- Black-box approach: Tests are based on visible system behavior.
- Requirements-based approach: Each test is linked to finalized FRU, FRS, or NFR items.
- Scenario-driven approach: Tests are written around realistic user workflows.
- End-to-end approach: Tests cover complete flows across UI, backend, database, and notifications.
- Expected-vs-actual approach: Each test compares expected behavior with actual observed behavior.
Backend-level checks may be used when the UI cannot fully prove a requirement, especially for:
- Private event access control
- Authentication enforcement
- No GPS persistence
- Logging
- Performance
- Search ranking and filtering correctness
4. User Roles
Acceptance testing will cover the following roles:
| Role | Description |
|---|---|
| Guest | Unregistered user with limited browsing access |
| Registered User | Authenticated user who can interact with events |
| Host | Registered user who creates and manages events |
Admin is not included because admin features have been removed from final scope.
5. Test Environment
Acceptance tests should be executed in an environment close to production.
| Environment Item | Expected Configuration |
|---|---|
| Backend | Deployed FastAPI backend |
| Frontend | Deployed Next.js web frontend |
| Mobile | Android emulator, APK, or physical device |
| Database | Seeded test database |
| Browser | Latest Chrome and Firefox |
| Mobile View | Common mobile widths such as 375px and 390px |
| Protocol | HTTPS |
| Dataset | Normal dataset plus 10,000-event performance dataset |
| Network | Normal connection, with selected slower-network observations |
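The 10,000-event performance dataset listed above can be generated with a small seeding script. The sketch below is a minimal illustration only: the deployment URL, the `POST /events` endpoint, the payload fields, and the auth token are assumptions that must be aligned with the actual backend API.

```python
"""Seed the 10,000-event performance dataset (NFR-06).

Minimal sketch: endpoint path, payload fields, URL, and auth scheme
are assumptions to be replaced with the real backend contract.
"""
import httpx

BASE_URL = "https://staging.example.com/api"  # assumed deployment URL
HOST_TOKEN = "<token-of-seeded-host-account>"  # placeholder credential

def seed_events(n: int = 10_000) -> None:
    headers = {"Authorization": f"Bearer {HOST_TOKEN}"}
    with httpx.Client(base_url=BASE_URL, headers=headers, timeout=30) as client:
        for i in range(n):
            payload = {
                "title": f"Perf Test Event {i}",
                "description": "Synthetic event for NFR-06 load testing",
                "category": "other",
                "capacity": 50,
                "visibility": "public",
            }
            resp = client.post("/events", json=payload)  # assumed endpoint
            resp.raise_for_status()

if __name__ == "__main__":
    seed_events()
```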
6. Test Implementation Tools
Acceptance tests will be implemented using different tools depending on the target platform and requirement type.
| Test Area | Tool | Purpose |
|---|---|---|
| Web end-to-end acceptance tests | Playwright | Automate browser-based user workflows such as login, event creation, discovery, bookmarking, Going status, comments, and notifications |
| Backend/API acceptance checks | Pytest with FastAPI TestClient or HTTP client | Verify backend-enforced behavior such as validation, authentication, private event access control, and recommendation privacy |
| Performance and load tests | k6 or Locust | Verify search response time, 10,000-event dataset behavior, and server error rate |
| Mobile end-to-end acceptance tests | Maestro | Automate native Android user workflows such as login, discovery, event details, bookmarking, Going status, notifications, and mobile-specific navigation |
| Mobile responsive web checks | Playwright device emulation | Verify that the web frontend works correctly on mobile screen sizes |
| Logging and reliability checks | Backend logs, Pytest results, and performance test reports | Verify structured logging, request timing, and HTTP 5xx rate |
Playwright will be the primary tool for web-based end-to-end acceptance testing. It will simulate real user workflows through the browser and compare expected behavior with actual behavior.
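As a concrete illustration, a Playwright acceptance test for the bookmark flow might look like the sketch below. It uses Playwright's Python API; the URL, locators, and credentials are placeholders, since the real selectors depend on the deployed frontend.

```python
"""Sketch of a web end-to-end acceptance test (expected vs. actual).

Assumptions: the deployment URL, route names, accessible labels, and
the test account are placeholders to be replaced with real values.
"""
from playwright.sync_api import sync_playwright, expect

BASE_URL = "https://staging.example.com"  # assumed frontend deployment

def test_registered_user_can_bookmark_event() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Log in as a seeded registered user.
        page.goto(f"{BASE_URL}/login")
        page.get_by_label("Email").fill("test.user@example.com")
        page.get_by_label("Password").fill("test-password")
        page.get_by_role("button", name="Log in").click()

        # Open an event detail page and bookmark it.
        page.goto(f"{BASE_URL}/events")
        page.get_by_role("link", name="Campus Concert").click()
        page.get_by_role("button", name="Bookmark").click()

        # Expected vs. actual: the bookmark state must be reflected in the UI.
        expect(page.get_by_role("button", name="Remove bookmark")).to_be_visible()
        browser.close()
```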
Maestro will be used for native Android end-to-end acceptance testing. It will automate mobile user flows directly on an emulator or physical Android device, including authentication, discovery, event interaction, notifications, and navigation.
Pytest will be used for backend acceptance checks where UI testing alone is not sufficient, especially for access control, privacy, validation, and API-level behavior.
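For example, private event access control can be asserted directly against the API. The sketch below uses pytest with httpx against a deployed backend; the endpoint path, event ID, and expected status codes are assumptions to be aligned with the real API contract.

```python
"""Backend acceptance check: private events must not leak to guests.

Assumptions: endpoint path, seeded event ID, token, and accepted
status codes are placeholders; the real contract may differ.
"""
import httpx

BASE_URL = "https://staging.example.com/api"  # assumed deployment URL
PRIVATE_EVENT_ID = 42                          # assumed seeded private event

def test_guest_cannot_read_private_event() -> None:
    # No Authorization header: simulates an unauthenticated guest.
    resp = httpx.get(f"{BASE_URL}/events/{PRIVATE_EVENT_ID}")
    assert resp.status_code in (401, 403, 404)

def test_uninvited_user_cannot_read_private_event() -> None:
    headers = {"Authorization": "Bearer <token-of-uninvited-user>"}
    resp = httpx.get(f"{BASE_URL}/events/{PRIVATE_EVENT_ID}", headers=headers)
    assert resp.status_code in (403, 404)
    # The response body must not contain private event details.
    assert "Private Test Event" not in resp.text
```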
Performance requirements will be verified using k6 or Locust because they require measurable response times and load conditions.
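A Locust sketch for NFR-01 and NFR-05 could look like the following; the search endpoint and query parameter are assumptions about the API. The 2-second threshold is checked per request, and the aggregate 5xx rate is read from Locust's statistics report after the run.

```python
"""Locust sketch for NFR-01 (search under 2 s) and NFR-05 (5xx rate < 1%).

Assumption: host URL, endpoint, and query parameter are placeholders.
Run against the seeded 10,000-event dataset to also cover NFR-06.
"""
from locust import HttpUser, task, between

class SearchUser(HttpUser):
    host = "https://staging.example.com"  # assumed deployment URL
    wait_time = between(1, 3)

    @task
    def search_events(self) -> None:
        with self.client.get("/api/events?search=music",
                             catch_response=True) as resp:
            if resp.elapsed.total_seconds() > 2:
                # NFR-01: each search must return within 2 seconds.
                resp.failure("search exceeded the 2-second threshold")
            elif resp.status_code >= 500:
                # NFR-05: 5xx responses count as failures; the aggregate
                # failure rate in the report must stay below 1%.
                resp.failure(f"server error {resp.status_code}")
            else:
                resp.success()
```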
Playwright may also be used for mobile viewport testing of the web frontend, but native Android acceptance testing will be handled with Maestro.
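Mobile viewport checks of the web frontend can reuse Playwright's built-in device descriptors, as in the short sketch below (the target URL and the asserted element are placeholders):

```python
"""Mobile responsive web check using Playwright device emulation."""
from playwright.sync_api import sync_playwright

def test_event_list_renders_on_mobile_viewport() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        # "iPhone 12" is a built-in descriptor with a 390px-wide viewport.
        context = browser.new_context(**p.devices["iPhone 12"])
        page = context.new_page()
        page.goto("https://staging.example.com/events")  # assumed URL
        assert page.get_by_role("navigation").is_visible()
        browser.close()
```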
7. Acceptance Criteria
A feature is accepted if:
- The user can complete the intended scenario successfully.
- Actual behavior matches expected behavior.
- The test can be traced to finalized requirements.
- Access control is enforced in both UI and backend.
- Invalid actions are blocked with clear feedback.
- Notifications are delivered when required.
- Private or restricted information is not leaked.
- Critical flows do not produce unexpected server errors.
- Performance thresholds are verified where required.
- Suspended features are not presented as delivered functionality.
8. Requirement Traceability
| Requirement Area | Covered Features |
|---|---|
| FRU-1 / FRS-2 | Event creation, validation, lifecycle, editing, cancellation |
| FRU-2 / FRS-4 | Discovery, map/list views, filtering, sorting, ranking |
| FRU-3 / FRS-5 / FRS-6 | Bookmarking, Going, comments, ratings, host profiles |
| FRU-4 / FRS-3 | Guest restrictions, private event access control |
| FRU-5 / FRS-6 | Host profiles |
| FRU-6 / FRS-7 | Notifications and event updates |
| FRS-8 | Multi-location, itinerary, equipment |
| FRS-10 | Past events and archiving |
| NFR-01 | Search response time |
| NFR-03 | HTTPS |
| NFR-04 | Private event backend protection |
| NFR-05 | Low server error rate |
| NFR-06 | 10,000 event dataset support |
| NFR-07 | Minimal logging |
| NFR-08 | Mobile-friendly UI |
| NFR-09 | No permanent GPS persistence |
FRS-9.1 and FRS-9.2 reporting are excluded because reporting and admin tooling are suspended from final scope.
9. Test Case Documentation Format
Each acceptance test case should include:
| Field | Description |
|---|---|
| Test Case ID | Unique ID such as TC-ACC-EVENT-01 |
| Title | Short name of the test |
| Priority | Critical, High, Medium, or Low |
| Related Requirements | Requirement IDs covered by the test |
| Preconditions | Required system state before execution |
| Test Data | Exact test values |
| Test Steps | Ordered execution steps |
| Expected Result | Correct system behavior |
| Actual Result | Observed result during execution |
| Status | Pass or Fail |
| Defect ID | Related issue ID if failed |
| Notes | Extra observations |
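For illustration, a filled-in test case might look like this (all values are examples, not recorded results):

| Field | Example Value |
|---|---|
| Test Case ID | TC-ACC-EVENT-01 |
| Title | Registered user bookmarks a published event |
| Priority | High |
| Related Requirements | FRU-3, FRS-5 |
| Preconditions | Registered user account exists; a published public event exists |
| Test Data | User test.user@example.com; event "Campus Concert" |
| Test Steps | 1. Log in. 2. Open the event detail page. 3. Click Bookmark. |
| Expected Result | The event appears in the user's bookmarks |
| Actual Result | To be filled during execution |
| Status | To be filled during execution |
| Defect ID | N/A unless failed |
| Notes | - |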
10. Regression Testing Strategy
Regression testing will be performed after major changes, especially after:
- Merging advanced discovery filters
- Implementing itinerary segments
- Adding structured logging and request timing
- Adding recommendation features
- Adding QR attendance
- Adding post-event review
- Updating web/mobile discovery clients
The regression set should include:
- Register and login
- Create and publish event
- Edit event
- Cancel event
- Browse events in map/list view
- Apply discovery filters
- Open event detail
- Bookmark event
- Mark event as Going
- Enforce capacity
- Comment on event
- Rate host
- View host profile
- Receive update notification
- Receive cancellation notification
- Verify private event restriction
- Verify recommendation privacy
11. Defect Reporting Strategy
If an acceptance test fails, a defect report must be created.
A defect report should include:
- Bug title
- Related requirement ID
- Related test case ID
- Environment
- User role
- Preconditions
- Exact reproduction steps
- Test data
- Expected result
- Actual result
- Screenshot or recording if useful
- Severity
- Priority
- Status
Defects must be reproducible and traceable.
12. Acceptance Decision Criteria
The system can be accepted if:
- All critical acceptance tests pass.
- At least 90% of high-priority acceptance tests pass.
- Event creation, lifecycle, discovery, interaction, and notification flows work end-to-end.
- Guest, registered user, and host permissions are enforced.
- Private event details are protected at backend level.
- Capacity limits are enforced correctly.
- Discovery filters work on backend and are correctly consumed by web/mobile clients.
- Recommendation features do not expose private, cancelled, ended, or unauthorized events.
- QR attendance and post-event review flows work if included in final delivery.
- NFR-01, NFR-05, NFR-06, and NFR-07 are measured and verified.
- Suspended features are clearly documented as out of scope.
The system should be rejected or delayed if:
- Guests or unauthorized users can access restricted event details.
- Users can bypass authentication for protected actions.
- Event creation, discovery, or event detail pages do not work.
- Capacity enforcement fails.
- Cancellation/update notifications do not work.
- Recommendation features expose unauthorized information.
- Search performance does not meet the 2-second threshold.
- Unexpected server errors exceed acceptable limits.
- Critical workflows produce unrecoverable errors.