Testing Plan - benjaminsunliu/ConUMap GitHub Wiki

Unit Testing

Unit testing activities and their reports are described sprint by sprint, to maintain a timeline of how testing evolved over the course of development.

Sprints 1 & 2

Unit testing was conducted using Jest. The tests cover the main interactive components of the app:

  • MapViewer - renders the map, markers, clusters, and polygons, and handles interactions such as pressing and dragging the map
  • LocationModal - displayed when location settings are not enabled
  • LocationButton - lets the user center the map on, or request, their location
  • CampusToggle - switches between the SGW and Loyola campuses
  • BuildingInfoPopup - displays information about the selected building

The renderCluster function could not be unit-tested directly because it is defined within the MapViewer component and is not exported. The behaviour was verified manually and indirectly by testing the MapViewer component.
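If direct coverage of such logic becomes important, a common refactor is to extract the pure part of renderCluster into a standalone helper that Jest can exercise without mounting the component. A hypothetical sketch (names and logic are illustrative, not the actual ConUMap code):

```typescript
// Hypothetical refactor: pull the pure labelling logic out of renderCluster
// so it can be unit-tested without rendering MapViewer.
interface Cluster {
  pointCount: number;
}

// Label shown on a cluster marker, capped at "99+" for large clusters.
function clusterLabel(cluster: Cluster, cap: number = 99): string {
  return cluster.pointCount > cap ? `${cap}+` : String(cluster.pointCount);
}
```

The component's render path then only formats the helper's output, so the interesting behaviour gets covered by plain unit tests.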

The total number of tests is shown below.

[Screenshot: total test count]

Sprint 3

We continued using Jest with TypeScript for our unit testing. Testing was expanded to cover the following components:

  • BuildingSelection - allows the user to enter start and end locations for navigation
  • HighlightUserBuilding - tests the functionality that highlights the building the user is currently in
  • GetBuildingPolygons - tests a utility function that queries our data for building outlines
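The user-in-building highlight essentially reduces to a point-in-polygon test against the outlines returned by GetBuildingPolygons. A minimal sketch of that check, assuming a simple [latitude, longitude] tuple shape (the app's actual data model may differ):

```typescript
// Illustrative point-in-polygon check via ray casting. The coordinate shape
// and names are assumptions, not the app's actual API.
type Point = [number, number]; // [latitude, longitude]

function pointInPolygon(point: Point, polygon: Point[]): boolean {
  const [x, y] = point;
  let inside = false;
  // Count how many polygon edges a ray from the point crosses; an odd
  // number of crossings means the point lies inside.
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses =
      (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

Because the helper is pure, it can be unit-tested exhaustively with synthetic polygons, independent of the map view.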

The total number of tests is shown below:

Sprint 4

The unit tests added this sprint matched the components used in the newly implemented features:

  • building-info-popup.tsx - popup showing building info; the functionality to set a building as start or destination was added and tested
  • building-selection.tsx - building selection for navigation was added and tested
  • routes-info-popup.tsx - navigation popup showing directions for the different travel modes was added and tested
  • decodePolyline.ts - polyline decoding was added to display the route line on the map
  • directions.ts - uses the Google Routes API V2 to get directions
  • map-viewer.tsx - functionality for start and destination markers, mode-change nodes, and the route polyline was added and tested
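For reference, polyline decoding follows Google's encoded polyline algorithm format (5-bit varint chunks offset by 63, zigzag-encoded coordinate deltas, values scaled by 1e5). A self-contained sketch of a decoder consistent with that format (the project's decodePolyline.ts may differ in details):

```typescript
// Sketch of standard Google encoded-polyline decoding, the format returned
// by the Routes/Directions APIs.
function decodePolyline(encoded: string): [number, number][] {
  const points: [number, number][] = [];
  let index = 0;
  let lat = 0;
  let lng = 0;
  while (index < encoded.length) {
    for (const coord of [0, 1] as const) {
      // Each value is a varint of 5-bit chunks, each chunk offset by 63 ('?').
      let shift = 0;
      let result = 0;
      let byte: number;
      do {
        byte = encoded.charCodeAt(index++) - 63;
        result |= (byte & 0x1f) << shift;
        shift += 5;
      } while (byte >= 0x20);
      // Zigzag-decode the signed delta and accumulate it.
      const delta = result & 1 ? ~(result >> 1) : result >> 1;
      if (coord === 0) lat += delta;
      else lng += delta;
    }
    points.push([lat / 1e5, lng / 1e5]);
  }
  return points;
}
```

Google's documentation gives the test vector "_p~iF~ps|U_ulLnnqC_mqNvxq`@", which decodes to (38.5, -120.2), (40.7, -120.95), (43.252, -126.453); that makes a convenient golden-value unit test.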

The total number of tests is shown below:

[Screenshot: total test count]

Sprint 5

Continuing with our standard testing philosophy, we added tests for every feature introduced this sprint, in line with our coverage goals and the acceptance criteria required for a feature to be considered complete.

The components that were tested were:

  • Indoor navigation viewer - our main component for viewing floors and navigating inside Concordia buildings
  • Shuttle integration with the map viewer - thoroughly tested the integration of live shuttle data with the rest of our application
  • Concordia API connection - the connection to the Concordia API for fetching a student's classes using their Concordia credentials
  • Calendar view - the main view displaying all classes a particular student is currently taking, along with the integration for navigating to the next class

[Screenshot: Sprint 5 total test count]

Sprint 6

We continued this sprint by testing the remaining features. Every feature's tests were fully automated.

The key components were:

  • Fastest-path algorithm for indoor data - the fundamental path-navigation feature for indoor maps
  • Floor & room selection - viewing and selecting floors for users who want to see a specific room
  • Hybrid indoor and outdoor navigation - a key component that required integration testing with all of our previous features
  • Outdoor & indoor POIs - all the points of interest shown to users both inside and outside a building
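The fastest-path feature is, at its core, a shortest-path search over a weighted graph of indoor nodes (rooms, corridors, stairs). A minimal Dijkstra-style sketch, assuming a plain adjacency-map graph (node names and the data shape are illustrative, not the app's actual structures):

```typescript
// Illustrative shortest-path search over an indoor graph: vertices are
// room/corridor nodes, edges are walkable connections with distances.
type Graph = Record<string, Record<string, number>>;

function shortestPath(graph: Graph, start: string, goal: string): string[] {
  const dist: Record<string, number> = { [start]: 0 };
  const prev: Record<string, string> = {};
  const visited = new Set<string>();
  while (true) {
    // Pick the unvisited node with the smallest tentative distance.
    let current: string | null = null;
    for (const node of Object.keys(dist)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null) return []; // goal unreachable
    if (current === goal) break;
    visited.add(current);
    // Relax every outgoing edge of the chosen node.
    for (const [next, weight] of Object.entries(graph[current] ?? {})) {
      const candidate = dist[current] + weight;
      if (dist[next] === undefined || candidate < dist[next]) {
        dist[next] = candidate;
        prev[next] = current;
      }
    }
  }
  // Walk back from goal to start to recover the path.
  const path = [goal];
  while (path[0] !== start) path.unshift(prev[path[0]]);
  return path;
}
```

A unit test can then assert on a tiny synthetic floor graph, which is how such an algorithm stays testable without any map rendering.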

[Screenshot: Sprint 6 total test count]

Test Code Coverage

As above, the code coverage report is delivered sprint-by-sprint.

Sprints 1 & 2

Line coverage:

[Screenshot: line coverage report]

Branch coverage:

[Screenshot: branch coverage report]

Function coverage:

[Screenshot: function coverage report]

Sprint 3

Coverage Report:

Sprint 4

Coverage Report:

  • Statement Coverage: 94.38%
  • Branch Coverage: 83.8%
  • Function Coverage: 94.73%
  • Line Coverage: 94.6%
  • Total Number of Unit Tests: 173

Uncovered areas: mostly UI edge cases, map/platform-specific behavior, and external integrations
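Coverage goals like these can also be enforced automatically: Jest's coverageThreshold option fails the test run when coverage drops below configured minimums. A sketch, with thresholds chosen illustratively just below the reported numbers (not necessarily the project's actual configuration):

```js
// jest.config.js (sketch) — threshold values here are illustrative, set just
// below the reported Sprint 4 numbers so a coverage regression fails CI.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 90,
      branches: 80,
      functions: 90,
      lines: 90,
    },
  },
};
```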

Sprint 5

Line, Branch & Function coverage

total coverage numbers

All files coverage

all file coverage

Sprint 6

Line, Branch & Function coverage

total coverage numbers

All files coverage

Line coverage

  • Statement Coverage: 92.72%
  • Branch Coverage: 84.72%
  • Function Coverage: 93.66%
  • Line Coverage: 93.12%
  • Total Number of Unit Tests: 410

Uncovered areas: mostly UI edge cases, map/platform-specific behavior, and external integrations

Acceptance Testing

Acceptance testing is used to verify that implemented features meet the requirements defined by the Product Owner.

For each feature, an acceptance test issue is created and labelled as an acceptance test. Each issue clearly describes the user acceptance flow and acceptance criteria that must be satisfied for the feature to be considered complete.

Release 1 Acceptance Tests

The list of acceptance tests signed off by the product owner for Release 1 can be accessed through the following GitHub filter:

https://github.com/benjaminsunliu/ConUMap/issues?q=label%3A%22acceptance%20test%22+is%3Aclosed+milestone%3A%22Release%201%22

This link shows all issues that:

  • are labelled acceptance test

  • are closed

  • belong to the Release 1 milestone

Release 2 Acceptance Tests

The list of acceptance tests signed off by the product owner for Release 2 can be accessed through the following GitHub filter:

https://github.com/benjaminsunliu/ConUMap/issues?q=label%3A%22acceptance%20test%22+is%3Aclosed+milestone%3A%22Release%202%22

This link shows all issues that:

  • are labelled acceptance test

  • are closed

  • belong to the Release 2 milestone

Release 3 Acceptance Tests

The list of acceptance tests signed off by the product owner for Release 3 can be accessed through the following GitHub filter:

https://github.com/benjaminsunliu/ConUMap/issues?q=label%3A%22acceptance%20test%22+is%3Aclosed+milestone%3A%22Release%203%22

This link shows all issues that:

  • are labelled acceptance test

  • are closed

  • belong to the Release 3 milestone

End-to-End Tests

End-to-end testing was conducted using Maestro. The user acceptance flow and acceptance criteria are defined in the corresponding Acceptance Test issues. The recordings below show the execution of our end-to-end tests.
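For context, a Maestro flow is a plain YAML file of UI steps. A hypothetical flow in the spirit of our campus tests (the appId and on-screen labels are assumptions, not the app's actual identifiers):

```yaml
# Illustrative Maestro flow. launchApp, tapOn, and assertVisible are standard
# Maestro commands; the appId and element labels below are placeholders.
appId: com.conumap.app
---
- launchApp
- tapOn: "SGW"
- assertVisible: "Hall Building"
- tapOn: "Hall Building"
- assertVisible: "Directions"
```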

Running Maestro Tests / Automation Research

We have explored the following methods for running Maestro tests:

  • Run Maestro locally on a developer machine:

    This is the simplest way of running the Maestro tests. A developer runs the React Native app on an emulator or simulator and executes Maestro flows with the CLI. It is useful for initial experimentation and debugging, and is acceptable for our current requirement of simply uploading videos of the tests, but it is not automated at the team level yet.

  • Run Maestro CLI inside GitHub Actions

    This is a more realistic next step if we want CI automation without adopting cloud infrastructure right away. In this setup, GitHub Actions would build the app, boot an emulator or simulator, then run maestro test as part of the pipeline and upload artifacts. This keeps everything inside our existing CI, but it also means that we have to manage device setup and the extra complexity that comes with mobile runners. Currently we have not found an acceptable way to do this.

Maestro Workflows

Maestro CLI

  • Use GitHub Actions to trigger Maestro Cloud

    A simpler option to automate our Maestro tests would be to use Maestro Cloud. The workflow would still build the React Native app in GitHub Actions, but instead of running tests on self-managed emulators, it would upload the app and flows to Maestro Cloud for execution. The drawback is that Maestro Cloud appears to be a paid service; however, there is a free trial that we could use to demonstrate the automation.

Maestro Cloud

Automated Maestro E2E Workflow

In our current setup, the end-to-end workflow begins when code is pushed to the repository, which triggers the GitHub Actions pipeline. The workflow builds the React Native application, boots an iOS simulator, installs the application, and executes the Maestro test suite automatically. During execution, a screen recording and detailed test reports are generated and uploaded as artifacts for review. This ensures that the user acceptance flow is validated consistently and that evidence of test execution is available for demonstration and debugging. This automated pipeline reflects the intended continuous integration process for validating critical user flows and has already proven helpful in catching bugs.
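The pipeline described above could be sketched as a GitHub Actions workflow along these lines (job names, script paths, and the flow directory are placeholders, not the repository's actual configuration; the curl one-liner is Maestro's documented CLI install):

```yaml
# Simplified sketch of the described E2E pipeline; build steps are stubbed.
name: maestro-e2e
on:
  push:
    branches: [master]
  pull_request:

jobs:
  e2e-ios:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Maestro CLI
        run: curl -Ls "https://get.maestro.mobile.dev" | bash
      - name: Build and install the app on a simulator
        run: ./scripts/build-and-install-ios.sh   # placeholder build step
      - name: Run Maestro flows
        run: $HOME/.maestro/bin/maestro test .maestro/
      - name: Upload recordings and reports
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: maestro-artifacts
          path: maestro-output/
```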


The workflow is run on pushes to master and when a PR is opened, which limits runner usage while still catching issues early.

During one pull request, the workflow failed upon opening, which revealed a hidden issue that had been introduced. This issue would likely have been much harder to detect through manual testing alone: https://github.com/benjaminsunliu/ConUMap/pull/191#pullrequestreview-4026328206

The main limitations of this workflow are that it runs only on iOS and requires a relatively long execution time (approximately 30 minutes per run). An Android version of the workflow was tested and did work, but was not kept because it would have been too expensive to run continuously.


Epic 1

US1.01 Support both SGW and Loyola campus maps

AT1.01.01

AT1.01.01.-.SD.480p.mov

US1.02 Distinguish campus buildings from city buildings

AT1.02.01

AT1.02.01.mov

US1.03 Show my current building

AT1.03.01

AT1.03.01.mov

US1.04 Show pop-up additional building information

AT1.04.01

AT1.04.01.mov

Epic 2

US2.01 Select start and destination buildings

AT2.01.01

US2_AT1.mp4

US2.02 Use current building as start

AT2.02.01

AT2.02.01_video.mp4

US2.03 : Support SGW ↔ Loyola directions using Google API

AT2.03.01

AT2.3.1.-.SD.480p.mov
AT2.3.2.-.SD.480p.mov

US2.04 Support multiple transportation modes

AT2.04.01

US2_AT4.mp4

US2.05: Support Concordia Shuttle Service

AT2.05.01

AT2.05.-.SD.480p.mov

Epic 3

US3.01 : Connect to Concordia API

AT3.01.01

US3_AT1.mp4

US3.02 : Find the schedules of my courses and their classrooms

AT3.02.01

569263525-df824cbb-4f38-4e64-b89e-924cdaaf4c1c.mov

US3.03 : Generate directions to my next class (based on the current time)

AT3.03.01

569263588-698d6942-0f6b-4632-abeb-9c388fd9c9c2.mov

Epic 4

US4.01 : User can locate rooms on a specific floor

AT4.01.01

US4-AT1.mp4

US4.02 : Show shortest indoor path

AT4.02.01

AT4.02.01.-.SD.480p.mov

US4.03 : Show directions for students with disabilities

AT4.03.01

US4-AT3.mp4

US4.04 : Highlight indoor points of interest

AT4.04.01

AT4.04.011.-.SD.480p.mov

US4.05 : Be able to show directions between rooms in different floors

AT4.05.01

AT4.05.-.SD.480p.mov

US4.06 : User can view directions between rooms in different buildings and campuses

AT4.06.01

US4-AT6.mp4

Epic 5

US5.01 : Show the outdoor points of interest within range

AT5.01.01

US5_AT1_video.mp4

US5.02 : Show directions to a selected outdoor point of interest

AT5.02.01

US5_AT2_video.mp4

Usability Testing

The three main types of usability testing under consideration are moderated, semi-moderated, and non-moderated. Moderated usability testing, conducted with a user in the presence of a moderator who notes comments, pain points, and any other observed interactions, is generally regarded as the best way to determine whether a product will succeed with future users. According to practitioners, moderated usability testing is one of the strongest tools a UX designer can have [1]. Unlike non-moderated testing, it lets UX designers go deep into user research and get a glimpse of the user's thinking throughout each task. Non-moderated testing has the advantage of being more scalable and easier to perform, but it offers fewer insights and makes it harder to adapt the product to the intended population [1]. We therefore think it is logical to plan for a moderated testing setting and find ways to do so.

During our last PO meeting, our Product Owner encouraged us to look into Online Usability Testing and explore the option. Hence, we have found multiple online tools that support online Usability Testing. Below are the best tools we have discovered (ranked best by popularity and positive reviews). Each will be briefly introduced, have its pros and cons presented, and lastly, an overall conclusion on the tool to be used will be made.

[1] B. Krawczyk, “Moderated usability testing: All you need to know,” LogRocket Blog, Aug. 05, 2024. https://blog.logrocket.com/ux-design/moderated-usability-testing/

  • Lyssna
    Lyssna is a popular usability testing tool used by startups. It supports moderated online usability testing. Besides testing, it has a Recruitment feature, through which testers are found and recruited on the site.

  • Maze
    Maze is another research and usability platform which, unlike Lyssna, provides services up to date with current technologies, namely its Maze AI service. Like the previous platform, it offers a free tier, but an even more restricted one that does not allow moderated usability testing (available only on Enterprise plans). One of its best-marketed points is that it is a rapid testing platform that supports various UI/UX tools such as Figma and Adobe, and provides Maze Experiments, survey-styled follow-ups created with the platform's Maze AI.


  • UseBerry
    UseBerry is a platform similar to Lyssna. A main difference is that it offers mostly unmoderated testing. Besides standard tests, the platform offers Five-Second Tests, which last five seconds in order to assess the tested product's visual clarity and how intuitively users grasp it.

Conclusion: Because our product is a campus navigation app, relying purely on a Figma clickable prototype for usability testing would not be truly representative of the experience and frustrations users might encounter while using ConUMaps.

Interestingly, companies like Apple have conducted active moderated usability testing with an early version of their Maps app, having a small set of testers follow pre-planned paths in the app [2]. Since both apps center on map navigation, it would be best to adopt a similar approach rather than testing static map frames created in Figma. Instead of relying on a clickable prototype, it would greatly benefit our team to have users test the app at its earliest stage with Expo Go, since this would help us track crashes and bugs and monitor users' navigation tasks and feedback before sending the product to production.

We will be performing usability tests in two waves: exploratory usability testing and full usability testing. In both, the moderator will start an instance and have the tester download the Expo Go app and scan a QR code to begin testing.

[2] “Achieving Greater Accuracy for Map Software with In-market Raters,” Appen.com, 2020. https://www.appen.com/case-studies/greater-map-software-accuracy

In short, we will conduct two waves of testing: active guerrilla-style semi-moderated testing, followed by moderated testing.

Planning

We have the privilege of access to @Hack CTF's participant base of 1008 students from various universities across all provinces. Testing with this user base is particularly valuable since competition participants are required to navigate from one campus building to another, and some, not knowing the campus, could genuinely benefit from using ConUMaps in real time to reach a goal. Missing such an opportunity would be disappointing, which is why we plan two separate sets of usability tests.

The first wave will be a semi-moderated usability test conducted during the @Hack CTF competition. Users will be encouraged to download Expo Go, scan the QR code, and explore the app. They will be prompted for feedback on the layout and design, and encouraged to use the app during the event. Each user will fill out a survey about their experience and will be given an email to reach out to if they notice any bugs. This preliminary usability testing will help us gather insight and prepare the app for the full, thorough usability testing (2nd wave).

*By filling out the survey, testers will also inform us of their intent to participate in our second set of usability tests. This will help us recruit future testers and potential users. Note that this first set of tests targets a larger audience, as opposed to the final set of usability tests, which will select only a few participants.

User Group Characteristics - 1st Wave of Testing

Students from Canadian universities (Concordia, UofT, etc.) of different genders and ages (19-25), all @Hack CTF attendees. Based on past statistics, 80% of participants are Concordia University students. Nevertheless, we expect that not all participants will be familiar with Concordia's campus, as detailed in the research section above.


This is the rigorous part of our usability testing efforts. Selected users from the first wave will be scheduled for moderated tests in which they will be given specific tasks to perform. The moderator will note how well each user performs the tasks independently, along with any remarks and observations. Multiple metrics will be analyzed, such as heatmaps and error rates. Due to project requirements, it will also be possible to use the Maze platform in parallel with the second-wave tools; this will help deepen our analysis and give more insight into static UI designs, since users may be focused on functionality and tasks during the active guerrilla tests we plan to conduct.

Later, all data will be mapped into Miro for analysis. Tasks the users will be prompted to perform include:

  • Switch from the Loyola campus to the SGW campus

  • Find the Hall Building

  • View the Hall Building opening hours

  • Get directions to the Hall building

  • Select the walking navigation option

  • Select the Concordia Shuttle navigation option

  • Select the Accessibility Navigation parameter

  • Start the navigation

  • Open the Class Schedule in the app

  • View the directions to your next class

  • Search for the following classroom: MB 3.210

  • View navigation routes

  • Select the Bus navigation option

  • Start the classroom navigation *at this stage, the moderator will follow the user and observe how they use the app to navigate indoors and outdoors

User Group Characteristics - 2nd Wave of Testing

During the first wave of testing, participants filled out a survey where they indicated whether they were a Concordia student and if they wished to partake in further tests. For this stage of testing, a small sample of Concordia students (survey responders) will be selected. In-person meetings will later be scheduled for monitored usability testing.

Metrics


For the first wave of testing, we will collect qualitative data about our application via a Google Forms survey.

For the second wave of testing (which will be moderated) we will collect the following metrics:

  • Time taken for a user to find directions
  • Number of misclicks and lags while the user navigates (from Maze recordings)
  • Overall user satisfaction at each stage of the task at hand
  • Overall task success rate
  • Heatmaps
  • User's ability to perform tasks autonomously (track the number of times they ask questions or require assistance, noting down the question as well)

Overall, for both waves of active moderated guerrilla usability testing, the following tools will be used: Miro (moderator observations), Expo Go (to test the app), and Google Forms (feedback and recruitment). *Testing will happen in person.

Test Results - First Wave of Usability Testing (Conducted on March 7th)

Google Forms questions can be accessed here: https://docs.google.com/forms/d/1PR-ibX5Zxbr6jFmCiALPqGBr55MC-1-no5BHwbL7RQc/edit
Participants were approached during @Hack's career fair. In total, 14 responses were collected, which is a good sample size for the first wave of usability testing. Below are the graphs and statistics obtained from a quick dry run of the navigation functionalities:
[Screenshots: survey result graphs]

Analysis and Actions - First Wave of Usability Testing

As discussed above, 14 people participated in the first wave of usability testing. This is a good sample size since not all participants will be contacted; the survey is a convenience-style survey through which users can express interest in the second wave. More importantly, of all first-wave participants, only Concordia students who agree to be contacted will be brought in for the second wave of tests. In-person meetings will later be arranged with each selected participant.
What we can infer from the survey results:
A good portion of respondents (42.9%) were Concordia students familiar with the campus; the other 57.1% were outsiders unfamiliar with it. Conducting a preliminary first wave of testing during a hackathon proved to be a great idea, since we received unbiased participants who could truly assess whether our app is clear and makes the campus easier to navigate.
Since the search feature is the first thing users see on the home screen when they open the app, it is clear why 57.1% of users tried interacting with it first. We can infer that the refactoring of Feature 2 was successful, since our goal was to prompt the user to search for a building or class and then receive navigation options.
Many respondents found our app easy to use and intuitive (low ambiguity, high affordance). In fact, 85.7% of participants understood how to navigate the map right away. On a scale of 1 to 5, where 5 is the highest, 7 respondents rated our app 4/5 for usability and 6 gave it 5/5. Only one respondent gave a usability score of 3.
When it comes to improvements, users would like to see navigation through the tunnels (some of Concordia's buildings are accessible underground, a popular route for many students during the winter months). Such improvements could be addressed in later sprints.
Overall, 78.6% stated that they are highly likely to use ConUMaps to navigate on campus; a mere 14.3% disagreed and 7.1% were undecided. Based on these statistics, we can consider the app successful and desirable among users.

Lastly, 6 participants agreed to be part of the second wave of testing. Having recorded their personal emails, we will reach out to each of them to book in-person appointments for the most rigorous round of usability testing, the second wave.

Second Wave of Usability Testing (Issues and Solutions identified)

https://docs.google.com/document/d/1vK3lP-W4dGi659AWoX4kTGoRxSnlMk_XsBdcSyc9utU/edit?tab=t.iic0m1129ujz

Following the comments and recommendations given during the 3rd deliverable, we conducted a total of 4 different usability tests on top of the Phase 1 testing, collecting both quantitative and qualitative data. In addition, to deliver more data for the final deliverable, we recruited a total of 25 participants for Phase 2 usability testing.

Overview of the issues identified for each usability test conducted, followed by their solutions (*more details about each test can be found below):

Issues identified during the Preference Usability Test (floor layout design):

  1. Users preferred the detailed design, yet some wished for a larger classroom-label font, finding the current one too small to read (reported in written comments).
  2. Few misclicks occurred; however, from the heat maps we noticed that some users tried interacting with the Instructions pop-up.

Solutions to the issues of the Preference Usability Test:

  1. Increase the font size for accessibility. Change Link
  2. The container holding the current-step instructions was blended into the background and its highlighted border was removed, to discourage users from reading it as a clickable button, prevent unwanted interactions, and increase efficiency. Change link | Image link

Issues recorded during A/B Testing (feature to view indoor floor plans: multiple buttons for each floor VS slider floor selector):

  1. The “Buttons design” (Design#1) had multiple UX issues, but since it was the least efficient design (see the results section for why), we will not be addressing them.
  2. The slider design was harder to manipulate than we initially thought. It was extremely sensitive, and some users, even though the slider design performed best, wanted to see changes.

Solutions:

  1. Not applicable, since we went with the more efficient design instead (Design#2).
  2. Some users commented that they liked the slider but wished for a third option to better select floors. One user reported a hand injury that made the buttons design very difficult for them; the slider was better, but they suggested buttons or a dropdown to navigate floors within the slider. For simplicity, we refactored the chosen design to include Previous and Next Floor buttons to ease floor navigation. This solution was implemented as part of the indoor floor selection integration, so the link shows the improvement among multiple other files. File link & PR summary link to look at the output UI

Issues found during the in-person guerrilla Usability Testing:

  1. The tests went smoothly; some users even anticipated steps, which showed how intuitive our design was. Users were filmed with permission so that we could rewatch each test to note the time taken per task and spot discrepancies. Only one issue was mentioned and recorded: outdoor POI markers were a bit too large and covered the building markers, which users found difficult to work with.

Solution:

  1. Make the outdoor POI markers a bit smaller and have the building markers overlay on top of them.

Issues recorded during the Maze full-scale usability test: since the unrefactored designs shown previously were also used, the comments above apply equally to this test. Hence, only new issues are reported below, as the previous ones already have a plan in place to be addressed.

  1. Some users took a long time to locate the indoor POI selector.
  2. Users took longer to trigger the indoor search; some misclicked and got stuck in the floor viewer.

Solutions:

  1. Set important POIs as visible by default and let the user use the selector to further adjust their visibility. Change Link
  2. This issue links back to the A/B test: by adding a navigator view within the floor plan viewer, users should locate the feature faster and with ease.

Preference Usability Test (Floor Layout Test)


Most participants were able to complete the usability test, with a 92.3% success rate; only 2 individuals could not fully complete the tasks, hence the 7.7% drop-off rate. Because our Preference Usability Test presented a design that did not perform well with users, a 48.3% misclick rate resulted. Reviewing the user sessions, we can see that most of these misclicks indeed came from Design#1. Naturally, interpreting these results leads us to implement the other design, which drastically outperformed it.


Analyzing the results, we find that Design#1 indeed had better user engagement, as the heatmaps show users selecting the correct elements to interact with to complete their navigation task.


By observing user session recordings, we found multiple misclicks where users tapped on the classroom layout to obtain missing information, even though those parts were not interactive.

On average, users took 66.7s to test both designs, meaning that for fairness we can assume each design took 33.35s on average. A total usability score of 77 was recorded, due to the high misclick rate of the first design and the 7.7% drop-off.

Even though Design#1 was the most popular and desired one, it still had room for improvement. Following a PO feedback session, the PO recommended removing the instructions component and keeping only the user navigation line. According to user feedback, deleting this component would not affect users much, since most participants commented on the layout and colours; the instructions panel did not receive much user traction and, if removed, should not harm the app's UX.

A few user comments on Design#1

A/B Usability Testing (Accessing a Building’s Floor Plans)

To better decide on the design to implement for Building Floor Plan Access, we conducted A/B usability testing, assigning test users to two different design paths to determine which one performs best.

For both designs, users had to use a “View Floor Plans” button. Interestingly, placing this button in a building's information pop-up proved to be a great implementation choice, since all users selected the element without fault, at a 0% misclick rate.

Heatmap illustrating the influx of users properly interacting with the View Floor Plans button
Comparing both results, the button layout and the slider layout, we noticed that the majority of users had a significantly more positive experience using the slider than the group who used the button layout to view all floor plans of a building.
Although the button layout has a success rate of 100%, implying that all users understood and completed the assigned navigation task, it has a very high misclick rate of 66%. It obtained a calculated usability score of 73, with an average time of 20.7s to complete the task.
On the other hand, the second design, the slider, gave users an easier experience. This design obtained a higher usability score than the first, a total of 94, and a total average time of roughly half that recorded for the button design (10.8s compared to 20.7s).
Observing these results, we infer that the slider design is the one to implement. However, it has room for improvement, namely its 11% misclick rate. Additionally, certain users enjoyed the slider but wished for a faster way of navigating. Upon further consideration, implementing Next and Previous Floor buttons would increase efficiency: instead of sliding a bar to select a floor, users would use buttons that reduce the scrolling needed to reach a floor.

Full-Scale Usability Testing with Maze (“exploration”)

During our first phase of testing, we encountered multiple issues with the platform: screen recording would not be enabled automatically for users testing on a mobile device, and Maze does not support app prototypes, meaning that with a website implementation, some users with smaller screens had a zoomed-out interface, which increased misclicks and made navigation harder. As a solution, we encouraged users to download the Maze Participate app so that screen recordings were enabled for mobile testers.

The following test was semi-moderated and allowed users to explore our app more freely to complete tasks. As a result, we obtained heatmaps to better assess which elements were the most interactive, as well as the total average time taken to complete the full-scale usability path, for comparing average time progression. The average time taken to complete the task was 96.1 seconds and the overall usability score was 80.

In-Person Guerrilla Testing

Following our Preference Usability Test, we implemented Design#1 with a slight correction advised by our PO. Users then took 24.7 seconds on average, roughly a 26% decrease from the initial 33.35s recorded during the first set of testing. With fewer components to observe and interact with, users took less time to complete the task, increasing the overall efficiency of ConUMaps.

In accordance with our A/B testing results, we implemented the slider design with a slight twist: a drop-down menu to select the floors. Initially, it took users 10.8s to complete the task; with the latest design refactoring, it took 8.5s, a decrease of roughly 21%, improving the app's usability and user experience.

Guerrilla Testing Data: [Screenshots]
The “dry-run” usability testing refers to our Phase 1 usability testing. As discussed in the last deliverable, we divided our usability testing plan into two phases. During the first phase, some participants experimented with our early design prototype on Expo Go. The recorded average time was unfortunately only approximate; we know for certain that each user took more than two minutes. To make a reasonable engineering estimate, we set the average time to 120 seconds to better illustrate the weaknesses of our early prototype. We could have omitted this data point, but we believe including it better illustrates the app's progression. This estimate follows directly from our Deliverable 3 usability test being restricted to qualitative data only.

Performance Testing

Performance testing is used to evaluate the speed, responsiveness, and stability of the mobile application during user interactions such as building search, route generation, and navigation between campuses.

Tools Researched

  • React Native Performance Monitor
  • React Developer Tools
  • Firebase Performance Monitoring

Selected Tools

React Native Performance Monitor and React Developer Tools (Profiler)

These tools were selected because they provide real-time monitoring of UI performance and component rendering behaviour during user interactions.

Metrics Evaluated

  • Frame Rate (FPS)

    Measures the smoothness of UI updates by tracking how many frames are rendered per second. It displays the performance of both the JavaScript thread and the native UI thread.

  • React Component Rendering Time

    Measured using the React DevTools Profiler to analyze how long components take to render during user interactions.

  • Layout and Passive Effects Execution Time

    Evaluates the time spent executing layout effects and passive effects, which represent side effects triggered after the rendering process.
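As a toy illustration of the frame-rate metric above, FPS over a window can be derived from per-frame timestamps. This is a conceptual sketch of the arithmetic, not how the React Native Performance Monitor is implemented:

```typescript
// Average frames per second over a window, given per-frame timestamps in
// milliseconds (e.g. as captured by a frame callback). A smooth UI targets
// ~60 FPS, i.e. roughly 16.7 ms per frame.
function averageFps(frameTimesMs: number[]): number {
  if (frameTimesMs.length < 2) return 0;
  const elapsedMs = frameTimesMs[frameTimesMs.length - 1] - frameTimesMs[0];
  const frames = frameTimesMs.length - 1; // intervals between timestamps
  return (frames / elapsedMs) * 1000;
}
```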

[Screenshot: React DevTools Profiler results]

Building Selection Interaction

  • Render duration: 12.4ms
  • Layout effects duration: 0.1ms
  • Passive effects duration: 0.8ms

These results indicate efficient component rendering and responsive UI performance.
