Buttons, End Game Screen, Tutorial User Testing - UQdeco2800/2022-studio-3 GitHub Wiki

Introduction

In this sprint, we verified functionality and evaluated the art style with the help of User Testing. We primarily focused on two methods: Think Aloud and Time On Task. The User Testing was carried out with 17 people, all of whom provided verbal consent (A/B User Testing). Moreover, we conducted a Heuristic Walkthrough with 3 design experts to identify design issues. Finally, we aimed to reach a broader audience using a TAM-inspired form that showed static images of the game. This information will be used to improve the UI of the game in future sprints, which will enhance the user experience.

User Testing Plan

The User Testing will be divided into three main parts:

  1. Primary User Testing: The primary method of user testing involved one-on-one interaction with potential users. We continued with our previously proposed target audience of users who play games more than once a month (and have never played this specific game) and were hence able to evaluate 17 users in total. In this case, there were overlaps with users who did not play games (1 of 17) and users who were in this studio (2 of 17), labelled our fringe users. We chose the Think Aloud and Time On Task methods of User Testing as these provided a good evaluation of UI faults and also helped identify any glaring design issues. To prevent bias, we split this user group into two parts of eight and nine, that is, 8 people were assigned to think aloud evaluation and 9 people were assigned to time on task evaluation. As a change from our previous method of user testing, we ensured that the comparison times for time on task followed Fitts' Law, and we paid close attention to a user's body language during time on task. The responses to the primary user testing helped uncover the main errors in the game functionality and were our main source of feedback from user testing.

  2. Secondary User Testing: The secondary user testing involved a heuristic walkthrough. In this method, we gave 3 design experts (well versed in Nielsen's Usability Heuristics) a set of tasks and asked them to evaluate the relevant game features against the usability heuristics. This method was carried out for design-heavy content, namely the buttons, the end-game screen and certain aspects of the tutorial. This was a change from our previous method of user testing, considering that our previous method often resulted in design inconsistencies and abnormalities. Moreover, we realised that meeting the heuristic criteria may have been easier said than done.

  3. Tertiary User Testing: The tertiary method of user testing was a TAM-inspired Google Survey that established the functionality being tested and asked the user to rate specific parts of that functionality. For specific comments regarding functionality, a text box accompanied every User Testing section (Buttons, End-Game Screen, Tutorial). This ensured a more thorough understanding of our users, hopefully defining a more refined problem space. The aim of the form was to reach a broader audience beyond what we could cover with one-on-one testing. In this case, we received 15 responses. The tertiary user testing helped uncover more subtle, less glaring errors and acted as an extra source of feedback (supporting the primary and secondary feedback, to be evaluated with less rigour but with an equal amount of thought).

Link to Primary User Testing: https://docs.google.com/document/d/1o9PqetDhU8wtvRP3TpvyCeMsLIt0R_d8mrdYRTHMbDc/edit?usp=sharing

Link to Secondary User Testing: https://docs.google.com/document/d/1gv2W-FjQJdZoyN_dXfo4Y2TNI_ioeSjKMI3xH4k3BX0/edit?usp=sharing

Link to Tertiary User Testing: https://docs.google.com/forms/d/e/1FAIpQLSfu7UgyeDST_l-fPU_KCiu6g8rQPYYkgk_4F6I8xbZUsKLQMQ/viewform?usp=sf_link

Main Points of Interest

Tutorial

The tutorial screen's main objective was to help a new user understand gameplay. In the previous sprint, the response to the tutorial was largely negative, partly because it wasn't fully implemented but also because it was a new, untested feature. Our main objectives for the tutorial are to ensure the user understands it, feels ready to play the game, and does not get distracted by the art style.

End Game Screen

Our main goals here were to verify the art style and ensure it fit in with the rest of the gameplay. We also wanted to ensure button positioning was intuitive.

Buttons

Following on from User Testing last sprint, users wanted to see more dynamic buttons in the cutscenes instead of the default text buttons. Hence, we will include Image Buttons relevant to the cutscenes.

User Testing Results

Primary User Testing

Introduction

We gathered 17 responses, 8 in think aloud and 9 in time on task. We ensured the user group was a mix of people who enjoyed gaming and people who did not game often, to remain as unbiased as possible.

Time On Task

(new) Deciding on a threshold time

In the previous sprint, the time-on-task threshold was decided according to assumptions. This sprint we used Fitts' Law to derive our threshold instead. Fitts' Law states that the time required for a person to move a cursor to a target area is a function of the distance to the target divided by the size (width) of the target: the further away or the smaller the target, the longer the predicted movement time. We will be checking the time taken by users against the time derived from Fitts' Law.
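The threshold derivation above can be sketched as a small calculation. This is an illustrative example only, using the common Shannon formulation MT = a + b · log2(D/W + 1); the coefficients `a` and `b` here are assumed placeholder values, not fitted from our user data, and the distances and button widths are hypothetical.

```python
import math

def fitts_time(distance: float, width: float, a: float = 0.2, b: float = 0.1) -> float:
    """Predicted movement time in seconds for a target of the given
    width (px) at the given distance (px), using the Shannon
    formulation of Fitts' Law: MT = a + b * log2(D / W + 1).

    a and b are placeholder coefficients; in practice they would be
    fitted from measured user data.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Hypothetical example: a 100 px wide button, 400 px from the cursor,
# versus the same button twice as far away. The farther (or smaller)
# target always yields a longer predicted time.
near = fitts_time(distance=400, width=100)
far = fitts_time(distance=800, width=100)
assert far > near
```

A user whose observed task time sits well above the predicted `fitts_time` for that button would then be flagged for closer review.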

Think Aloud

(new) Observing Users

In the previous sprint, our think aloud was limited to what the users said. We realised that this was not effective because a user's feelings were not fully visible to us (as the call was administered via Discord). It was important to take the user's feelings into account while they played the game. Hence, we tried to test in person where possible and closely monitored video and audio for shifts in tone or mood.

Secondary User Testing

Tertiary User Testing