Friendly Units User Testing

User Testing

Conducting user testing is imperative when designing any system, and games are no exception. Understanding the extent to which the game's UX design satisfies users allows the designers and developers to make changes that improve the user experience. For this reason, user testing was conducted using the System Usability Scale (SUS) evaluation method.

In the SUS evaluation method, a user responds to each of ten questions with a number from 1 to 5, where 1 means "strongly disagree" and 5 means "strongly agree". The questions and their average responses were as follows:

| # | Question | Average response |
|---|----------|------------------|
| 1 | I think that I would like to use this [feature] frequently. | 2.67 |
| 2 | I found the [feature] unnecessarily complex. | 2.00 |
| 3 | I thought the [feature] was easy to use. | 3.33 |
| 4 | I think that I would need the support of a technical person to be able to use this [feature]. | 2.67 |
| 5 | I found the various functions in this [feature] were well integrated. | 3.33 |
| 6 | I thought there was too much inconsistency in this [feature]. | 1.67 |
| 7 | I imagine that most people would learn to use this [feature] very quickly. | 3.67 |
| 8 | I found the [feature] very cumbersome to use. | 2.00 |
| 9 | I felt very confident using the [feature]. | 2.67 |
| 10 | I needed to learn a lot of things before I could get going with this [feature]. | 3.00 |

A box-and-whisker plot was also used to evaluate the collected data.

The following observations can be drawn from this data:

  • The boxes for questions 2 and 8 are very narrow, indicating that users held similar opinions: they leaned towards disagreeing that the system was cumbersome or complex.
  • Question 6 has a low average and a medium-sized box, indicating that users found the system mostly consistent.
  • More research needs to be conducted for questions 4 and 10, since users had varied opinions.
  • The remaining questions show there is still room for improvement; in particular, the positively worded questions 1 and 9 scored below the neutral mark of 3.

Lastly, when the SUS score was calculated for each respondent, two thirds of respondents fell into the "marginal" acceptability range, while one third fell into the "not acceptable" range.
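
For reference, a respondent's SUS score is derived from their ten answers: each odd-numbered (positively worded) question contributes its response minus 1, each even-numbered (negatively worded) question contributes 5 minus its response, and the raw total is multiplied by 2.5 to give a score out of 100. The sketch below illustrates this calculation in Java; the class name and the example answers (the rounded question averages from the table above, treated as one hypothetical respondent) are illustrative only.

```java
/**
 * Computes a System Usability Scale (SUS) score from the ten
 * questionnaire responses (each in the range 1-5).
 */
public class SusScore {

    /** Returns the SUS score (0-100) for one respondent's answers. */
    public static double score(int[] responses) {
        if (responses.length != 10) {
            throw new IllegalArgumentException("SUS requires exactly 10 responses");
        }
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            if (i % 2 == 0) {
                // Odd-numbered (positively worded) questions: response - 1
                sum += responses[i] - 1;
            } else {
                // Even-numbered (negatively worded) questions: 5 - response
                sum += 5 - responses[i];
            }
        }
        return sum * 2.5; // scale the 0-40 raw total to 0-100
    }

    public static void main(String[] args) {
        // Hypothetical respondent: the rounded question averages above
        int[] answers = {3, 2, 3, 3, 3, 2, 4, 2, 3, 3};
        System.out.println(score(answers)); // prints 60.0
    }
}
```

Under the commonly used acceptability bands, scores below roughly 50 are "not acceptable", scores between about 50 and 70 are "marginal", and higher scores are "acceptable"; the example above lands in the marginal range, consistent with the results reported here.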

Updated SUS

During the fourth sprint, the SUS evaluation was conducted again to obtain an updated analysis of the features implemented in the game. The evaluation procedure and questions were the same as in the previous SUS evaluation, conducted during Sprint 3 and described above.

The following box plot visualises the data collected during this evaluation:

The box plot shows improvements over the previous SUS evaluation.

Observations:

  • Question 9 still has a low score overall, indicating that users do not feel confident while using the system.
  • Most users felt similarly about using the feature often, as indicated by the narrow box for question 1.
  • Responses to question 6 varied, as seen in the box plot.
  • The average score increased for most questions, indicating improvement in most features of the game.

To increase system consistency and understand where the system is lacking, Team 8 decided to conduct additional user testing.

Additional User Testing: Think Out Loud

Over the next sprint, additional user testing was conducted to receive user feedback on the use and implementation of friendly units in the game. Unlike the previous sprint, the Think Out Loud evaluation method was used. This was done to obtain a qualitative analysis of the data, complementing the quantitative analysis obtained with the System Usability Scale.

During testing, each participant was asked to test the game without any prior knowledge of it. Each participant was given a set of instructions, each of which asked them to test a feature. The instructions are listed below:

  1. Use a friendly unit to attack an enemy unit.
  2. Use a forager to collect 3 trees.
  3. Use a miner to collect 3 stones.
  4. Initiate a spell.

The participants were then asked to "think out loud" and be vocal about their thought process while completing these tasks. The following observations were made from what the participants had to say:

  • "Oh, have I already selected it", said a participant in regards to selecting a friendly unit. "How do I know if it worked", said another participant, on being able to select the friendly unit. With this in mind, a feature to visualise selection of friendly units was implemented.

  • "Can I not move this?" asked another participant. In response, pathfinding will be implemented for soldier units so that users can move them.

  • "Do I keep doing this", said a participant when using spells, thus indicating the need for a cool down period.

As seen in the results of these user testing evaluations, there is a significant problem relating to the learnability and playability of the game. For Sprint 4, Team 8's main focus is improving the game experience by addressing the problems that surfaced in user testing. We have concluded that a large part of the problem can be fixed by making the combat system and units fully functional. Movement and pathfinding will be integrated with the friendly soldier units, and selection and attacking will be made visual and interactive so that the user receives clear visual feedback. Tasks for this sprint have been scoped so that the user experience is enhanced.
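
As an illustration of the planned selection feedback, selection state can be stored on the unit and a highlight drawn beneath it each frame. The following is a minimal libGDX-flavoured sketch, assuming the game renders with a `SpriteBatch`; the class, field, and texture names are illustrative rather than the engine's actual components.

```java
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

/** Sketch of per-unit selection state with visual feedback at render time. */
public class FriendlyUnit {
    private final Texture sprite;        // the unit's own sprite
    private final Texture selectionRing; // highlight drawn under a selected unit
    private float x;
    private float y;
    private boolean selected = false;

    public FriendlyUnit(Texture sprite, Texture selectionRing, float x, float y) {
        this.sprite = sprite;
        this.selectionRing = selectionRing;
        this.x = x;
        this.y = y;
    }

    /** Called by the input handler when the player clicks on or off the unit. */
    public void setSelected(boolean selected) {
        this.selected = selected;
    }

    /** Draws the highlight first so it appears beneath the unit's sprite. */
    public void render(SpriteBatch batch) {
        if (selected) {
            batch.draw(selectionRing, x, y);
        }
        batch.draw(sprite, x, y);
    }
}
```

Drawing the ring in the unit's own render pass keeps the feedback in sync with the unit's position, directly answering the "how do I know if it worked" observation above.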