Game User Testing: Theme of Unit Selection & Spell System
Overview
This page outlines our testing and development background, and describes the impact on and changes to our development process resulting from the previous three rounds of testing. It is intended to show how we continue to work with our concepts and proposals.
To go directly to the user testing plan, click here.
To see only the user testing results, click here.
Marking the end of development of "Atlantis Sinking", this round of user testing exhaustively covers all of the features developed so far and serves as the final evaluation of the first version of the game.
Unlike the user testing of individual features we were responsible for in earlier rounds, this test temporarily sets aside the independence of a single test object and instead uses a "critical slice of the whole game" as the test vehicle, asking users to pay closer attention to it. All of the testing scenarios and goals are still drawn from our previous development and testing experience, keeping the theme of the development process consistent from beginning to end.
Looking back, Team 1 carried out qualitative evaluations at every stage of development (every sprint), but lacked effective analysis of quantitative data.
When the unit selection function was first implemented, user feedback led to the introduction of the semi-selection and highlight effects.
Once this feature was largely complete, users helped us realize how it could be used to give the game consistency, which became a focus that benefited Sprint 4.
During the development of the relatively new Spell system, we also gathered some qualitative data worth reflecting on. This led us to think more about functionality, which is bound to be a great help for design iteration in a development environment not limited by resources and time.
This final test therefore collects and evaluates both qualitative and quantitative data; the need for this data is derived from previous tester feedback and from the developers' own reflections on the development experience.
Previous User Testing
This is a summary of the data obtained from previous evaluations; highlighted points are shown in bold.
Sprint 1: Basic Unit Selection & Information Box
Sprint 1's user testing was a relatively rough, preliminary qualitative evaluation. Development at that stage was limited to basic unit selection and infobox display, with static dummy text standing in as part of the game's UI.
- At this stage, testers reported that the size of the infobox sometimes hindered normal game operation.
- Testers were also influenced by bugs exhibited by the infobox, which introduced bug-related bias into the test results.
- At the same time, a small number of testers reported that the real-time feedback of unit selection did not meet their expectations.
Sprint 2: Improvement of Unit-Selection Effect & Flexible Infobox Display
In Sprint 2, we found that some testers "could not get used to the RTS style of play", a problem that originated from a misjudgment when defining the testing audience.
- Users generally recognized the scope covered by this feature, and described the system as "simple and practical, with high compatibility".
- At the same time, some users found that the minimum range of single-click selection is usually larger than a unit's collision box, so a single click can select more than one unit, which is a non-obvious bug (a small illustrative sketch of this issue follows this list).
- The tests at this stage were not fully representative due to the progress gaps between different teams, but we assigned tasks and questions to the extent that evaluation was feasible.
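As an aside on the single-click selection bug above, the sketch below is purely illustrative (the class and method names are hypothetical, not the studio's actual code). It shows why picking every unit within a fixed radius of the click can return several units when that radius exceeds a unit's collision box, and how restricting the pick to the nearest unit whose collision box actually contains the click keeps single selection to one unit.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/**
 * Hypothetical sketch of the Sprint 2 click-selection issue; names and
 * numbers are illustrative only, not the game's actual code.
 */
public class ClickSelectionSketch {

    /** Axis-aligned collision box of a selectable unit. */
    record Unit(String name, float x, float y, float width, float height) {
        boolean containsPoint(float px, float py) {
            return px >= x && px <= x + width && py >= y && py <= y + height;
        }

        float distanceToCentre(float px, float py) {
            float dx = (x + width / 2f) - px;
            float dy = (y + height / 2f) - py;
            return (float) Math.sqrt(dx * dx + dy * dy);
        }
    }

    /**
     * Picks at most one unit: the nearest one whose collision box contains
     * the click. A radius-based pick whose radius is larger than the
     * collision box could return several units here instead.
     */
    static Optional<Unit> pick(List<Unit> units, float clickX, float clickY) {
        return units.stream()
                .filter(u -> u.containsPoint(clickX, clickY))
                .min(Comparator.comparingDouble(u -> u.distanceToCentre(clickX, clickY)));
    }

    public static void main(String[] args) {
        List<Unit> units = List.of(
                new Unit("worker", 0f, 0f, 1f, 1f),
                new Unit("archer", 1.2f, 0f, 1f, 1f));
        // A click inside the worker's box selects only the worker, even though
        // a selection radius of 1 around the click point would also reach the
        // archer's collision box.
        System.out.println(pick(units, 0.9f, 0.5f));
    }
}
```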
Sprint 3-1: Unit Selection's Improvement
In Sprint 3 we realized that we were too focused on asking users to complete abstract tasks, and that we asked relatively abstract questions as well.
Our next test plan therefore goes back to the basics of "how does it play out in the game", lowering the threshold of understanding.
At the same time, we confirmed that the system, and the interface between linked entities, met the expectations of most users.
Sprint 3-2: Spell System
The Spell system's user testing used the same audience as the unit selection testing and likewise relied on qualitative assessment. Although quantitative data were collected, they were not analyzed further.
At present, the Spell system's presentation has not fully met testers' expectations; they felt something was missing (subsequent verification found that it was the sound effects).
At the same time, some testers pointed out that the spell's functional design is too simple, which detracts from the game experience.
Testing Plan
In this test we use two evaluation methods: user testing and a SUS form.
User testing will mainly collect qualitative data to confirm whether our development has met expectations or whether there is still room for improvement.
The SUS form will mainly collect quantitative data to gauge testers' satisfaction with individual features and to tell us which features most urgently need improvement or further development.
To make the test results clearer, Unit Selection and the Spell System use separate test plans but rely on the same testing methodology.
To see only the user testing results, click here.
User Testing Plan
Unit Selection System
This evaluation distributes tasks in the user-testing format, uses the unit selection system we developed to interact with other components, and checks the mutual compatibility of the components involved.
During this process, operational differences between user groups, designs that hinder the normal progress of the game, and whether users can get used to this selection system will be analyzed as qualitative data to identify the system's problems, strengths and room for improvement.
At the same time, the attention and time required to complete the tasks, how well different user groups accept the control scheme, and the user's satisfaction with the built-in functions of the selection system will be used as quantitative data to measure satisfaction with the unit selection system's development.
After a brief introduction to the whole game, we ask the user to complete the following tasks as a starting point:
- Task 1: When the tutorial is over, ask the tester to find the initial building with the highest health.
- Task 2: After Task 1, investigate any units around this building and use the friendly units among them to kill wandering enemies.
- Task 3: After Task 2, send a single unit to fight an enemy one-on-one.
- Task 4: Testers will find this fight difficult to win; ask them to select multiple friendly units and attack at once.
- Task 5: After the battle, exit the game using the control buttons of the UI.
After the tasks, ask the following questions:
- Q1: During the testing process, which interactions caused your emotions to fluctuate?
- Q2: During testing, what impressed you, or struck you as the creative aspect of the game?
- Q3: Did selecting and controlling units during the test match your gaming habits?
- Q4: Did you find the game's flow unclear, making it difficult to find the next objective?
- Q5: Do you think the display and distribution of information in this game is easy to consult and understand? If not, what made it difficult for you to read?
Spell System
We treat the Spell system as a standalone system. Since the systems that depend most heavily on it are enemies, the map, and unit selection, we use these elements as interaction points for the test tasks.
Whether the Spell system has been fully developed, whether its strength is as expected, and which of its interactions are the most difficult for users to get used to will be used as qualitative data to measure whether its development scope has reached the expected standard.
Correspondingly, the tester's accuracy when hitting enemies with the spell, how aggressively they use it, and how they apply skill and experience will be used as quantitative data to help us understand the system's place in the user's mind and the expectations the user has of it.
Note that the unit selection and Spell system tests are carried out at the same time; the documentation splits them into two parts only to explain the concepts and purposes more clearly.
When the tutorial ends and the game itself begins, the user is given the following tasks related to the Spell system:
- Task 1: Immediately after entering the game, release a lightning-bolt spell at the center of the map, regardless of what is there.
- Task 2: Attempt to destroy the tester's own "Library" building using any function currently available on the screen.
- Task 3: Users will find that the spell does not hurt friendly entities; ask them to look for targets the spell does work on.
- Task 4: After Task 3 is complete, ask the user to find a way to restore the ability to release the spell.
- Task 5: Finally, have the user cast the spell once, covering as many targets as possible, to kill all the enemies.
After completing these tasks, ask the following questions:
- Q1: What is your expected spell function? Did the spell design of this game meet your expectations?
- Q2: How do you evaluate spell's performance and its capabilities?
- Q3: When you released the spell, what affected your operation and your mood?
- Q4: What do you think of spell's presence, frequency and importance in this game?
SUS Form Plan
After completing the interview portion of the user testing, users will be asked to complete the SUS form. The form presents a series of evaluative statements on a five-point scale from 1 to 5, indicating the level of acceptance from "Very Dissatisfied (or Strongly Disagree)" to "Very Satisfied (or Strongly Agree)".
The form usually targets the more visible functions of the system, or evaluates factors that influence the gaming experience, such as art design and sound effects.
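Because our forms use nine (Unit Selection) and seven (Spell System) positively worded items rather than the canonical 10-item SUS with alternating polarity, the responses are summarized with a simple rescaling instead of the standard SUS odd/even weighting. The sketch below is illustrative only; the class and method names are hypothetical.

```java
import java.util.List;

/**
 * Illustrative helper for summarizing our SUS-style forms. Each answer is
 * on a 1-5 scale; because all of our items are positively worded, each
 * response is mapped to 0-1 and the mean is rescaled to 0-100, rather than
 * applying the canonical 10-item SUS odd/even weighting.
 */
public class SusStyleScore {

    /** Converts one tester's responses (each 1-5) into a 0-100 score. */
    public static double score(List<Integer> responses) {
        if (responses.isEmpty()) {
            throw new IllegalArgumentException("no responses supplied");
        }
        double total = 0.0;
        for (int r : responses) {
            if (r < 1 || r > 5) {
                throw new IllegalArgumentException("responses must be 1-5");
            }
            total += (r - 1) / 4.0;  // map 1..5 onto 0..1
        }
        return 100.0 * total / responses.size();
    }

    public static void main(String[] args) {
        // Example: one tester's answers to the nine Unit Selection items.
        System.out.println(score(List.of(4, 5, 3, 4, 4, 5, 4, 3, 4)));  // 75.0
    }
}
```

Comparing per-item averages across testers, rather than per-tester totals, is what lets us see which individual features most urgently need improvement, as described above.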
Unit Selection System
On a scale of 1 (strongly disagree) to 5 (strongly agree), ask testers to rate their personal experience against the following statements:
- 1. I think there is good interaction between the various units of this game.
- 2. When I select any unit, its highlight effect, sound effect and so on give me a clear understanding of what is going on in the game.
- 3. The entry threshold for this game is very low, I can operate these units very proficiently, just like the games I have played before.
- 4. This game is very easy to operate, and there will not be too many selection/cancellation operations.
- 5. The operation of this game using the mouse makes it easy for me to accept it.
- 6. When I select a unit, the appearance of information values such as life and attack always allows me to know the information I want to know in the fastest time.
- 7. When I select multiple units, I think the process is quick and smooth.
- 8. When I operate multiple units at the same time, they always behave as I expect.
- 9. I think the framework of this game is highly consistent, and choosing different types of units doesn't make me feel weird.
Spell System
- 1. I think the spell's presentation is cool, and the game experience it brings makes me feel good.
- 2. I think the spell is powerful without being unbalanced; it is at the level it should be.
- 3. I look forward to using the spell to attack enemies many times in the game.
- 4. I am happy with the spell's release method; the targeted strike makes me feel like a master.
- 5. I really enjoyed the scene where the enemy was hit by the spell.
- 6. I think the spell's animation and sound effects suit the Greek mythology backdrop (the spell's design fits the theme of the game very well).
- 7. I think the spell works well with other units, and I never encountered a bug while using it.
Target Audience
The target audience for this test should meet the following requirements:
- Grew up during the rise of Generation Z, with basic familiarity with and experience of games.
- Have experience in playing RTS games such as Red Alert, Clash of Clans, etc.
- Since Gen Z is a global trend, users can come from any country with the basic conditions.
- Users should have experience with PC games and be able to use the keyboard for basic operations.
So the audience looks like this: 15-30 years old, any gender, grew up in reasonable economic conditions, and has an interest in and experience with PC games controlled with keyboard and mouse. In particular, users are expected to have played classic RTS games such as "Red Alert" and "Clash of Clans".
Testing Data
We used Google Forms to collect data online. To let testers successfully run the game on their local computers, we also introduced the development environment, Gradle, and the command used to start the game.
The testing form can be accessed online: https://forms.gle/dyV4NU7PGaM9P3sw8
To shorten the testing period, we also chose some ITEE students who have experience building projects and who fit our target audience group. The test results are shown below:
Unit Selection's Answers:
Spell System's Answers:
Result Analysis
Unit Selection
As previous user testing showed, unit selection in this game is not a prominent feature but part of the basic logic framework. Direct tests of it therefore rarely produce the desired results, so this time we chose to test the value of unit selection through its interaction with other entities.
Other entities that also act as game framework systems directly affect the experience of using unit selection. These include, but are not limited to, the map itself, the camera, the weather system and the battle system.
Almost 70% of respondents said that unexpected camera shifts hurt their game experience, often causing them to select the wrong target or accidentally deselect a target. Pathfinding is another strong piece of evidence: when it produces a "detour", the user has to interact with unit selection repeatedly, which also reduces their enjoyment of the game.
Entities that act as independent units do not directly affect the experience of using unit selection; rather, the unit selection process affects their functionality. Independent units include buildings, resources, friendly units and enemies.
In this regard, the expected consistency of unit selection has been reduced. As of Sprint 3, both friendly worker units and buildings pop up an infobox with their information, pictures and so on after being selected. For this test we used the main branch developed in parallel by the other teams; they had iterated on the infobox display for worker units, which led about 20% of respondents to express doubts about the game's consistency.
Spell System
The quantitative data show that players care more about the spell's presentation (animation, on-screen effects and sound); its function, potential bugs and even its balance matter less to them than presentation. The spell did not score highly for its animations and special effects. Conversely, testers were often inattentive when answering questions about the spell's function itself, and they always exceeded our stated number of spell releases during the test. Most testers could not resist the habit of casting the spell every time it came off cooldown.
The qualitative data show that players' functional requirements for the spell are diverse but easy to accommodate. A small number of testers said they would expect several completely different styles of animation, which they could switch between freely to improve the presentation of additional spells.
Unexpectedly, because the current soldiers cannot deal effective damage to enemies, the spell became the only function in the game able to destroy enemies during testing. This pushed the test beyond our expectations, as players and gameplay leapt to a new level of reliance on the spell system. In this development context, what we need to consider is how the spell system balances release frequency, power and resource constraints. More than 40% of testers felt that in the current environment the spell recharged too slowly while enemies attacked too quickly, causing friendly units to lose HP too fast.
During testing there was a bug where "even when a worker's health bar is emptied, he does not die and continues to collect resources". Even so, the rapid loss of health was the problem testers noticed first, and their mood swings were usually more intense while waiting for the spell to come off cooldown.
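Given the feedback above about release frequency, one direction for a future iteration is to keep all of the cast-rate tuning in one small component. The sketch below is purely illustrative (the class name, fields and numbers are ours, not the game's actual code): a cooldown timer refills a limited pool of stored charges, so both "how often" and "how many in reserve" can be adjusted without touching the rest of the spell system.

```java
/**
 * Illustrative spell release gate: a cooldown timer that refills a small
 * pool of stored charges. Hypothetical sketch only, not the game's code.
 */
public class SpellGate {
    private final float cooldownSeconds;
    private final int maxCharges;
    private float timer = 0f;
    private int charges;

    public SpellGate(float cooldownSeconds, int maxCharges) {
        this.cooldownSeconds = cooldownSeconds;
        this.maxCharges = maxCharges;
        this.charges = maxCharges;
    }

    /** Called once per frame with the elapsed time; refills charges over time. */
    public void update(float deltaTime) {
        if (charges >= maxCharges) {
            return;
        }
        timer += deltaTime;
        if (timer >= cooldownSeconds) {
            timer = 0f;
            charges++;
        }
    }

    /** Returns true and consumes a charge if the spell may be released now. */
    public boolean tryRelease() {
        if (charges <= 0) {
            return false;
        }
        charges--;
        return true;
    }

    public static void main(String[] args) {
        SpellGate gate = new SpellGate(30f, 2);   // 30 s cooldown, 2 stored charges
        System.out.println(gate.tryRelease());    // true  (first stored charge)
        System.out.println(gate.tryRelease());    // true  (second stored charge)
        System.out.println(gate.tryRelease());    // false (must wait for a refill)
        gate.update(30f);                         // simulate 30 seconds passing
        System.out.println(gate.tryRelease());    // true
    }
}
```

Tuning then reduces to two numbers, which maps directly onto the "released too slowly" and "too reliant on the spell" feedback above.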