03 Design Process - phthisic/EATUP GitHub Wiki

Introduction

Food waste is an enduring and wide-ranging issue, covering many contexts such as household, supermarket, and restaurant waste. Our scope is food waste in tourism. Research suggests several reasons why food consumption during travel is a problem worth addressing. Firstly, the inability to estimate how much food is needed, together with a lack of awareness of the consequences of wasting it, contributes greatly to consumer waste behaviour: nearly 20% of waste is due to unfamiliar portion sizes (Wang, Filimonau, & Li, 2021). Secondly, tourists want to try as many local dishes as possible, but may overlook flavours they cannot adapt to. Thirdly, they are unfamiliar with local dishes in terms of ingredients and flavours. Fourthly, they blindly follow recommendations from TV shows or friends. The goal of the application is therefore to use AR technology to help the solo tourist reduce food waste and achieve sustainable tourism while maintaining a satisfying travel experience.

We conducted a substantial amount of user research to understand users' experiences and food-consumption behaviours during journeys. This included interviews with staff and diners at a restaurant in a tourist site, as well as with a 28-year-old Chinese woman who enjoys travelling and has visited many provinces in China and several other countries, to understand their behaviours when eating out while travelling.
Meanwhile, we conducted observations in a restaurant located at a tourist attraction to learn about diners' behaviour in that specific setting, and to gain further insight into this domain.
We also used a cultural probe consisting of a photo of the wasted food and a short diary recording participants' feelings about the meal, or why food was left.

The user research is documented in the proposal; for a more detailed analysis, please check these links:
Probe
Interview

Development

In this development process, we conducted two evaluation iterations.

Firstly, we invited potential users who had recently taken short trips to test our low-fidelity paper prototype, which covered the main functions such as ingredient lists and the proportion of food wasted. Follow-up interviews gave us direct insight into their views on our concept.

Secondly, we designed a mid-fidelity prototype with comment and information-display functions. We then conducted Think Aloud sessions with 4 frequent travellers to quickly capture users' thought processes and learn how they understood the interface and information in the prototype.

User Testing

To find out whether the main features of the prototype effectively address the problems identified in our earlier research, we chose the Think Aloud method, because it reveals how users think. We recruited 4 frequent travellers as participants. Three tasks covering the core functions of the prototype were provided, and participants were encouraged to say aloud what they were thinking while interacting with the prototype. We also conducted a benchmark test completed by a team member who is familiar with the prototype. Each user's completion time per task is compared against the benchmark to analyse whether they can understand and benefit from our concept, and thereby gain a better understanding of their food.

Tasks for users to complete:

  1. Scan the menu and view the AR display and ingredients (including special ingredients)
  2. You are from City A. Check what two people from the same place have ordered
  3. Scan the empty plate after eating and leave a comment on the dish

Record the time of completing each task.
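The benchmark comparison above can be sketched as a small script. The completion times and task names below are illustrative placeholders, not the actual study data; the approach simply averages each task's participant times and expresses them as a ratio of the expert benchmark.

```python
# Hypothetical completion times (seconds) for the 4 Think Aloud
# participants on each task. These figures are placeholders for
# illustration, not the recorded study data.
participant_times = {
    "task1": [35, 42, 38, 40],
    "task2": [90, 110, 95, 120],
    "task3": [50, 55, 48, 60],
}

# Benchmark times recorded by a team member familiar with the prototype
# (also illustrative values).
benchmark_times = {"task1": 30, "task2": 45, "task3": 40}

def compare_to_benchmark(times, benchmarks):
    """Return each task's average time and its ratio to the benchmark."""
    report = {}
    for task, samples in times.items():
        avg = sum(samples) / len(samples)
        report[task] = {"average": avg, "ratio": avg / benchmarks[task]}
    return report

if __name__ == "__main__":
    report = compare_to_benchmark(participant_times, benchmark_times)
    for task, stats in report.items():
        print(f"{task}: avg {stats['average']:.1f}s, "
              f"{stats['ratio']:.2f}x benchmark")
```

A ratio well above 1 (as for task two in the analysis below) flags a task where users struggled relative to someone already familiar with the interface.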

Benchmark Test

Analysis of Evaluation

Overall, task one received mainly positive feedback in the Think Aloud sessions. Users were satisfied with the feature of scanning the menu and directly seeing what a dish looks like. They agreed that the AR menu helps them understand portion sizes and ingredients, so they can avoid over-ordering and mistakes before ordering. However, one comment noted that the scanning workflow was not very clear: from the users' perspective, the app needs to make explicit that the dish shown after scanning the menu is its actual size. As AR scanning is the prototype's main feature, this highlights the need to refine the details of the AR display so that users can rely on our concept to reduce the likelihood of waste.

Both the benchmark test and the Think Aloud comments show that 3 out of 4 users found the recommendation on the order page difficult to understand; the average completion time for task two was much higher than the benchmark time. The prototype seemed to present too much information on the mobile screen. The current location-based recommendations are not satisfying enough, because people can have different tastes even when they come from the same area. The underlying social pattern is that people are more likely to trust others' food choices when they are backed by data similarity, so we consider this a direction worth exploring further in the future.

As for the comment page, users indicated that displaying the proportion of food wasted was useful for building a better understanding of food and reducing the likelihood of waste next time. Nevertheless, users also mentioned that the current display of the wasted-food percentage was somewhat confusing. They expected the picture of the dish to appear in a prominent position, similar to the issue with the ingredients section on the order page. With a clear view of the dish, users know exactly which dish they are commenting on, which leaves a deeper impression of what they ate and magnifies the waste-reduction effect.
