Requirements and Design Decisions
What requirements does the system need to satisfy for the study to complete its task? Most of the detailed requirements are captured in the design document, which should be reviewed alongside this page.
Utilising the results from our focus groups, we condensed the analysis of the focus group discussions into a set of design guidelines for robot coaches:
- Adaptive coaching strategy: The robot must dynamically adjust its coaching strategies to the individual based on ongoing analysis of client feedback and progress to ensure interventions remain relevant and effective.
- Professional boundaries: The robot must maintain a clear distinction between coaching and therapy by avoiding psychological counselling and instead focusing on helping the client to achieve their behaviour change goal.
- Provide reminders and cues: The robot must be able to schedule and send timely reminders for daily tasks such as medication intake and exercise, using auditory or visual cues to prompt action (a minimal scheduling sketch follows this list).
- Track progress and provide feedback: The robot must track the client's progress against their health goals and feed this information back to the client.
- Motivate and encourage: The robot must offer adaptive motivational messages and rewards based on the client’s progress to encourage continuous improvement.
- Make the process fun and engaging: The robot should include gamification elements (e.g., scoring, challenges) to make the health routine engaging and fun, increasing a client's adherence to performing their chosen behaviour.
- Availability and accessibility: The robot should remain readily available to clients and proactively initiate interactions to maximise engagement with the behaviour change intervention.
- Support for autonomy and personal growth: The robot must allow clients to set their own health goals and select preferred methods for achieving these goals, thereby supporting their autonomy and promoting personal growth.
- Ethical data handling: The robot must ensure that all personal data collected during interactions is securely stored and handled in compliance with legal regulations.
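To make the reminder requirement concrete, below is a minimal sketch of one way a daily reminder could be scheduled using only the Python standard library. The `send_reminder` callback and the 09:00 default are illustrative assumptions, not the deployed implementation.

```python
import datetime
import threading

def schedule_daily_reminder(send_reminder, hour=9, minute=0):
    """Schedule send_reminder() to fire once per day at the given local time."""
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)  # today's slot has passed; use tomorrow
    delay = (target - now).total_seconds()

    def fire():
        send_reminder()
        schedule_daily_reminder(send_reminder, hour, minute)  # re-arm for the next day

    timer = threading.Timer(delay, fire)
    timer.daemon = True  # lives inside the long-running application process
    timer.start()
    return timer

# Hypothetical usage inside the long-lived SAR application:
schedule_daily_reminder(lambda: print("Time for your daily check-in!"), hour=9)
```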
To support autonomous, real-world deployment in participants' homes, the system was built using off-the-shelf components for reliable performance and consistent operation over a 21-day study period. The setup consisted of the following six main components: a Vector 2.0 robot, a Raspberry Pi 5, a portable 4G Wi-Fi router, an external USB microphone, an external USB speaker, and a 7-inch touchscreen LCD. Figure 1 illustrates the component integration, and Figure 2 shows how participants were requested to position the hardware.
Figure 1: Hardware setup used for in-home deployment of the socially assistive robot (SAR) system. The Raspberry Pi 5 served as the central controller, connected to a touchscreen (via the DSI connector), an external USB microphone, an external USB speaker, the Vector robot, and a portable 4G Wi-Fi router, which established a local network for communicating with the robot. All peripherals were powered via the Raspberry Pi's USB ports, and the system operated from a single 5A USB-C power supply.
Figure 2: This image shows how the hardware was positioned during the study. The robot is placed in front of the touchscreen, and the Wi-Fi router is placed nearby.
The Digital Dream Labs Vector 2.0 was selected as the SAR for this study. This decision was guided by several key factors. First, Vector provides essential features such as expressive animations, head and body movement, and speech synthesis, all of which can be controlled via a publicly available SDK. Second, its small form factor, affordability, and standalone charging dock made it suitable for long-term use in home environments.
The Raspberry Pi 5 acted as the central processing unit for the system, managing robot behaviour control, dialogue logic, reminder scheduling, and speech processing.
The Raspberry Pi was powered by a 5A USB-C power supply. All connected peripherals (the robot, touchscreen, microphone, speaker, and Wi-Fi router) were powered directly from the Raspberry Pi's USB ports, which support up to 1.6A of total output. This design allowed the entire setup to operate from a single power source, reducing complexity and improving portability.
In addition to managing interaction logic, the Raspberry Pi handled continuous deployment: on startup, the device automatically checked a private GitHub repository for code updates, allowing changes to be deployed remotely without needing physical access to the participant's device.
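A minimal sketch of such a startup update check is shown below. It assumes the application lives in a git working copy with credentials for the private repository already configured; the repository path, branch name, and restart strategy are placeholders rather than the study's actual deployment logic.

```python
import subprocess
import sys

REPO_DIR = "/home/pi/sar_app"   # placeholder path to the deployed working copy
BRANCH = "main"                 # placeholder branch name

def update_from_github(repo_dir=REPO_DIR, branch=BRANCH):
    """Fetch the remote branch and fast-forward the local copy if it is behind.

    Returns True if new code was pulled, so the caller can restart itself.
    """
    subprocess.run(["git", "fetch", "origin", branch], cwd=repo_dir, check=True)
    local = subprocess.run(["git", "rev-parse", "HEAD"], cwd=repo_dir,
                           capture_output=True, text=True, check=True).stdout.strip()
    remote = subprocess.run(["git", "rev-parse", f"origin/{branch}"], cwd=repo_dir,
                            capture_output=True, text=True, check=True).stdout.strip()
    if local == remote:
        return False  # already up to date
    subprocess.run(["git", "pull", "--ff-only", "origin", branch], cwd=repo_dir, check=True)
    return True

if __name__ == "__main__":
    if update_from_github():
        # Hypothetical restart strategy: exit and let a supervisor
        # (e.g. a systemd unit with Restart=always) relaunch the new code.
        sys.exit(0)
```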
A portable Wi-Fi router was used to create a local network to ensure consistent and secure communication between components. This router was equipped with a data-only SIM card, providing 4G mobile internet access. The local network was used for communication between the Raspberry Pi and the Vector robot via the Vector SDK. The internet connection was used to enable cloud-based text-to-speech services and facilitate system updates.
This local, isolated network ensured system reliability without requiring participants to connect the setup to their home networks, reducing potential security concerns, easing installation, and providing consistent internet access for cloud services.
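For illustration, the snippet below shows the typical pattern for talking to Vector over this local network with the Python SDK (commonly packaged as `anki_vector`). The serial number is a placeholder, and the snippet assumes the robot has already been authenticated through the SDK's standard configuration step; it is a sketch of the communication pattern, not the study's dialogue code.

```python
import anki_vector

SERIAL = "00e20100"  # placeholder robot serial number

def deliver_prompt(text: str) -> None:
    """Connect to Vector over the local Wi-Fi network and speak a prompt.

    The context manager handles the connection handshake, and say_text
    drives the robot's own speaker.
    """
    with anki_vector.Robot(SERIAL) as robot:
        robot.behavior.say_text(text)

deliver_prompt("Good morning! Are you ready for your daily check-in?")
```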
A 7-inch official Raspberry Pi touch display, connected via the Display Serial Interface (DSI) connector, was included for three purposes:
- Check-in initiation: Participants used the touchscreen to manually trigger daily check-in interactions.
- System configuration: The interface allowed participants to configure specific settings: reminder time, robot eye colour, volume, and screen brightness (a sketch of such a settings structure follows this list).
- Data transparency: The screen provided a visual representation of logged responses and upcoming interaction schedules, allowing visibility into what the robot was tracking and when.
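To make the configuration surface concrete, the sketch below models the participant-adjustable settings as a small dataclass persisted to JSON. The field names, defaults, and file location are illustrative assumptions, not the study's actual schema.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

SETTINGS_PATH = Path("settings.json")  # placeholder location on the Pi

@dataclass
class UserSettings:
    reminder_time: str = "09:00"     # HH:MM, when the daily reminder fires
    eye_colour: str = "teal"         # Vector eye colour chosen by the participant
    volume: int = 70                 # speech volume, 0-100
    screen_brightness: int = 80      # touchscreen backlight, 0-100

def save_settings(settings: UserSettings, path: Path = SETTINGS_PATH) -> None:
    path.write_text(json.dumps(asdict(settings), indent=2))

def load_settings(path: Path = SETTINGS_PATH) -> UserSettings:
    if path.exists():
        return UserSettings(**json.loads(path.read_text()))
    return UserSettings()  # fall back to defaults on first run

# Hypothetical usage: the touchscreen UI writes changes back to disk.
settings = load_settings()
settings.reminder_time = "20:30"
save_settings(settings)
```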
An external USB microphone was included to enable speech-based interaction. Since the Vector robot's onboard microphone was not accessible via its SDK, a separate microphone was required to capture participant responses. This was essential for the system's ability to support verbal input.
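As one illustration of how an external USB microphone can feed verbal responses into the dialogue logic, the sketch below records a single utterance with the widely used SpeechRecognition package and a cloud recogniser reachable over the router's 4G link. This is a generic pattern under those assumptions, not the study's actual speech pipeline.

```python
import speech_recognition as sr

def capture_response(timeout: float = 5.0) -> str:
    """Record one utterance from the default USB microphone and transcribe it.

    Returns an empty string if no speech starts within the timeout or the
    utterance cannot be understood.
    """
    recogniser = sr.Recognizer()
    with sr.Microphone() as source:                  # default input device
        recogniser.adjust_for_ambient_noise(source)  # brief noise calibration
        try:
            audio = recogniser.listen(source, timeout=timeout)
        except sr.WaitTimeoutError:
            return ""  # participant said nothing in time
    try:
        return recogniser.recognize_google(audio)    # cloud speech-to-text
    except sr.UnknownValueError:
        return ""  # speech was not understood

print(capture_response())
```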
In the screen-only condition, an external speaker was connected to the Raspberry Pi to play back the system's speech outputs. This speaker was not used in the robot-assisted condition, as the Vector robot itself provided speech output. Including the speaker ensured that the auditory experience remained consistent across both conditions.
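A minimal sketch of how speech output might be routed per condition is shown below; the `speak` helper, the condition flag, and both output stubs are hypothetical names used purely for illustration.

```python
def say_on_robot(text: str) -> None:
    print(f"[Vector speaker] {text}")   # stand-in for robot.behavior.say_text(...)

def play_on_speaker(text: str) -> None:
    print(f"[USB speaker] {text}")      # stand-in for cloud TTS + local playback

def speak(text: str, condition: str) -> None:
    """Route speech to the output device that matches the study condition."""
    if condition == "robot":
        say_on_robot(text)      # robot-assisted condition: Vector's own speaker
    else:
        play_on_speaker(text)   # screen-only condition: external USB speaker

speak("Well done on completing today's goal!", condition="screen")
```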