Software - makerforgetech/modular-biped GitHub Wiki
The main software for the Modular Biped Robot Project is written in Python and C++. The software is split into two main components: the Raspberry Pi software and the Arduino software. The Raspberry Pi software is responsible for high-level control of the robot, including interacting with the user. The Arduino software is responsible for low-level control of the robot, including controlling the servos and balancing.
The software is designed to be modular, with each module responsible for a specific aspect of the robot's functionality. This allows for easy customization and extension of the robot's capabilities.
The following modules are currently available:
- Animation: Handles the animation of the robot, including walking, turning, and other movements.
- Braillespeak: Converts text to Braille and "speaks" it as audio tones via the onboard buzzer.
- Buzzer: Controls the buzzer for audio output. Includes the ability to play tones and melodies.
- ChatGPT: Uses the OpenAI GPT models to process text based on user input.
- Logging: Logs data to a file for debugging and analysis.
- Motion: Handles motion detection using an onboard microwave motion sensor.
- Neopixel: Controls the onboard Neopixel LEDs for visual feedback.
- Personality: Defines the robot's personality and responses to published topics.
- PiServo: Controls the servos connected to the Raspberry Pi.
- PiTemperature: Reads the temperature from the integrated temperature sensor on the Raspberry Pi.
- RTLSDR: Uses an RTL-SDR (software defined radio) dongle to receive and process radio signals from digital devices.
- Serial: Handles serial communication between the Raspberry Pi and Arduino.
- Servos: Controls the servos connected to the Arduino via the Raspberry Pi and the serial connection.
- SpeechInput: Handles speech input using the onboard microphone.
- Telegram: Integrates with the Telegram API to send and receive messages.
- Tracking: Uses computer vision to track objects and faces using the onboard camera.
- Translator: Translates text between languages using the Google Translate API.
- TTS: Converts text to speech using the onboard speaker.
- Viam: Uses the VIAM API to integrate Viam modules for additional functionality.
- Vision: Handles image processing and computer vision tasks using the onboard IMX500 Raspberry Pi AI camera.
There are also a number of legacy modules that have not been fully tested with the latest architecture, but could be integrated with some work from the community.
These are located in the `archived` directory of the `modules` folder.
- Linear Actuator: Controls a linear actuator for linear motion.
- Stepper: Controls a stepper motor for precise motion control.
- Chatbot: Uses a chatbot to interact with the user.
- GamePad: Interfaces with a gamepad controller for user input.
- Keyboard: Interfaces with a keyboard for user input.
- Vision (Coral): Uses the Coral USB accelerator for image processing. Includes tracking and object detection.
- Vision (OpenCV): Uses the OpenCV library for image processing. Includes tracking and object / face detection. Includes a Timelapse script.
- HotWord: Uses the Snowboy hotword detection engine for voice activation.
- Battery: Monitors the battery level of the robot via the Arduino and serial interface.
- Power: Controls the power to the robot's servos via a relay module. This was used in the original Archie design to save power when the servos were not in use.
- RGB: Controls an RGB LED for visual feedback.
For details on some of the hardware configurations needed to enable these modules, see the Software section.
Each module has its own configuration file located in the `config` directory. These files define the parameters and settings for each module. The configuration files are written in YAML format and include the following sections:
- `<name>`: The name of the module.
- `enabled`: Whether the module is enabled or not.
- `path`: The path to the module's Python file.
- `config`: The configuration parameters for the module. These are passed directly to the module's `__init__` method.
- `dependencies`: The dependencies required by the module. These can be Python packages or unix dependencies. They are installed automatically by the `install.sh` script. Any additional config can also be defined here.
Here is an example configuration file for the Buzzer module with additional dependencies added to showcase the format:
```yaml
buzzer:
  enabled: true
  path: "modules.audio.buzzer.Buzzer"
  config:
    pin: 27
    name: 'buzzer'
  dependencies:
    python:
      - gpiozero
      - pypubsub
    unix:
      - alsa-utils
    additional:
      - 'https://example.com/install-guide.html'
```
Modules can be enabled or disabled by setting the `enabled` parameter in the module's configuration file to `true` or `false`. This allows developers to customize the robot's functionality by enabling or disabling specific modules as needed.
Once a module has been enabled for the first time on a device, the `install.sh` script should be run to install any dependencies required by the module. Pay attention to the output, as there may be additional steps required to complete the installation.
Modules interact with each other primarily using the `pubsub` module. This allows modules to publish and subscribe to events, enabling communication between different parts of the software. For example, the `Motion` module can publish an event when motion is detected, and the `Animation` module can subscribe to this event to trigger a specific animation.
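The pattern can be illustrated with a minimal stand-in for the publish/subscribe mechanism (the real project uses the pypubsub library's `pub.subscribe` and `pub.sendMessage`; the `EventBus` class and handler below are illustrative only):

```python
# Minimal sketch of the publish/subscribe pattern used between modules.
class EventBus:
    def __init__(self):
        self.listeners = {}  # topic name -> list of callbacks

    def subscribe(self, callback, topic):
        self.listeners.setdefault(topic, []).append(callback)

    def send_message(self, topic, **kwargs):
        # Deliver the event to every subscriber of this topic
        for callback in self.listeners.get(topic, []):
            callback(**kwargs)

bus = EventBus()
triggered = []

# An "Animation"-style handler reacts to motion without knowing the publisher
def on_motion():
    triggered.append('wave')  # e.g. play a wave animation

bus.subscribe(on_motion, 'motion')
bus.send_message('motion')  # a "Motion"-style module publishes on detection
print(triggered)  # ['wave']
```

The publisher never references the subscriber directly, which is what keeps the modules decoupled.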
When creating a new module, it is important to follow best practices to ensure compatibility and maintainability. Here are some tips to keep in mind:
A module checking the status of a sensor should not directly control a Neopixel. Although this makes sense on an individual basis, when scaled to the size of the entire project it makes the codebase difficult to manage and debug. Instead, modules should publish events within their own scope of influence that other modules can subscribe to, and anything more specific should be managed via the `main.py` file or an adjoining script that manages the behaviour of the robot. This is a key principle of the Observer pattern, which is used in the `pubsub` module.
The exception to this is when a module is specifically designed to control or receive input from another module, such as the Tracking module receiving input from the Vision module. In this case it is more efficient and logical to have the modules communicate directly. The link can be established after the instances are dynamically created in the main.py file.
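A sketch of what that direct link might look like after dynamic creation (the class internals and attribute names here are hypothetical, not the repository's actual implementation):

```python
# Hypothetical sketch: wiring two dynamically created modules together.
class Vision:
    def detect(self):
        # Pretend detection result; the real module reads the camera
        return [{'label': 'face', 'x': 120, 'y': 80}]

class Tracking:
    def __init__(self):
        self.vision = None  # set by main.py after both instances exist

    def update(self):
        # Pull detections directly from the linked Vision instance
        detections = self.vision.detect() if self.vision else []
        return detections[0] if detections else None

# In main.py, after the instances are created dynamically:
vision = Vision()
tracking = Tracking()
tracking.vision = vision  # establish the direct link
print(tracking.update())  # {'label': 'face', 'x': 120, 'y': 80}
```

Direct linking avoids routing high-frequency data (such as per-frame detections) through the event bus.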
Read more about this approach in the links below.
Each module should be responsible for a specific task or set of tasks and should not rely on other modules to function. This makes it easier to test and debug individual modules and ensures that changes to one module do not affect the functionality of others.
Each module should include detailed comments and documentation to explain its purpose, functionality, and configuration options. This makes it easier for other developers to understand and use the module and helps maintain the codebase over time. This approach has been leveraged in the project so that any documentation is included in the comments on the class and method definitions.
Each module should be thoroughly tested to ensure that it functions correctly and does not introduce bugs or errors into the software. This includes unit tests, integration tests, and end-to-end tests to verify the module's behavior in different scenarios.
By following these best practices, developers can create modular, maintainable, and extensible software for the Modular Biped Robot Project. This approach allows for easy customization and extension of the robot's capabilities and encourages collaboration and innovation within the community.
The python framework is designed to allow modules to be added as needed to extend and modify functionality and behaviours.
The `modules` directory contains the current list of modules available on the framework. A module contains the following:
- A class definition, defining the name of the module
- An `__init__` method to initialise the module on creation
- A `loop` method, typically used to trigger behaviour during the loop cycle (if applicable)
- Module specific functions
Let's take a look at the Sensor example below. Comments have been added to explain the functionality.
```python
import pigpio  # Import the PiGPIO library for pin access
from pubsub import pub  # Import the pubsub module to subscribe to the loop event

class Sensor:
    def __init__(self, **kwargs):
        self.pi = kwargs.get('pi', pigpio.pi())  # Create a reference to the PiGPIO instance
        self.pin = kwargs.get('pin')  # The pin the sensor is attached to, imported from the config yaml when the module is initialised
        self.value = None  # Store the current value of the sensor
        pub.subscribe(self.loop, 'loop:1')  # Subscribe to the loop event published by main each second

    def loop(self):
        if self.read():
            pub.sendMessage('motion')  # If the sensor detects motion, send a message for other modules to act upon

    def read(self):
        self.value = self.pi.read(self.pin)  # Read the value of the sensor's pin
        return self.value
```
This shows the basic structure of the module.
- Create the python module
- Create a corresponding `.yaml` file in the `config` directory
- Populate the file with your module configuration parameters
```yaml
module_name:
  enabled: true
  path: "path.to.module.ModuleName"
  config:
    pin: 12 # This must be the name of the pin parameter if present
    something_else: value
  dependencies:
    python:
      - dependency1
      - dependency2
    unix:
      - dependency3
    additional:
      - 'https://example.com/install-guide.html'
```
The `config.py` script will import all yaml files in this directory and merge them into a single config, accessible via `Config.get([module], [parameter])`. Ensure that your module name does not conflict with other existing modules.
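The merge-and-lookup behaviour can be sketched as follows (a simplified stand-in: the real `config.py` parses YAML files, whereas this version merges plain dicts that a YAML parser would produce):

```python
# Simplified sketch of merging per-module configs into one lookup.
class Config:
    _merged = {}

    @classmethod
    def load(cls, *configs):
        # Each per-module dict is keyed by its module name, so a plain
        # update merges them; duplicate module names would collide here.
        for cfg in configs:
            cls._merged.update(cfg)

    @classmethod
    def get(cls, module, parameter):
        # Look up one parameter for one module, or None if absent
        return cls._merged.get(module, {}).get(parameter)

# Each YAML file would parse to a dict like these:
buzzer_cfg = {'buzzer': {'enabled': True, 'path': 'modules.audio.buzzer.Buzzer'}}
motion_cfg = {'motion': {'enabled': False}}

Config.load(buzzer_cfg, motion_cfg)
print(Config.get('buzzer', 'path'))  # modules.audio.buzzer.Buzzer
```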
Secrets such as tokens or other sensitive information should not be stored within a module config file. Instead, they should be exported as environment variables and accessed via the `os` module.

```shell
$ vim ~/.bashrc
export SECRET="my_secret"
$ source ~/.bashrc
```

```python
import os
my_secret = os.getenv('SECRET')
```
This is currently in use in a number of modules to store API keys and other sensitive information.
There are several events that are fired periodically during the execution of the code:
- `loop`: published once every loop of the `main.py` code
- `loop:1`: published every second
- `loop:30`: published every 30 seconds
- `loop:60`: published every 60 seconds
- `loop:nightly`: published at a set time every night
This varying frequency allows different behaviours to be scheduled appropriately. For example, a motion detector may need to execute each `loop` or each second, but a battery monitor may only require execution every 60 seconds. Similarly, a backup script may only execute every night, when there is less likely to be demand on the system.
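One way the main loop could decide which topics to publish on each iteration is sketched below (a simplified stand-in assuming one iteration per second; the real `main.py` publishes via pypubsub and also handles the nightly event):

```python
def topics_for_tick(tick, loop_seconds=1):
    """Return the loop topics to publish on a given loop iteration.

    Assumes one iteration every `loop_seconds` seconds: 'loop' fires on
    every iteration, the interval topics only when their period elapses.
    """
    elapsed = tick * loop_seconds
    topics = ['loop']
    for interval in (1, 30, 60):
        if elapsed % interval == 0:
            topics.append(f'loop:{interval}')
    return topics

print(topics_for_tick(30))  # ['loop', 'loop:1', 'loop:30']
print(topics_for_tick(60))  # ['loop', 'loop:1', 'loop:30', 'loop:60']
```

Subscribers then simply attach to the topic whose frequency matches their needs, as the `Sensor` example does with `loop:1`.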
You can read more about the modular configuration architecture on the MakerForge.tech website:
Dynamic Module Loading in Python
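The core idea of dynamic loading can be sketched with Python's `importlib`, splitting the configured `path` into a module and class name (a simplified stand-in for the framework's loader, demonstrated here with a standard-library class rather than a robot module):

```python
import importlib

def load_module(path, **config):
    """Instantiate a class from a dotted path such as 'modules.audio.buzzer.Buzzer'.

    The keyword arguments play the role of the `config` section of the
    module's YAML file, passed straight to the class's __init__.
    """
    module_name, class_name = path.rsplit('.', 1)
    module = importlib.import_module(module_name)  # import the containing module
    cls = getattr(module, class_name)              # look up the class by name
    return cls(**config)                           # instantiate with its config

# Demonstrated with collections.Counter instead of a robot module:
counter = load_module('collections.Counter', a=2)
print(counter)  # Counter({'a': 2})
```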
The following covers both external hardware and software integrated via the Arduino serial connection.
The battery monitor uses analog input from pin A0 to read the voltage of the battery. This is then compared to a minimum value. If the value is lower than the minimum, the Arduino will send a signal to the Pi to shut down. This reference value must be adjusted depending on your supply voltage and battery type.
A buzzer is connected to GPIO 27 to allow for tones to be played in absence of audio output (see Neopixel below). https://github.com/gumslone/raspi_buzzer_player.git
An RCWL-0516 microwave radar sensor is connected on GPIO 13.
GPIO 18, 19 and 20 allow stereo MEMS microphones to be used as audio input:
- Mic 3V to Pi 3.3V
- Mic GND to Pi GND
- Mic SEL to Pi GND (this is used for channel selection, connect to either 3.3V or GND)
- Mic BCLK to BCM 18 (pin 12)
- Mic DOUT to BCM 20 (pin 38)
- Mic LRCL to BCM 19 (pin 35)
https://learn.adafruit.com/adafruit-i2s-mems-microphone-breakout/raspberry-pi-wiring-test
```shell
cd ~
sudo pip3 install --upgrade adafruit-python-shell
wget https://raw.githubusercontent.com/adafruit/Raspberry-Pi-Installer-Scripts/master/i2smic.py
sudo python3 i2smic.py
arecord -l
arecord -D plughw:0 -c2 -r 48000 -f S32_LE -t wav -V stereo -v file_stereo.wav
```
Note: See below for additional configuration to support voice recognition
The original hotword detection utilised Snowboy, which is now discontinued. The files are still available in this repo in the `archived` section:
- `hotword.py` contains the module from this framework.
- `snowboy` contains the original snowboy functionality.
- `snowboy/resources/models` contains the trained models that can be used as keywords. It may be possible to find more online.
A guide on training new models exists here.
This module is not currently used in the project.
Speech recognition can be enabled using the SpeechInput module.
Ensure that the `device_index` specified in `modules/speechinput.py` matches your microphone.
See below for MEMS microphone configuration
In order to use the Raspberry Pi's serial port, we need to disable getty (the program that displays the login screen):
`sudo raspi-config` -> Interfacing Options -> Serial -> "Would you like a login shell to be accessible over serial?" = 'No'. Then restart.
WS1820B support is included via Pi GPIO pin 12. Unfortunately, to support this you must disable audio on the Pi (see the discussion). For this reason we support neopixels using an I2C to neopixel driver (example). If you would like to use neopixels without the driver (and without audio output), set the pin in the config to `12` and set `i2c` to `False`. The application must also be executed with `sudo`.
https://learn.adafruit.com/neopixels-on-raspberry-pi/python-usage
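Based on the description above, the relevant configuration fragment might look like the following (the key names are assumptions inferred from the text, not copied from the repository):

```yaml
neopixel:
  config:
    pin: 12    # uses the PWM audio pin, so onboard audio must be disabled
    i2c: false # drive the LEDs directly instead of via the I2C driver
```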
The live translation module `translation.py` allows live translation of logs and TTS output via Google Translate. Call `translate.request(msg)` and optionally specify the source and destination languages. The `config/translation.yml` file specifies the default languages.
Using a speaker module connected to the headphone output (example), you can enable text-to-speech. Enable the module, then fire a `tts` event with a `text` value; this will be output as audio. Live translation is also integrated with this feature.

```python
pub.sendMessage('tts', text='This is a test')
```