My First Robotics Project - EymoLabs/eymos GitHub Wiki
Welcome! In this guide, we’ll walk you through setting up your first robotics project with EymOS, step by step. Let’s get started.
👋 Hand Tracking System
We’re going to create a simple yet exciting project using EymOS. You’ll work with three services:
- CameraService (preinstalled)
- WindowService (preinstalled)
- HandTrackingService, a custom service you'll build yourself.
🚀 What’s the goal?
We’ll use MediaPipe to track the position of your hand in real-time from a live camera feed, and display the results in a window using WindowService.
By the end, you’ll have a fully functional system combining computer vision and robotics—capable of real-time hand detection and visualization!
📂 Project Structure
Before we start coding, set up your project with this simple structure:
```
hand_tracking/
├── services/
│   └── hand_tracking.py   # Custom HandTrackingService
├── main.py                # Main project file
└── config.json            # Configuration settings
```
🛠️ What each file does:
- `services/hand_tracking.py`: The custom `HandTrackingService`, where you'll implement the hand detection logic using MediaPipe.
- `main.py`: The entry point for your project, where you'll initialize and run the services.
- `config.json`: Configuration settings (e.g., camera parameters, detection thresholds).
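To make the structure concrete, here is a minimal example of what config.json could contain. The three detection keys are the ones our service will read later with `self._config.get()`; how EymOS loads this file and scopes it per service is an assumption here, so check the EymOS documentation for the exact layout it expects:

```json
{
  "model_complexity": 0,
  "min_detection_confidence": 0.5,
  "min_tracking_confidence": 0.5
}
```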
Now that the structure is ready, let’s move on to writing the first lines of code!
🛠️ Setting up the Project
Let’s start by setting up the environment for our hand tracking project.
1. Create the project folder
First, create a folder where all your project files will live. You can name it hand_tracking:
```
mkdir hand_tracking
cd hand_tracking
```
2. Open the project in your IDE
You can use any IDE you're comfortable with. Once you've chosen your IDE, open the hand_tracking folder in it.
3. Create a virtual environment
It’s a good practice to use a virtual environment to manage your dependencies. Run the following commands to create a virtual environment using venv
and activate it:
On Windows:
```
python -m venv venv
venv\Scripts\activate
```
On Linux/macOS:
```
python3 -m venv venv
source venv/bin/activate
```
> [!TIP]
> You'll know the virtual environment is activated when you see `(venv)` before your command-line prompt.
4. Install the necessary libraries
With your virtual environment active, install the main dependencies for this project:
- EymOS: The middleware we’ll be using to manage services.
- MediaPipe: For real-time hand detection using machine learning.
- OpenCV: For image processing and handling video streams.
- Tkinter: Required by the WindowService in EymOS to display the video.
Run this command to install the Python packages:
```
pip install eymos mediapipe pynput
```
(OpenCV is installed automatically as a MediaPipe dependency, and Tkinter ships with most Python installers; see the notes below.)
> [!CAUTION]
> Note for Windows users: on Windows, pynput may require manual installation. If you encounter issues, install it using the following command:
> ```
> python -m pip install pynput
> ```
> [!WARNING]
> Installing MediaPipe may take some time due to its size and its dependencies. Please be patient while it downloads and installs the required packages.
Installing Tkinter
Tkinter is a standard GUI library for Python. Depending on your operating system, you may need to install it separately.
On Windows:
Tkinter comes bundled with the standard Python installer for Windows. If you encounter issues, make sure you have installed Python from python.org.
On MacOS:
Tkinter also comes bundled with the standard Python installer for macOS. If you encounter issues, you can install it with Homebrew:
```
brew install tcl-tk
brew install python
```
On Ubuntu/Debian:
```
sudo apt-get update
sudo apt-get install python3-tk
```
> [!NOTE]
> If you have trouble installing Tkinter, refer to the official Python documentation for detailed instructions for your operating system.
Now that the environment is set up, you’re ready to start coding!
🔧 Building the Foundation
Now that the environment is ready, we’ll start by creating the core file that will initialize and manage our services. This file will serve as the entry point for our project.
1. Create the Core File
In your project folder, create a file called main.py. Open it and import the required dependencies by adding this code:
```python
from eymos import ServiceManager, log
from eymos.services import CameraService, WindowService
```
Next, we'll write the code that initializes and starts the services.
Add the following code to your main.py file:
```python
# Initialize the service manager
manager = ServiceManager()

# Add the services to the manager
camera = manager.add("camera", CameraService)
window = manager.add("window", WindowService)

# Start the services
manager.start()

# Start the window main loop
log("Starting tkinter main loop...")
window.mainloop()
```
📝 What’s happening here?
- `ServiceManager`: Manages the lifecycle of all services, including starting and stopping them.
- `CameraService`: This EymOS service handles the camera input, capturing real-time video.
- `WindowService`: This EymOS service displays a black floating window, where the video will be shown.
- `window.mainloop()`: A `WindowService` method necessary to display the EymOS window.
> [!NOTE]
> If you have a camera with an LED indicator, it should turn on when the camera service is running.
Now that we’ve set up the foundation, in the next section we’ll dive into creating our custom service.
🧠 Creating the Hand Tracking Service
Now, let’s create our custom service, which will handle the hand tracking logic. We will focus on building the service step by step, explaining each part of the code. The final goal is to have a fully functional HandTrackingService that integrates with EymOS.
1. Create the service file
In the services/ folder, create a new file named hand_tracking.py. At the top of this file, import the necessary dependencies:
```python
import cv2
import mediapipe as mp
from eymos import Service
```
- cv2: OpenCV library for image processing.
- mediapipe: Library for hand detection.
- Service: Base class from EymOS that we’ll extend.
2. Setting up the Class Structure
Define the basic structure of the HandTrackingService class by inheriting from Service:
```python
class HandTrackingService(Service):
    def init(self):
        """Initialize the service."""
        pass

    def destroy(self):
        """Clean up resources before stopping the service."""
        pass

    def before(self):
        """Prepare anything that needs to be initialized outside the main thread."""
        pass

    def loop(self):
        """Main loop where the hand detection logic will run."""
        pass
```
We have four key methods:
- `init()`: Initialize service attributes.
- `destroy()`: Clean up resources.
- `before()`: Prepare heavy tasks outside the main thread.
- `loop()`: Contains the main logic that runs repeatedly.
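EymOS drives these four hooks for you. To make the lifecycle concrete, here is a simplified, hypothetical sketch of the order they run in — this is not EymOS's actual scheduler, and ToyService and run_service are illustrative names only:

```python
class ToyService:
    """A stand-in for an EymOS service that records the order its hooks run in."""
    def __init__(self):
        self.calls = []

    def init(self):
        self.calls.append("init")

    def before(self):
        self.calls.append("before")

    def loop(self):
        self.calls.append("loop")

    def destroy(self):
        self.calls.append("destroy")


def run_service(service, iterations=3):
    """Drive the four lifecycle hooks in the order described above."""
    service.init()      # declare attributes and read configuration
    service.before()    # heavy setup (outside the main thread in EymOS)
    for _ in range(iterations):
        service.loop()  # repeated work, paced by _loop_delay in EymOS
    service.destroy()   # release resources on shutdown


svc = ToyService()
run_service(svc)
# svc.calls == ["init", "before", "loop", "loop", "loop", "destroy"]
```

The key takeaway is that init() and before() run once before the loop starts, and destroy() runs once after it stops.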
2.1 Service initialization
In the init() method, we’ll declare all the necessary attributes and configuration parameters that the service will use during its lifetime.
Add the following code to the init() function:
```python
def init(self):
    """Initialize the service."""
    # Placeholder for the hand detection model
    self.__hand_detector = None

    # Configuration parameters with default values
    self.__model_complexity = self._config.get('model_complexity', 0)  # Simplified model for real-time
    self.__min_detection_confidence = self._config.get('min_detection_confidence', 0.5)  # Detection threshold
    self.__min_tracking_confidence = self._config.get('min_tracking_confidence', 0.5)  # Tracking threshold

    # Set the loop delay
    self._loop_delay = 0.04  # Delay between each loop iteration (40 ms)
```
Explanation:
- `self.__hand_detector`: Will hold the MediaPipe Hands model instance.
- `self._loop_delay`: Controls how often the `loop()` method is called.
- Configuration parameters: We use `self._config.get()` to retrieve values from the service configuration, providing default values if not set.
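The configuration lookups above use the standard dictionary fallback pattern, and the 40 ms loop delay translates directly into a rate cap. A quick, self-contained illustration:

```python
# Same fallback pattern as self._config.get(): a plain dict lookup with a default.
config = {"min_detection_confidence": 0.7}  # only one key overridden

model_complexity = config.get("model_complexity", 0)         # key missing -> default 0
min_detection = config.get("min_detection_confidence", 0.5)  # key present -> 0.7

# A loop delay of 0.04 s (40 ms) caps the loop at 1 / 0.04 = 25 iterations
# per second, a sensible pace for real-time video.
loop_delay = 0.04
max_loops_per_second = 1 / loop_delay  # ~25
```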
2.2 Service destruction
In the destroy() method, we'll clean up resources and reset attributes when the service stops.
Add the following code to the destroy() function:
```python
def destroy(self):
    """Clean up resources before stopping the service."""
    if self.__hand_detector:
        self.__hand_detector.close()  # Release MediaPipe resources
    self.__hand_detector = None
    self.__model_complexity = None
    self.__min_detection_confidence = None
    self.__min_tracking_confidence = None
```
Explanation:
- `self.__hand_detector.close()`: Properly closes the MediaPipe Hands instance.
- Resetting attributes: Ensures that all resources are released and garbage collected.
2.3 Preparing resources outside the main thread
The before() method is ideal for initializing heavy resources, so we'll initialize the MediaPipe Hands model here.
Add the following code to the before() function:
```python
def before(self):
    """Prepare anything that needs to be initialized outside the main thread."""
    self.__hand_detector = mp.solutions.hands.Hands(
        model_complexity=self.__model_complexity,
        min_detection_confidence=self.__min_detection_confidence,
        min_tracking_confidence=self.__min_tracking_confidence
    )
```
Explanation:
- `mp.solutions.hands.Hands()`: Initializes the hand detection model with the specified parameters.
2.4 Main loop
In the loop() method, we'll implement the hand detection logic that runs continuously.
Add the following code to the loop() function:
```python
def loop(self):
    """Main loop where the hand detection logic will run."""
    # Get the CameraService from the service manager
    camera_service = self._services.get('camera')
    if camera_service is None:
        return

    # Get the latest frame from CameraService
    frame = camera_service.get_frame()
    if frame is None:
        return

    # Convert the frame from BGR to RGB, as required by MediaPipe
    image_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Process the frame to detect hands
    results = self.__hand_detector.process(image_rgb)

    # If hands are detected, draw landmarks on the frame
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp.solutions.drawing_utils.draw_landmarks(
                frame, hand_landmarks, mp.solutions.hands.HAND_CONNECTIONS
            )

    # Display the processed frame in WindowService
    window_service = self._services.get('window')
    if window_service:
        window_service.draw(frame)
```
Explanation:
- Accessing CameraService: Retrieves a frame from the camera.
- Frame Processing:
  - Converts the frame to RGB format.
  - Processes the frame with MediaPipe to detect hands.
- Drawing Landmarks: Overlays hand landmarks on the original frame.
- Displaying the Frame: Uses `WindowService` to display the processed frame.
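The BGR-to-RGB conversion is simpler than it sounds: OpenCV stores pixels in (Blue, Green, Red) order while MediaPipe expects (Red, Green, Blue), so the conversion just reverses each pixel's channel order. A tiny pure-Python illustration of the same idea, with no OpenCV dependency (bgr_to_rgb is an illustrative helper, not a real OpenCV function):

```python
def bgr_to_rgb(frame):
    """Reverse the channel order of every pixel: [B, G, R] -> [R, G, B]."""
    return [[pixel[::-1] for pixel in row] for row in frame]

# One row with two pixels: pure blue and pure red, in BGR order.
bgr_frame = [[[255, 0, 0], [0, 0, 255]]]
rgb_frame = bgr_to_rgb(bgr_frame)
# rgb_frame == [[[0, 0, 255], [255, 0, 0]]]
```

In the real service, `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)` does this efficiently across the whole NumPy array.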
2.5 Complete Code
Your hand_tracking.py should now look like this:
```python
import cv2
import mediapipe as mp
from eymos import Service


class HandTrackingService(Service):
    def init(self):
        """Initialize the service."""
        # Placeholder for the hand detection model
        self.__hand_detector = None

        # Set the loop delay
        self._loop_delay = 0.04  # Delay between each loop iteration (40 ms)

        # Configuration parameters with default values
        self.__model_complexity = self._config.get('model_complexity', 0)  # Simplified model for real-time
        self.__min_detection_confidence = self._config.get('min_detection_confidence', 0.5)  # Detection threshold
        self.__min_tracking_confidence = self._config.get('min_tracking_confidence', 0.5)  # Tracking threshold

    def destroy(self):
        """Clean up resources before stopping the service."""
        if self.__hand_detector:
            self.__hand_detector.close()  # Release MediaPipe resources
        self.__hand_detector = None
        self.__model_complexity = None
        self.__min_detection_confidence = None
        self.__min_tracking_confidence = None

    def before(self):
        """Prepare anything that needs to be initialized outside the main thread."""
        self.__hand_detector = mp.solutions.hands.Hands(
            model_complexity=self.__model_complexity,
            min_detection_confidence=self.__min_detection_confidence,
            min_tracking_confidence=self.__min_tracking_confidence
        )

    def loop(self):
        """Main loop where the hand detection logic will run."""
        # Get the CameraService from the service manager
        camera_service = self._services.get('camera')
        if camera_service is None:
            return

        # Get the latest frame from CameraService
        frame = camera_service.get_frame()
        if frame is None:
            return

        # Convert the frame from BGR to RGB, as required by MediaPipe
        image_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Process the frame to detect hands
        results = self.__hand_detector.process(image_rgb)

        # If hands are detected, draw landmarks on the frame
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp.solutions.drawing_utils.draw_landmarks(
                    frame, hand_landmarks, mp.solutions.hands.HAND_CONNECTIONS
                )

        # Display the processed frame in WindowService
        window_service = self._services.get('window')
        if window_service:
            window_service.draw(frame)
```
📝 Summary:
- `init()`: Declares variables and sets default configurations.
- `destroy()`: Cleans up resources and resets variables.
- `before()`: Initializes the MediaPipe hand detector outside the main thread.
- `loop()`: Processes each frame to detect and display hand landmarks.
🔗 Integrating the Hand Tracking Service
Now that we've created our HandTrackingService, it's time to integrate it into our main application and see it in action. In this section, we'll add the custom service to the EymOS service manager and run the application.
1. Import the Hand Tracking Service
In your main.py file, import the HandTrackingService at the top:
```python
from eymos import ServiceManager, log
from eymos.services import CameraService, WindowService
from services.hand_tracking import HandTrackingService  # HandTrackingService imported here
```
2. Add the Hand Tracking Service to the Service Manager
Update your main.py to include the HandTrackingService:
```python
# Initialize the service manager
manager = ServiceManager()

# Add the services to the manager
camera = manager.add("camera", CameraService)
window = manager.add("window", WindowService)
hand_tracking = manager.add("hand_tracking", HandTrackingService)  # HandTrackingService added here

# Start the services
manager.start()

# Start the window main loop
log("Starting tkinter main loop...")
window.mainloop()
```
📣 Run the Application
Make sure your virtual environment is activated, and run the application from your project directory:
```
python main.py
```
You should see a window displaying the camera feed with hand landmarks drawn over detected hands in real time.
> [!NOTE]
> If you encounter any issues, double-check that all dependencies are installed and that your camera is properly connected.
📋 Troubleshooting
- No Camera Feed: Ensure that your webcam is connected and not being used by another application.
- Import Errors: Verify that all libraries (eymos, mediapipe, opencv-python) are installed in your virtual environment.
- Tkinter Issues: If the window doesn’t display, revisit the Tkinter installation steps for your operating system.
🎉 Congratulations
You've successfully integrated your custom HandTrackingService into an EymOS application. Here's what you've accomplished:
- Set Up the Project Environment: Created a virtual environment and installed the necessary libraries.
- Built a Custom Service: Developed a HandTrackingService with EymOS, OpenCV, and MediaPipe.
- Managed Services with EymOS: Learned how to add and manage services using the ServiceManager.
- Real-Time Hand Detection: Implemented a system that detects and visualizes hand movements in real time.
Feel free to experiment with the code, adjust parameters, or add new features!
Thank you for following this guide! We hope it was helpful and that you enjoyed building your first robotics project with EymOS. Happy coding! 🤖🚀