Testing guide - supriyak2003/eyecontrol GitHub Wiki

Purpose: This application allows users to control their computer mouse using only their eye movements and blinks. It uses facial landmark detection to track eye positions and simulate mouse movements and clicks.

Key Components:

Backend:

- Libraries: Uses OpenCV for video capture, Mediapipe for facial landmark detection, and PyAutoGUI for controlling the mouse.
- Face Detection: Tracks specific facial landmarks, focusing on the eyes for cursor movement and on blink detection for clicks.
- Mapping: Maps eye movement to screen coordinates and triggers mouse actions such as clicks based on blink detection.

Frontend:

- Real-time Display: Displays the webcam feed with visual overlays of the tracked landmarks.
- Feedback: Provides visual feedback by drawing circles around tracked points to show eye movements and click actions.

Testing:

- Functional testing ensures accurate face detection, eye tracking, and click actions.
- Edge-case testing checks behavior with no face, multiple faces, or different lighting conditions.
- Performance testing ensures smooth real-time operation and minimal resource usage.
- Usability testing focuses on ease of use, user feedback, and click accuracy.
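The backend's mapping and blink logic can be sketched in pure Python. This is a minimal illustration only, assuming a 1920x1080 screen and Mediapipe-style normalized landmark coordinates; the function names and the blink threshold are hypothetical, not the project's actual API.

```python
# Hypothetical sketch of the eye-to-cursor mapping and blink detection.
# In the real application, the mapped coordinates would feed
# pyautogui.moveTo(...) and a detected blink would trigger pyautogui.click().

SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution

def map_eye_to_screen(x_norm, y_norm, screen_w=SCREEN_W, screen_h=SCREEN_H):
    """Map a normalized iris position (0..1, as Mediapipe reports
    landmark coordinates) to absolute screen pixels, clamped to bounds."""
    x = min(max(x_norm, 0.0), 1.0) * (screen_w - 1)
    y = min(max(y_norm, 0.0), 1.0) * (screen_h - 1)
    return int(x), int(y)

def eye_aspect_ratio(top, bottom, left, right):
    """Ratio of vertical to horizontal eye opening, computed from four
    (x, y) eyelid landmarks; a small value indicates a closed eye."""
    vertical = abs(top[1] - bottom[1])
    horizontal = abs(left[0] - right[0]) or 1e-6  # avoid division by zero
    return vertical / horizontal

def is_blink(ear, threshold=0.2):
    """Treat an eye-aspect ratio below the threshold as a blink (click)."""
    return ear < threshold
```

The clamping in `map_eye_to_screen` matters because detected iris positions can drift slightly outside the expected range when the head moves.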
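The frontend's visual feedback amounts to drawing circles on the frame around each tracked point (in the actual app this is presumably done with `cv2.circle` on the webcam image). A dependency-free sketch of the same idea on a row-major pixel grid:

```python
# Illustrative stand-in for the landmark overlay: mark all pixels within
# `radius` of `center` on a frame stored as a list of rows (frame[y][x]),
# analogous to drawing a filled circle on the webcam image with OpenCV.

def draw_circle(frame, center, radius, value=255):
    """Draw a filled circle at center=(x, y), clipped to the frame edges."""
    cx, cy = center
    height, width = len(frame), len(frame[0])
    for y in range(max(0, cy - radius), min(height, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(width, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                frame[y][x] = value
    return frame
```

During testing, this kind of overlay makes it easy to see at a glance whether the tracked landmarks actually follow the user's eyes.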
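The "no face" edge case can be exercised with a small unit test: when detection fails, the cursor must stay where it is rather than jumping. The `CursorController` below is a hypothetical stand-in for the project's controller, assuming it receives normalized eye coordinates, or `None` when no face is found.

```python
# Hypothetical edge-case test: a frame with no detected face must not
# move the cursor. Class and method names are illustrative only.

class CursorController:
    """Minimal stand-in for the eye-tracking cursor controller."""
    def __init__(self):
        self.pos = (0, 0)

    def update(self, landmarks):
        if landmarks is None:      # edge case: no face in the frame
            return self.pos        # keep the cursor where it is
        x, y = landmarks
        self.pos = (int(x * 1919), int(y * 1079))  # assumed 1920x1080 screen
        return self.pos

def test_no_face_keeps_cursor():
    c = CursorController()
    c.update((0.5, 0.5))           # a valid detection moves the cursor
    before = c.pos
    c.update(None)                 # simulate a frame with no face
    assert c.pos == before

test_no_face_keeps_cursor()
```

The same pattern extends to the other edge cases listed above, e.g. feeding the controller landmarks from the largest of multiple detected faces.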