Frontend Development Guide - supriyak2003/eyecontrol GitHub Wiki

  1. Real-Time Video Feed Display:
     - Capture Video: Use OpenCV to capture frames from the webcam: `cam = cv2.VideoCapture(0)`.
     - Flip Video Horizontally: For a mirror effect, flip each frame horizontally: `frame = cv2.flip(frame, 1)`.
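The mirror flip is just a reversal of the frame's columns. As a minimal sketch that runs without a camera or OpenCV installed, here is the same operation on a tiny synthetic NumPy "frame" (the `mirror` helper is illustrative, not part of the project; it matches what `cv2.flip(frame, 1)` does):

```python
import numpy as np

def mirror(frame: np.ndarray) -> np.ndarray:
    """Flip a frame horizontally (mirror effect).

    Equivalent to cv2.flip(frame, 1): column order is reversed,
    row order is unchanged.
    """
    return frame[:, ::-1].copy()

# Synthetic 2x3 single-channel "frame" standing in for a webcam capture.
frame = np.array([[1, 2, 3],
                  [4, 5, 6]], dtype=np.uint8)
mirrored = mirror(frame)
# Columns are reversed: [[3, 2, 1], [6, 5, 4]]
```

In the real loop, `cam.read()` returns each frame and the flipped result is what gets displayed and processed.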
  2. Visual Feedback for Eye Tracking:
     - Draw Circles on Landmarks: Give visual feedback by drawing small circles around the tracked eye landmarks with `cv2.circle()`, e.g. `cv2.circle(frame, (x, y), 3, (0, 255, 0))`.
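`cv2.circle()` simply writes pixels into the frame buffer. As a hedged stand-in that runs without OpenCV, the `mark_landmark` helper below (an illustrative name, not from the project) stamps a filled disc into a NumPy BGR frame, approximating `cv2.circle(frame, (x, y), radius, color, -1)`:

```python
import numpy as np

def mark_landmark(frame, x, y, radius=3, color=(0, 255, 0)):
    """Stamp a filled disc of `color` at (x, y) into a BGR uint8 frame.

    NumPy stand-in for cv2.circle(frame, (x, y), radius, color, -1):
    every pixel within `radius` of the centre is set to `color`.
    """
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]          # coordinate grids for the whole frame
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    frame[mask] = color
    return frame

frame = np.zeros((20, 20, 3), dtype=np.uint8)  # blank black frame
mark_landmark(frame, 10, 10)                   # green dot at the "landmark"
```

In the project itself, this is drawn once per tracked landmark on every frame, so the dots follow the eyes in real time.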
  3. Real-Time Feedback and Debugging:
     - On-Screen Landmark Updates: Redraw the landmarks on every frame so the display reflects the live tracking state.
     - Track Cursor Movement: As the eye landmarks move, move the system cursor with `pyautogui.moveTo()` and keep the on-screen display in sync.
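MediaPipe reports landmark positions as normalized coordinates in the range 0..1, while `pyautogui.moveTo()` expects absolute screen pixels. A small sketch of that conversion (the `landmark_to_screen` helper is an assumption for illustration; in the real loop the screen size would come from `pyautogui.size()`):

```python
def landmark_to_screen(norm_x, norm_y, screen_w, screen_h):
    """Convert a normalized (0..1) landmark coordinate, as MediaPipe
    reports it, to absolute screen pixels for pyautogui.moveTo().
    Inputs are clamped so the cursor can never leave the screen.
    """
    x = min(max(norm_x, 0.0), 1.0) * screen_w
    y = min(max(norm_y, 0.0), 1.0) * screen_h
    return int(x), int(y)

# A landmark at the frame centre maps to the centre of a 1920x1080 screen.
print(landmark_to_screen(0.5, 0.5, 1920, 1080))  # -> (960, 540)
```

The cursor then follows the eye: each new landmark position is converted and passed straight to `pyautogui.moveTo(x, y)`.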
  4. User Interaction for Clicks:
     - Click Detection: Use an eye gesture such as a blink as the click trigger, with immediate on-screen feedback. A blink is detected when the vertical distance between the upper and lower eyelid landmarks drops below a small threshold, e.g. `if (left[0].y - left[1].y) < 0.004: pyautogui.click()`.
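The blink check can be isolated as a pure function, which makes the threshold easy to tune and test. This sketch uses `abs()` so the eyelid landmark ordering does not matter (an assumption; the guide's inline snippet compares the signed difference), with the 0.004 threshold taken from the text:

```python
def is_blink(upper_y: float, lower_y: float, threshold: float = 0.004) -> bool:
    """Return True when the vertical gap between the upper and lower
    eyelid landmarks (normalized 0..1 coordinates) falls below
    `threshold`, i.e. the eye is effectively closed.
    """
    return abs(upper_y - lower_y) < threshold

# Eye open: eyelid landmarks are clearly separated -> no click.
print(is_blink(0.450, 0.480))  # -> False
# Eye closed: landmarks almost coincide -> trigger pyautogui.click().
print(is_blink(0.460, 0.461))  # -> True
```

In the main loop this becomes `if is_blink(left[0].y, left[1].y): pyautogui.click()`; a short cooldown after each click helps avoid one blink firing on several consecutive frames.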
  5. Frame Display:
     - Show Processed Frame: Use `cv2.imshow()` to display the processed frame, including the video feed and the visual overlays (landmarks, cursor position): `cv2.imshow('Eye Controlled Mouse', frame)`.
  6. Handle User Exit:
     - Close Application: Let the user exit gracefully by polling for keypresses with `cv2.waitKey(1)` and breaking out of the loop when a quit key is pressed.
     - Release Resources: After the loop ends, release the camera and close the OpenCV window with `cam.release()` and `cv2.destroyAllWindows()`.
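Steps 5 and 6 fit together as one display loop. A minimal sketch, assuming 'q' or Esc as the quit keys (the guide does not fix a key) and `cam` from `cv2.VideoCapture(0)` as in step 1; `should_exit` and `run` are illustrative names, and `run()` is only defined here, not executed, since it needs a webcam:

```python
def should_exit(key: int) -> bool:
    """True when the value returned by cv2.waitKey(1) is 'q' or Esc (27).

    waitKey returns -1 when no key was pressed; masking with 0xFF keeps
    only the low byte on platforms that set higher bits.
    """
    return key != -1 and (key & 0xFF) in (ord('q'), 27)

def run():
    """Display loop sketch: show frames until the user quits, then clean up."""
    import cv2  # imported lazily so the sketch above can run without OpenCV
    cam = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break                      # camera unplugged or read failed
            frame = cv2.flip(frame, 1)     # mirror effect (step 1)
            cv2.imshow('Eye Controlled Mouse', frame)  # step 5
            if should_exit(cv2.waitKey(1)):
                break
    finally:
        cam.release()              # free the webcam for other applications
        cv2.destroyAllWindows()    # close the display window
```

The `try/finally` guarantees the camera is released even if the loop raises, which matters because a leaked `VideoCapture` can block the webcam until the process dies.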
  7. User Interface Design (Minimal):
     - Real-Time Interaction: The primary interface is the webcam feed itself, updated dynamically, with minimal UI and no external windows.
     - No Buttons or Overlays: Keep the interface clean and simple: interaction happens entirely through the video display and the user's facial landmarks.