PhotonVision Colour & Shape Recognition - frc-7603/VespaRobotics2024-25 GitHub Wiki
PhotonVision Colour & Shape Recognition
PhotonVision is capable of recognizing both shapes and colours.
Setting up PhotonVision with shapes and colors
Contours tab
Target Orientation: Landscape (Others may work; not tested)
Target Sort (0-4000): Determines the order in which detected targets are ranked and reported
Area (0-100): The minimum/maximum area of the contour, as a percentage of the frame
Fullness (0-66): The minimum/maximum percentage of its bounding shape that the contour must fill to count as a target
Speckle Rejection (4): Rejects small "speckle" contours whose area falls below this percentage of the average contour area
Target Shape (Circle): The shape that needs to be detected
Circle match distance (5): How close the centroid of a contour must be to the center of the circle in order for them to be matched
Max Canny Threshold (90): Sets the amount of change between pixels needed to be considered an edge
Shape Accuracy (10): How accurate a target must be in order to be detected as a shape
Radius (100): The circle's radius, expressed as a percentage of the frame size
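The contour filters above can be sketched as plain predicates. PhotonVision applies these internally via OpenCV; the numbers below mirror the tuned values, and the helper names are illustrative, not PhotonVision's actual code.

```java
/**
 * Sketch of the Contours-tab filters as standalone predicates.
 * Illustrative only -- PhotonVision's real pipeline runs on OpenCV contours.
 */
public class ContourFilters {
    /** Area filter: contour area as a percentage of the frame, 0-100. */
    public static boolean areaOk(double areaPercent, double min, double max) {
        return areaPercent >= min && areaPercent <= max;
    }

    /** Fullness filter: contour area divided by its bounding shape's area. */
    public static boolean fullnessOk(double contourArea, double boundingArea,
                                     double minFullnessPercent) {
        return 100.0 * contourArea / boundingArea >= minFullnessPercent;
    }

    /** Circle match: the contour centroid must sit near the fitted circle's center. */
    public static boolean circleMatchOk(double cx, double cy,
                                        double circleX, double circleY,
                                        double maxDistancePx) {
        return Math.hypot(cx - circleX, cy - circleY) <= maxDistancePx;
    }

    public static void main(String[] args) {
        System.out.println(fullnessOk(50.0, 100.0, 66.0));      // false: only 50% full
        System.out.println(circleMatchOk(10, 10, 12, 10, 5.0)); // true: 2px apart, limit 5px
    }
}
```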
Threshold Tab
Hue (58-91, Green to Light Blue): The range the hue must be in order to be detected
Saturation (133-255): The range the saturation must be in order to be detected
Value (32-207): The brightness (value) range the colour must fall within to be detected
Invert Hue (False): Inverts the hue range so that hues outside the selected band are accepted instead (useful for colours like red that wrap around the hue scale)
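The Threshold-tab semantics can be sketched as a per-pixel HSV range test. This is a hypothetical helper, not PhotonVision's actual code; the range constants mirror the tuning above.

```java
/**
 * Minimal sketch of how the Threshold tab classifies a pixel: a pixel passes
 * when hue, saturation, and value each fall in their configured ranges.
 * The invertHue flag accepts hues OUTSIDE the band instead.
 */
public class HsvThreshold {
    // Ranges from the tuning above: hue 58-91, saturation 133-255, value 32-207.
    static final int HUE_MIN = 58, HUE_MAX = 91;
    static final int SAT_MIN = 133, SAT_MAX = 255;
    static final int VAL_MIN = 32, VAL_MAX = 207;

    /** Returns true if the pixel passes all three range tests. */
    public static boolean passes(int h, int s, int v, boolean invertHue) {
        boolean hueInBand = h >= HUE_MIN && h <= HUE_MAX;
        boolean hueOk = invertHue ? !hueInBand : hueInBand;
        boolean satOk = s >= SAT_MIN && s <= SAT_MAX;
        boolean valOk = v >= VAL_MIN && v <= VAL_MAX;
        return hueOk && satOk && valOk;
    }

    public static void main(String[] args) {
        System.out.println(passes(75, 200, 100, false)); // green pixel, in band
        System.out.println(passes(10, 200, 100, false)); // hue out of band
        System.out.println(passes(10, 200, 100, true));  // accepted when inverted
    }
}
```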
Targets
PhotonVision integrates with PathPlanner through WPILib's pose estimation system to enable accurate robot navigation in FRC. Here's the technical implementation:
Core Integration Components
- **Pose Estimation Pipeline**
PhotonVision uses AprilTag detection with the SolvePnP algorithm to calculate the camera's 3D position relative to field tags[3][4]. This raw vision data is transformed using:
- Camera-to-robot spatial offsets
- Field layout knowledge (AprilTag positions)
- Robot kinematics data
- **Sensor Fusion**
Vision measurements are combined with odometry using WPILib's `SwerveDrivePoseEstimator`:

```java
// Example from team implementation[5]
poseEstimator.addVisionMeasurement(
    estimatedPose,
    timestamp,
    VecBuilder.fill(distanceToTag / 2, distanceToTag / 2, 100) // trust closer measurements more
);
```
This Kalman filter-based approach weights vision data less as distance from tags increases[2].
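The distance-based weighting can be sketched as a plain function. The `/2` scale factor and the large heading value mirror the snippet above; the method name is illustrative, and real teams tune these constants for their camera.

```java
/**
 * Sketch of the distance-based trust model: the standard deviations fed to
 * addVisionMeasurement grow with distance to the tag, so the Kalman filter
 * weights far-away vision readings less.
 */
public class VisionStdDevs {
    /** Returns {xStdDevMeters, yStdDevMeters, headingStdDevRadians}. */
    public static double[] forDistance(double distanceToTagMeters) {
        double xy = distanceToTagMeters / 2.0;  // looser trust when far away
        return new double[] { xy, xy, 100.0 };  // effectively ignore vision heading
    }

    public static void main(String[] args) {
        System.out.println(forDistance(1.0)[0]); // 0.5
        System.out.println(forDistance(4.0)[0]); // 2.0 -- far tags trusted 4x less
    }
}
```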
PathPlanner Integration Workflow
- **Auto Path Generation**
PathPlanner creates Bézier curve trajectories using:
- Field-relative waypoints
- Robot constraints (max velocity/acceleration)
- Initial pose estimate from fused sensors
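The Bézier math underlying these trajectories can be sketched directly: each path segment interpolates between one waypoint and the next via two control points. This is just the curve formula, not PathPlanner's API.

```java
/**
 * Sketch of the cubic Bezier evaluation that PathPlanner-style trajectories
 * are built from. Evaluate x and y separately with the same t to get a point.
 */
public class CubicBezier {
    /** Evaluates one coordinate of a cubic Bezier at parameter t in [0, 1]. */
    public static double eval(double p0, double c0, double c1, double p1, double t) {
        double u = 1.0 - t;
        return u * u * u * p0
             + 3 * u * u * t * c0
             + 3 * u * t * t * c1
             + t * t * t * p1;
    }

    public static void main(String[] args) {
        // Straight-line segment from x=0 to x=3 with evenly spaced controls.
        System.out.println(eval(0, 1, 2, 3, 0.0)); // 0.0 (starts at the first waypoint)
        System.out.println(eval(0, 1, 2, 3, 1.0)); // 3.0 (ends at the second waypoint)
        System.out.println(eval(0, 1, 2, 3, 0.5)); // 1.5 (midpoint of a straight segment)
    }
}
```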
- **Real-Time Correction**
During path following, a simplified control loop runs[4][5]:
1. PathPlanner generates target chassis speeds
2. PhotonVision provides updated pose estimates
3. The pose estimator corrects odometry drift
4. The controller adjusts wheel velocities accordingly
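The drift-correction idea in that loop can be shown with a toy simulation: odometry accumulates a fixed error each cycle, and a vision measurement pulls the estimate back toward truth with a fixed gain. A real robot uses `SwerveDrivePoseEstimator`'s Kalman filter rather than this hand-rolled blend; all the constants here are illustrative.

```java
/**
 * Toy 1-D simulation of vision-corrected odometry: without vision, error
 * grows by driftPerCycle every loop; with the correction, it stays bounded.
 */
public class PoseCorrectionLoop {
    /** Runs the toy loop and returns the final absolute estimate error in meters. */
    public static double simulateFinalError(int cycles) {
        double truePoseX = 2.0;      // meters, robot's actual x position
        double estimateX = 2.0;      // fused estimate, starts accurate
        double driftPerCycle = 0.02; // odometry error added each loop
        double visionGain = 0.3;     // how strongly vision corrects the estimate

        for (int i = 0; i < cycles; i++) {
            estimateX += driftPerCycle;                        // odometry drift accumulates
            estimateX += visionGain * (truePoseX - estimateX); // vision pulls it back
        }
        return Math.abs(estimateX - truePoseX);
    }

    public static void main(String[] args) {
        // 50 cycles of uncorrected drift would give 1.0 m of error;
        // with correction the error converges to a small bounded value.
        System.out.println(simulateFinalError(50) < 0.05);
    }
}
```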
Key Implementation Details
- Multi-Tag Advantage: Using ≥2 visible AprilTags improves pose estimation accuracy by 40-60% compared to single-tag solutions[2][6]
- Coordinate System Alignment: Both systems must use the same field origin (typically blue alliance bottom-right corner)[2][7]
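One common way to keep both systems in a single coordinate frame is to mirror red-alliance poses onto the blue-alliance origin by flipping across the field midline. The field length below is the approximate 2024 value, and the mirroring convention is illustrative, not PathPlanner's exact implementation.

```java
/**
 * Sketch of alliance-flipping a pose so everything shares one field origin.
 * A 2-D flip would mirror y and heading differently depending on the
 * convention; this shows the x/heading mirror only.
 */
public class FieldCoordinates {
    static final double FIELD_LENGTH_METERS = 16.54; // 2024 field, approximate

    /** Mirrors an (x, headingRadians) pose across the field's midline. */
    public static double[] flipToOtherAlliance(double x, double headingRad) {
        double flippedX = FIELD_LENGTH_METERS - x;
        double flippedHeading = Math.PI - headingRad; // heading mirrors too
        return new double[] { flippedX, flippedHeading };
    }

    public static void main(String[] args) {
        double[] flipped = flipToOtherAlliance(1.5, 0.0);
        System.out.println(flipped[0]); // a pose 1.5 m from one wall is 15.04 m from the other
    }
}
```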
- Time Synchronization: Vision measurements must carry accurate capture timestamps so the pose estimator can match them to past odometry states[6]
- Path Planning:
- Use vision-corrected pose for initial localization
- Create paths with vision checkpoint regions (tag-rich areas)
- Set auto triggers based on pose zones[1][7]
Teams using this integration typically achieve <2cm positional accuracy during autonomous periods, even at full 4.5m/s swerve speeds[5][7]. Proper implementation requires careful attention to coordinate transforms and measurement timestamping to avoid cumulative errors.
Citations:
[1] https://docs.photonvision.org/en/latest/docs/integration/advancedStrategies.html
[2] https://www.chiefdelphi.com/t/photonvision-swerveposeestimator-produces-wrong-pose/430906
[3] https://github.com/PhotonVision/photonvision-docs/blob/master/source/docs/integration/advancedStrategies.rst
[4] https://docs.photonvision.org/en/latest/docs/examples/poseest.html
[5] https://samliu.dev/blog/a-deep-dive-into-swerve
[6] https://www.chiefdelphi.com/t/multi-camera-setup-and-photonvisions-pose-estimator-seeking-advice/431154
[7] https://github.com/HighlanderRobotics/Charged-Up
[8] https://docs.wpilib.org/en/stable/docs/software/advanced-controls/state-space/state-space-pose-estimators.html
Answer from Perplexity: pplx.ai/share