Vision - team1868/1868wiki GitHub Wiki

2022

As of 2022, code is run on a Limelight 2+ using a Python pipeline and custom Python + OpenCV scripts.

Vision documentation / help:

The Limelight's default variables are sent over NetworkTables, along with anything we want to send in the llpython array (the function in the Python pipeline returns llpython). Sending information from the robot to the Limelight uses the llrobot array, also sent over NetworkTables.
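A minimal sketch of this round trip, assuming the Limelight Python-snippet convention of a `runPipeline(image, llrobot)` entry point that returns the largest contour, the (possibly annotated) image, and the llpython array:

```python
import numpy as np

def runPipeline(image, llrobot):
    """Called by the Limelight once per frame. `llrobot` arrives from
    the robot over NetworkTables; the returned `llpython` array is
    published back to the robot the same way."""
    # Echo the first robot-sent value back, plus the frame dimensions
    # (the layout of llpython is up to us; this one is illustrative).
    first = llrobot[0] if len(llrobot) > 0 else 0
    llpython = [first, image.shape[1], image.shape[0]]  # [echo, width, height]
    # The Limelight expects (largestContour, image, llpython) back.
    return np.array([]), image, llpython
```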

Use an HSV threshold to pick up only the reflected green, then identify the target from there using area and clustering parameters.

Limelight reads images as an OpenCV mat, which we can draw on and read pixel information from.

The default pipeline uses HSV thresholding and built-in clustering and returns tx and ty, the horizontal and vertical offsets (in degrees) of the detected target from the crosshair, which defaults to the image center, (160, 120), with the origin in the top left and positive down / right. There's no Python / coding involved in this.
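The pixel-to-angle conversion behind tx/ty can be sketched with a pinhole-camera model; the 59.6° × 49.7° field of view below is assumed for a Limelight 2+ (check the datasheet):

```python
import math

# Image size and FOV assumed for a Limelight 2+.
WIDTH, HEIGHT = 320, 240
FOV_X, FOV_Y = 59.6, 49.7  # degrees

def pixel_to_angles(px, py):
    """Convert a pixel coordinate (origin top-left, +x right, +y down)
    into angular offsets from the crosshair at the image center (160, 120)."""
    # Normalized offsets in [-1, 1] from the center.
    nx = (px - WIDTH / 2) / (WIDTH / 2)
    ny = (py - HEIGHT / 2) / (HEIGHT / 2)
    # Pinhole model: half the image width spans half the horizontal FOV.
    half_vpw = math.tan(math.radians(FOV_X / 2))
    half_vph = math.tan(math.radians(FOV_Y / 2))
    tx = math.degrees(math.atan2(nx * half_vpw, 1.0))
    ty = math.degrees(math.atan2(-ny * half_vph, 1.0))  # positive = above crosshair
    return tx, ty
```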

PhotonVision is essentially an improved version of the default pipeline: it takes care of HSV thresholding, area (and aspect-ratio) filtering, clustering, etc. On the robot side, we can also import PhotonLib, which has built-in distance-calculation functions. It also has built-in LED support.
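The distance calculation those helpers perform is standard fixed-camera trigonometry; a plain-Python equivalent (argument names are illustrative, not the PhotonLib API):

```python
import math

def distance_to_target(camera_height_m, target_height_m,
                       camera_pitch_deg, target_pitch_deg):
    """Horizontal distance to a target at a known height, given the
    camera's mounting height and upward pitch plus the target's vertical
    angle in the image (e.g. PhotonVision's pitch or Limelight's ty)."""
    total_angle = math.radians(camera_pitch_deg + target_pitch_deg)
    # Height difference over the tangent of the combined angle.
    return (target_height_m - camera_height_m) / math.tan(total_angle)
```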

PhotonVision documentation:

Vision + localization example code:

2023

Code currently runs on two OPis (Orange Pis), each with an Arducam, running PhotonVision. PhotonVision checks for AprilTags and calculates distance, returning a pose that is added to the pose estimator. The most recent calibration configs are in the frc2023 repo.

Vision / PhotonVision documentation:
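The tag-to-estimator flow can be sketched as below. The toy estimator and its fixed trust weight stand in for WPILib's pose estimator (which exposes an `addVisionMeasurement`-style method); none of this is the frc2023 implementation:

```python
from dataclasses import dataclass

@dataclass
class Pose2d:
    x: float        # meters
    y: float        # meters
    heading: float  # radians

class ToyPoseEstimator:
    """Stand-in for a WPILib-style pose estimator: blends vision poses
    into the current estimate with a fixed trust weight (illustrative)."""
    def __init__(self, initial: Pose2d, vision_trust: float = 0.3):
        self.pose = initial
        self.vision_trust = vision_trust

    def add_vision_measurement(self, vision_pose: Pose2d):
        w = self.vision_trust
        self.pose = Pose2d(
            (1 - w) * self.pose.x + w * vision_pose.x,
            (1 - w) * self.pose.y + w * vision_pose.y,
            (1 - w) * self.pose.heading + w * vision_pose.heading,
        )

def update_from_camera(estimator, tag_pose_or_none):
    # PhotonVision-style flow: only fuse when a tag was actually seen.
    if tag_pose_or_none is not None:
        estimator.add_vision_measurement(tag_pose_or_none)
```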