PhotonVision

PhotonVision is the system we use for computer vision. We typically run it on a Limelight, a combined camera and coprocessor with VERY bright green LEDs, so be careful of your eyes. Alternatively, we have started using Orange Pis, which are just coprocessors, so you also need to plug in a separate camera.

We use PhotonVision to (relatively) easily set up computer vision on the robot. In the code, we use PhotonLib to interface with the processors and cameras. For more info, see the PhotonVision documentation.

Installation

To use a coprocessor, install PhotonVision on it by following the PhotonVision installation docs. For the robot code, also install the PhotonLib vendor library.

Pipelines

PhotonVision and the Limelight support multiple pipelines, which are configurations for detecting specific types of objects/images. We mainly use AprilTags with 3D tracking, but there are also pipelines for detecting colored objects, shapes, or retroreflective targets.
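
If a camera has more than one pipeline configured, the active pipeline can also be switched from robot code through PhotonLib (covered in the Coding section below). A minimal sketch, assuming a camera nicknamed "Limelight" with an AprilTag pipeline at index 0 (the nickname and index are examples, not our actual configuration):

PhotonCamera camera = new PhotonCamera("Limelight"); // nickname set in the PhotonVision UI
camera.setPipelineIndex(0); // e.g. select the AprilTag pipeline configured at index 0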

Tuning

Before use, configure camera settings, the pipeline, and the like through the browser-based UI, which can be accessed at http://photonvision.local:5800/. Make sure you calibrate the camera using PhotonVision's calibration guide.

Coding

With NetworkTables

Using NetworkTables is more efficient when you only track single targets and do not need pose estimation, such as looking for a game piece on the floor. See PhotonVision's documentation for full reference.

PhotonVision values are stored under the photonvision NetworkTable and a subtable with your camera's nickname.

Useful Values

  • double latencyMillis: The latency of the pipeline in milliseconds
  • boolean hasTarget: Whether the pipeline is detecting targets or not
  • double targetPitch: The pitch of the target in degrees (positive up: a target above the camera's center returns a positive angle)
  • double targetYaw: The yaw of the target in degrees (positive right: a target to the right of the camera's center returns a positive angle)
  • double[] targetPose: The pose of the target relative to the robot (as a list of doubles: x, y, z, qw, qx, qy, qz)

Example to Print Yaw When Target Found

// Get the subtable for the camera nicknamed "Note Detection Limelight" in the PhotonVision UI
NetworkTableInstance inst = NetworkTableInstance.getDefault();
NetworkTable noteVisionTable = inst.getTable("photonvision/Note Detection Limelight");

// Subscribe to the yaw and has-target entries; the arguments are the defaults returned until data arrives
DoubleTopic noteYawTopic = noteVisionTable.getDoubleTopic("targetYaw");
BooleanTopic noteHasTargetTopic = noteVisionTable.getBooleanTopic("hasTarget");
DoubleSubscriber yawSubscriber = noteYawTopic.subscribe(0.0);
BooleanSubscriber hasTargetSubscriber = noteHasTargetTopic.subscribe(false);

// Print the yaw only while the pipeline reports a target
if (hasTargetSubscriber.get()) {
  System.out.println(yawSubscriber.get());
} else {
  System.out.println("No target found");
}
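
The targetPose array from the same subtable can be decoded into WPILib geometry classes. A minimal sketch, assuming the noteVisionTable from the example above and the element order listed under Useful Values:

// Uses edu.wpi.first.math.geometry.{Pose3d, Translation3d, Rotation3d, Quaternion}
DoubleArraySubscriber poseSubscriber =
    noteVisionTable.getDoubleArrayTopic("targetPose").subscribe(new double[0]);

double[] p = poseSubscriber.get();
if (p.length == 7) {
  Pose3d targetPose = new Pose3d(
      new Translation3d(p[0], p[1], p[2]),                     // x, y, z in meters
      new Rotation3d(new Quaternion(p[3], p[4], p[5], p[6]))); // qw, qx, qy, qz
}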

With PhotonLib

PhotonLib is better for more complex uses such as pose estimation. Use the PhotonLib documentation for reference.

Getting Basic Info

To create a camera:

// Change this to match the name of your camera set in the UI
PhotonCamera camera = new PhotonCamera("Camera Name");

Then, to get the result from the pipeline:

var result = camera.getLatestResult();

If you need to see if there are any targets:

boolean hasTargets = result.hasTargets();

Then, to get the best-tracked target:

PhotonTrackedTarget target = result.getBestTarget();
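
From that target you can read its angles, area, AprilTag ID, and the camera-to-target transform. A minimal sketch using PhotonTrackedTarget getters (check the PhotonLib API docs for your version):

double yaw = target.getYaw();       // degrees, positive right
double pitch = target.getPitch();   // degrees, positive up
double area = target.getArea();     // percent of the image
int tagId = target.getFiducialId(); // AprilTag ID, or -1 for non-fiducial targets
Transform3d cameraToTarget = target.getBestCameraToTarget(); // 3D transform from the camera to the target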

Getting Field-Relative Pose

For tracking position, you need the layout of the AprilTags on the field, which we load from a file bundled with WPILib like so (note that loadFromResource throws IOException, so wrap it in a try/catch):

AprilTagFieldLayout aprilTagFieldLayout = AprilTagFieldLayout.loadFromResource(AprilTagFields.k2023ChargedUp.m_resourceFile);

Next, you need the robot to camera transform, which you define in Constants.java like this:

Transform3d robotToCam = new Transform3d(new Translation3d(0.5, 0.0, 0.5), new Rotation3d(0,0,0));

This describes a camera mounted half a meter forward of the robot's center and half a meter up, facing forward.

With these, you can create a PhotonPoseEstimator, which will estimate robot pose based on the AprilTags:

PhotonPoseEstimator photonPoseEstimator = new PhotonPoseEstimator(aprilTagFieldLayout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR, camera, robotToCam);

To get the robot position, then, use:

Optional<EstimatedRobotPose> estimatedPose = photonPoseEstimator.update(); // Returns the estimated pose with a timestamp, if targets were seen
if (estimatedPose.isPresent()) {
  Pose3d pose = estimatedPose.get().estimatedPose; // Get the Pose3d
}

We then fuse the vision pose estimate(s) from the Limelight(s) with the swerve drive odometry to get a final combined pose.
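
A minimal sketch of that fusion, assuming the estimatedPose from above and a WPILib SwerveDrivePoseEstimator named poseEstimator (a placeholder name) that is already updated elsewhere with gyro and swerve module odometry:

if (estimatedPose.isPresent()) {
  EstimatedRobotPose visionPose = estimatedPose.get();
  poseEstimator.addVisionMeasurement(
      visionPose.estimatedPose.toPose2d(), // project the 3D vision pose to 2D
      visionPose.timestampSeconds);        // capture timestamp for latency compensation
}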