Vision

Vision is handled by the Limelight cameras. Each camera has its own onboard processor and can do most of the processing by itself. This includes:

  • Detecting AprilTags, including their identifier, distance, angle, and orientation.
  • Pose (position on the field) estimation, based on AprilTags.
  • Object detection.
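
For example, here's a minimal sketch of reading the basic targeting data over NetworkTables, using the standard Limelight keys (tv, tid, tx, ty). The table name "limelight" is the default and will differ if you've renamed the camera:

    import edu.wpi.first.networktables.NetworkTable;
    import edu.wpi.first.networktables.NetworkTableInstance;

    public class VisionReader {
      // "limelight" is the default table name; it changes if the camera is renamed.
      private final NetworkTable limelight =
          NetworkTableInstance.getDefault().getTable("limelight");

      /** True if the camera currently sees a valid target. */
      public boolean hasTarget() {
        return limelight.getEntry("tv").getDouble(0) == 1;
      }

      /** ID of the primary AprilTag in view (or -1 when none is visible). */
      public long tagId() {
        return (long) limelight.getEntry("tid").getDouble(-1);
      }

      /** Horizontal and vertical offsets to the target, in degrees. */
      public double tx() { return limelight.getEntry("tx").getDouble(0); }
      public double ty() { return limelight.getEntry("ty").getDouble(0); }
    }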

Note: see "Troubleshooting" below for some tips if stuff goes wrong.

Pose (position on field)

For autonomy it's useful to know where the robot is at any given moment. This is called "pose" estimation. While you can get a pose estimate from the Swerve Drive's own pose estimation, that calculation is based on dead reckoning, i.e., it starts from a known position and then updates its guess based on how the wheels have moved. Even if you set the initial position properly, wheel slip and drift mean the estimate gets less accurate as time goes on.

The Limelight camera can calculate your X, Y (and Z) position on the field, provided that it knows the exact location of all the AprilTags it might see. This information comes from FRC in a JSON file that should be linked here. You'll need to provide your own tag positions if you're building a test field with a different setup.

The good news is that the Limelight camera will do most of the work for you. You can get this information from the Limelight NetworkTables (see the Limelight APIs for details), particularly from the `botpose` array. It comes in several variants, each centered on a different origin on the field, and it also carries information about the number of tags currently in view.
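
For example, here's a minimal sketch of reading `botpose_wpiblue` (the blue-alliance-origin variant) and turning it into a WPILib Pose2d. The layout of the first six entries, [x, y, z, roll, pitch, yaw] in meters and degrees, matches recent Limelight docs, but verify it for your firmware version:

    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.math.geometry.Rotation2d;
    import edu.wpi.first.networktables.NetworkTableInstance;

    public class LimelightPose {
      /** Reads the field-relative pose estimate (blue-alliance origin) from the camera. */
      public static Pose2d readBotPose() {
        // "botpose" and "botpose_wpired" are the other origin variants.
        double[] p = NetworkTableInstance.getDefault()
            .getTable("limelight")
            .getEntry("botpose_wpiblue")
            .getDoubleArray(new double[6]);
        // Indices 0-5: x, y, z (meters), then roll, pitch, yaw (degrees).
        return new Pose2d(p[0], p[1], Rotation2d.fromDegrees(p[5]));
      }
    }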

The bad news is that we (currently) don't know how confident the camera is about its results. We also don't know how to combine the estimates from two different cameras into a more reliable result than either camera gives alone. These are all exciting open questions!

Combining the vision pose estimate with Swerve Drive

The Swerve Drive libraries (both YAGSL and the CTRE version) do dead-reckoning pose estimation automatically (we think!). There is also an API call in the SwerveDrive module that lets you add a "vision measurement" taken from the Limelight cameras. This call is described here.

    // Ideally, pass the time the image was captured (now minus camera latency), not the current time.
    m_poseEstimator.addVisionMeasurement(visionMeasurement2d, Timer.getFPGATimestamp());
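
Here's a hedged sketch of the full loop, reading botpose as in the sketch above and back-dating the timestamp by the camera's reported latency. It assumes recent firmware where the botpose array also carries the total latency in milliseconds at index 6 and the visible-tag count at index 7; check those indices against your firmware's docs before trusting them:

    // Inside the drive subsystem's periodic(); same imports as the sketch above,
    // plus edu.wpi.first.wpilibj.Timer.
    double[] p = NetworkTableInstance.getDefault()
        .getTable("limelight")
        .getEntry("botpose_wpiblue")
        .getDoubleArray(new double[11]);

    if (p.length >= 8 && (int) p[7] > 0) {  // only trust it when at least one tag is visible
      Pose2d visionPose = new Pose2d(p[0], p[1], Rotation2d.fromDegrees(p[5]));
      // The image was captured in the past; back-date the timestamp by the latency.
      double captureTime = Timer.getFPGATimestamp() - p[6] / 1000.0;
      m_poseEstimator.addVisionMeasurement(visionPose, captureTime);
    }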

Field maps

We've tested this and it seems to work. HOWEVER, there is an important caveat: the robot needs to have a map of the field that tells it where the AprilTags are. There is a tool for building this map. It outputs a .fmap file, a JSON file that gives the location of each tag on the field.

Moving to the location of a tag

Sometimes you want to implement a command that tells the robot to drive directly to the location of an AprilTag. We've tested this in the 2025 code and it works, but it still needs to be documented here. Rough idea (a code sketch follows the list):

  1. You can ask the Limelight library for the pose of a target tag (i.e., the tag it currently sees). You need to ask for this in robot coordinates.
  2. These are measured with the robot at 0,0 (or 0,0,0 if you're in 3D). See this page for a detailed explanation of the different coordinate systems and exactly what they mean.
  3. You can then tell PathPlanner to dynamically drive to that location. This is in the code, but the exact command needs to be documented.
  4. The robot will drive there. Often it will crash straight into the tag, so you have to be a little careful about your configurations and offsets.
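
Here's a sketch of steps 1-4 under some loudly flagged assumptions: it reads targetpose_robotspace, assumes x = forward and y = left in meters with yaw in degrees (see Troubleshooting below before trusting that), backs off half a meter so the robot stops in front of the tag instead of driving into it, and hands the field-relative goal to PathPlanner's AutoBuilder.pathfindToPose. It assumes a recent PathPlannerLib with AutoBuilder already configured; the 0.5 m offset and the constraints are placeholder numbers:

    import com.pathplanner.lib.auto.AutoBuilder;
    import com.pathplanner.lib.path.PathConstraints;
    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.math.geometry.Rotation2d;
    import edu.wpi.first.math.geometry.Transform2d;
    import edu.wpi.first.math.geometry.Translation2d;
    import edu.wpi.first.math.util.Units;
    import edu.wpi.first.networktables.NetworkTableInstance;
    import edu.wpi.first.wpilibj2.command.Command;

    public class DriveToTag {
      /** Builds a command that pathfinds to a spot just in front of the tag in view. */
      public static Command driveToVisibleTag(Pose2d currentRobotPose) {
        // targetpose_robotspace: tag pose relative to the robot, [x, y, z, roll, pitch, yaw].
        // Assumes a tag is actually in view; gate on the "tv" key in real code.
        double[] t = NetworkTableInstance.getDefault()
            .getTable("limelight")
            .getEntry("targetpose_robotspace")
            .getDoubleArray(new double[6]);

        // Back off 0.5 m along the forward axis so we stop in front of the tag (step 4).
        // The goal heading ends up rotated by the tag's reported yaw; adjust to taste.
        Transform2d robotToGoal = new Transform2d(
            new Translation2d(t[0] - 0.5, t[1]),
            Rotation2d.fromDegrees(t[5]));

        // Convert the robot-relative goal to field coordinates via the current pose estimate.
        Pose2d goal = currentRobotPose.transformBy(robotToGoal);

        // Placeholder constraints; tune for your drivetrain.
        PathConstraints constraints = new PathConstraints(
            2.0, 2.0, Units.degreesToRadians(360), Units.degreesToRadians(540));

        return AutoBuilder.pathfindToPose(goal, constraints);
      }
    }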

Configuring the Limelight camera

The camera has a bunch of internal settings that need to be configured. One of the most important of these is the location of the camera on the robot chassis. There is a configurator tool that sets this. We need to document how this works.
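
One habit that helps in the meantime: keep the camera's mounting offset in code as a constant, so it stays in sync with whatever you enter in the configurator. A sketch with made-up mounting numbers (verify WPILib's sign conventions for the rotation):

    import edu.wpi.first.math.geometry.Rotation3d;
    import edu.wpi.first.math.geometry.Transform3d;
    import edu.wpi.first.math.geometry.Translation3d;
    import edu.wpi.first.math.util.Units;

    public final class VisionConstants {
      // Hypothetical mounting: 0.30 m forward of robot center, 0.20 m up,
      // tilted 15 degrees (Rotation3d takes roll, pitch, yaw in radians).
      public static final Transform3d ROBOT_TO_CAMERA = new Transform3d(
          new Translation3d(0.30, 0.0, 0.20),
          new Rotation3d(0.0, Units.degreesToRadians(15.0), 0.0));
    }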

Troubleshooting

We ran into various issues along the way. Here are a few:

  • We bricked the Limelight cameras (two of them!) by uploading a bad .fmap file. It seemed like game over for the cameras, but it turned out they were only messed up while they could see an AprilTag: once we hid the AprilTags, they stopped being unresponsive.

  • The robot coordinates are almost always different from what the docs say. You'll get Target Coordinates when you ask for Robot Coordinates.