Drivebase PhotonVision Project Writeup (PhotonVision AprilTags)

Note: this writeup is unfinished.

Installation

Raspberry Pi installation instructions

Download the Raspberry Pi image from the PhotonVision releases page. Use BalenaEtcher to flash the image onto a microSD card, then insert the microSD card into a Raspberry Pi. Provide power and wait for the installation to complete.

Wiring

Connect the Raspberry Pi to the roboRIO for power (USB-A into the roboRIO, USB-C into the Pi). Connect the Pi to the robot's network over Ethernet, and connect the camera to the Pi over USB.

PhotonVision Dashboard

When connected to the robot's network, go to photonvision.local:5800 to reach the PhotonVision dashboard. Make sure the correct AprilTag family is selected so that the tags can be tracked.

Please note: for pose estimation to work successfully, PhotonVision must be carefully calibrated for its designated camera and resolution. See the calibration documentation (https://docs.photonvision.org/en/latest/docs/getting-started/pipeline-tuning/calibration.html).

Pose Estimation

See the PhotonVision pose estimator documentation for the full API.

Create a camera with:

  #include <photonlib/PhotonCamera.h>

  // The name must match the camera's name shown in the PhotonVision dashboard.
  photonlib::PhotonCamera camera{"USB_Camera"};

Create an AprilTagFieldLayout. For an official game, WPILib provides a file with the AprilTag layout for that game's field. For other testing, an AprilTagFieldLayout can be created from a vector of AprilTag objects, as in the example below (the measurements shown are examples).

  std::vector<frc::AprilTag> tags = {
      {586, frc::Pose3d(0_in, 0_in, 17_in, frc::Rotation3d())},
      {0, frc::Pose3d(60_in, 265.0_in, 20.5_in,
                      frc::Rotation3d(0_deg, 0_deg, -90_deg))},
      {1, frc::Pose3d(26.5_in, 253.0_in, 20.5_in,
                      frc::Rotation3d(0_deg, 0_deg, -90_deg))},
      {585, frc::Pose3d(126_in, -62_in, 17_in,
                        frc::Rotation3d(0_deg, 0_deg, 180_deg))}};

  frc::AprilTagFieldLayout aprilTags =
      frc::AprilTagFieldLayout(tags, 54_ft, 27_ft); // FRC field size is 54 by 27 feet
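
For an official game, the layout can instead be loaded from WPILib's built-in data. A minimal sketch, assuming WPILib 2023's frc::LoadAprilTagLayoutField and the Charged Up field:

  #include <frc/apriltag/AprilTagFieldLayout.h>
  #include <frc/apriltag/AprilTagFields.h>

  // Load the AprilTag poses for the 2023 Charged Up field.
  frc::AprilTagFieldLayout aprilTags =
      frc::LoadAprilTagLayoutField(frc::AprilTagField::k2023ChargedUp);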

Measure the camera's position relative to the center of the robot. If the camera is on the front of the robot and facing forward, the positive X direction should be used for the Transform3d, with zero rotation.

  // Camera mounted 0.3048 m (1 ft) in front of the robot's center, facing forward.
  frc::Transform3d robotToCam =
      frc::Transform3d(frc::Translation3d(0.3048_m, 0_m, 0.0_m),
                       frc::Rotation3d(0_rad, 0_rad, 0_rad));

Finally, create a PhotonPoseEstimator. Pose strategies other than CLOSEST_TO_REFERENCE_POSE are described in the PhotonVision documentation; note that CLOSEST_TO_REFERENCE_POSE requires the reference pose to be updated regularly, which is covered in the next section.

  photonlib::PhotonPoseEstimator estimator = photonlib::PhotonPoseEstimator(
      aprilTags, photonlib::CLOSEST_TO_REFERENCE_POSE,
      photonlib::PhotonCamera("USB_Camera"), robotToCam);

Using the PhotonPoseEstimator in the Drivebase

Read the WPILib pose estimator documentation. A pose estimator combines vision measurements from the PhotonPoseEstimator with measurements from the encoders and gyro, so the robot's position is tracked even when it can't see an AprilTag. Example code below:

  frc::DifferentialDriveKinematics m_kinematics{
    units::meter_t{RobotPhysics::TRACK_WIDTH_INCHES_GLADYS}};

  frc::DifferentialDrivePoseEstimator m_poseEstimator{
    m_kinematics, m_gyro.GetRotation2d(), GetLeftDistance(),
    GetRightDistance(), frc::Pose2d{}};

  PhotonLibVision m_PhotonVision;
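
GetLeftDistance() and GetRightDistance() are team helpers that report how far each side of the drivebase has traveled. A minimal sketch, assuming an encoder whose position conversion factor has been set so that GetPosition() reports meters (m_leftEncoder here is a hypothetical member):

units::meter_t Drivebase::GetLeftDistance() {
  // GetPosition() is assumed to already return meters traveled.
  return units::meter_t{m_leftEncoder.GetPosition()};
}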

The DifferentialDrivePoseEstimator should be updated with regular odometry measurements:

  m_poseEstimator.Update(m_gyro.GetRotation2d(), GetLeftDistance(),
                         GetRightDistance());

Vision measurements should also be folded in regularly: pass the current estimate to the vision code as a reference pose, and add any resulting vision measurement back into m_poseEstimator. Do this in the periodic function:

  auto result = m_PhotonVision.UpdateFieldPosition(
      m_poseEstimator.GetEstimatedPosition());
  if (result) {
    m_poseEstimator.AddVisionMeasurement(
        result.value().estimatedPose.ToPose2d(), result.value().timestamp);
  }

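UpdateFieldPosition() is a team helper in the PhotonLibVision class. A minimal sketch of what it might do, assuming the 2023 photonlib C++ API (SetReferencePose and Update) and a hypothetical member m_estimator holding the PhotonPoseEstimator created earlier:

std::optional<photonlib::EstimatedRobotPose>
PhotonLibVision::UpdateFieldPosition(frc::Pose2d estimatedPose) {
  // CLOSEST_TO_REFERENCE_POSE needs the current best estimate as its reference.
  m_estimator.SetReferencePose(frc::Pose3d(estimatedPose));
  return m_estimator.Update();
}
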
This code puts basic debug output on the SmartDashboard:

  frc::Pose2d robotPose = GetEstimatedPose();
  frc::SmartDashboard::PutNumber("X position", robotPose.X().value());
  frc::SmartDashboard::PutNumber("Y position", robotPose.Y().value());
  frc::SmartDashboard::PutNumber("Yaw", robotPose.Rotation().Degrees().value());

Other AprilTag Uses

Alternative Method of Maneuvering the Robot via AprilTag Orientation

The camera uses its known position and angle, together with the observed orientation of the AprilTag, to determine the robot's position; from there, the robot can turn to face the tag's geometric center and drive to it. For more detail, see the PhotonVision examples (https://docs.photonvision.org/en/latest/docs/examples/index.html).

Here is the example code; this belongs in the PhotonVision.cpp file:

bool PhotonLibVision::AprilTagTargetIdentified(int IDWantedTarget) {
  photonlib::PhotonPipelineResult result = camera.GetLatestResult();
  if (result.HasTargets()) {
    std::span<const photonlib::PhotonTrackedTarget> targets =
        result.GetTargets();
    for (const photonlib::PhotonTrackedTarget& target : targets) {
      int targetID = target.GetFiducialId();
      if (targetID == IDWantedTarget) {
        // std::cout << "ID " << IDWantedTarget << "Acquired" << std::endl;
        return true;
      }
    }
  }
  return false;
}

std::optional<photonlib::PhotonTrackedTarget>
PhotonLibVision::GetIdentifiedAprilTarget(int IDWantedTarget) {
  photonlib::PhotonPipelineResult result = camera.GetLatestResult();
  if (result.HasTargets()) {
    for (const photonlib::PhotonTrackedTarget& target : result.GetTargets()) {
      int targetID = target.GetFiducialId();
      if (targetID == IDWantedTarget) {
        return target;
      }
    }
  }
  return std::nullopt;
}


bool PhotonLibVision::CalculateDistanceAndAnglesToTarget(
    int idWantedTarget, units::meter_t& distance, units::degree_t& pitchTarget,
    units::degree_t& yawTarget) {
  auto possibleTarget = GetIdentifiedAprilTarget(idWantedTarget);
  if (!possibleTarget.has_value()) {
    return false;
  }

  auto target = possibleTarget.value();

  const units::degree_t pitchToTarget(target.GetPitch());
  const units::degree_t yawToTarget(target.GetYaw());
  units::meter_t range = photonlib::PhotonUtils::CalculateDistanceToTarget(
      PhotonVisionConstants::CameraAndTargetValues::CAMERA_HEIGHT,
      PhotonVisionConstants::CameraAndTargetValues::TARGET_HEIGHT,
      PhotonVisionConstants::CameraAndTargetValues::CAMERA_PITCH,
      pitchToTarget);

  distance = range;
  pitchTarget = pitchToTarget;
  yawTarget = yawToTarget;
  return true;
}
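
For reference, CalculateDistanceToTarget computes the horizontal distance from the height difference and the combined pitch: distance = (TARGET_HEIGHT - CAMERA_HEIGHT) / tan(CAMERA_PITCH + pitchToTarget), so accurate height and pitch constants matter.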

The constants are measured values; for reference, see the Constants.h file.
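
For illustration, those constants might look like the sketch below; every value shown is a placeholder to be replaced with measurements from the actual robot and field setup.

#include <units/angle.h>
#include <units/length.h>

namespace PhotonVisionConstants::CameraAndTargetValues {
// Placeholder values; measure these on the real robot.
constexpr units::meter_t CAMERA_HEIGHT = 19.5_in;     // lens height off the floor
constexpr units::meter_t TARGET_HEIGHT = 17_in;       // AprilTag center height
constexpr units::radian_t CAMERA_PITCH = 0_deg;       // camera tilt up from horizontal
constexpr units::meter_t GOAL_RANGE_METERS = 0.75_m;  // desired stopping distance
}  // namespace PhotonVisionConstants::CameraAndTargetValues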

An example command that uses these functions to drive to an AprilTag can be found in the 2023 Charged Up project, in AprilTagDriveToTarget.cpp.

The main portion of the code, which runs from the command's Initialize() and Execute() methods, is here:

void AprilTagDriveToTarget::UpdateDrivingParameters() {
  bool targetFound =
      m_photonLibVision->AprilTagTargetIdentified(m_targetToDriveTo);
  if (targetFound) {
    m_photonLibVision->CalculateDistanceAndAnglesToTarget(
        m_targetToDriveTo, m_distance, m_pitchTarget, m_yawTarget);
    double forwardSpeed = -forwardController.Calculate(
        std::abs(m_distance.value()),
        PhotonVisionConstants::CameraAndTargetValues::GOAL_RANGE_METERS
            .value());

    // Add a small buffer speed so the robot can overcome static friction.
    if (forwardSpeed > 0) {
      forwardSpeed = forwardSpeed + 0.2;
    } else {
      forwardSpeed = forwardSpeed - 0.2;
    }

    // Turn until the yaw to the target reaches the 0-degree setpoint.
    double rotationSpeed = turnController.Calculate(m_yawTarget.value(), 0);

    m_drivebase->ArcadeDrive(forwardSpeed, rotationSpeed);
    m_drivebase->SetBrakingMode(true);
  }
}

Essentially, it uses the functions in the PhotonVision.cpp file and adds buffer zones to the controller output to make sure the robot can overcome static friction. The "forwardController" and "turnController" are basic PID controllers declared in the command's .h file; their values were manually tuned and can be found in the Constants.h file.
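
The declarations in the .h file might look like this sketch (the gains shown are placeholders, not the team's tuned values):

#include <frc/controller/PIDController.h>

// Placeholder gains; the tuned values live in Constants.h.
frc::PIDController forwardController{0.1, 0.0, 0.0};
frc::PIDController turnController{0.1, 0.0, 0.0};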

IsFinished() is simply a check for whether the robot is within its error bounds:

bool AprilTagDriveToTarget::IsFinished() {
  // Done once within the goal range and within 2 degrees of zero yaw.
  return std::abs(m_distance.value()) <
             PhotonVisionConstants::CameraAndTargetValues::GOAL_RANGE_METERS
                 .value() &&
         m_yawTarget > -2_deg && m_yawTarget < 2_deg;
}

Trajectories

Brief note: reference the "Drivebase Test + PhotonVision" project under the experimental 2023 directory, then open the TrajectoryGenerator.cpp file.

The first constructor is the default code for building trajectories via PathWeaver, almost a one-for-one copy from the WPILib documentation (found here https://docs.wpilib.org/en/stable/docs/software/pathplanning/trajectory-tutorial/index.html and here https://docs.wpilib.org/en/stable/docs/software/pathplanning/pathweaver/integrating-robot-program.html). It does not currently use vision.
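
For reference, loading a PathWeaver JSON generally looks like this sketch, following the WPILib docs linked above ("paths/Example.wpilib.json" is a placeholder file name):

#include <frc/Filesystem.h>
#include <frc/trajectory/TrajectoryUtil.h>
#include <wpi/fs.h>

fs::path deployDirectory = frc::filesystem::GetDeployDirectory();
deployDirectory = deployDirectory / "paths" / "Example.wpilib.json";
frc::Trajectory trajectory =
    frc::TrajectoryUtil::FromPathweaverJson(deployDirectory.string());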

The second constructor can create any trajectory; in our case, it is used in tandem with the AprilTag code to drive straight lines based on the input from the AprilTags. The code that uses it can be found in the same "Drivebase Test + PhotonVision" project, in RobotContainer.cpp. Here is the code:

  frc::Pose2d startingPose = m_drivebase.GetEstimatedPose();
  units::degree_t startingAngle = startingPose.Rotation().Degrees();

  // Project the requested distance along the robot's current heading.
  units::meter_t endingXPosition =
      startingPose.X() + distance * startingPose.Rotation().Cos();
  units::meter_t endingYPosition =
      startingPose.Y() + distance * startingPose.Rotation().Sin();

  frc::TrajectoryConfig config{0.2_mps, 0.4_mps_sq};
  bool driveForward = (distance >= 0_m);

  return BuildTrajectoryUsingAprilTags(
      startingPose,
      frc::Pose2d(endingXPosition, endingYPosition, startingAngle),
      driveForward, &m_drivebase);

The third constructor creates a manual trajectory (i.e., not using PathWeaver), with the user providing the start point, end point, midpoints, and so on. Remember that the third constructor does not use vision, so whatever direction the robot is facing becomes the positive X direction. A sketch of that style of trajectory is below.
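
For reference, a manual trajectory built with WPILib's TrajectoryGenerator might look like this sketch (the waypoints are arbitrary examples in robot-relative coordinates):

#include <frc/trajectory/TrajectoryGenerator.h>
#include <units/acceleration.h>
#include <units/velocity.h>

frc::TrajectoryConfig config{0.2_mps, 0.4_mps_sq};
frc::Trajectory trajectory = frc::TrajectoryGenerator::GenerateTrajectory(
    frc::Pose2d{0_m, 0_m, 0_deg},      // start at the robot's current pose
    {frc::Translation2d{1_m, 0.5_m}},  // interior waypoint(s)
    frc::Pose2d{2_m, 0_m, 0_deg},      // end pose
    config);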