State Estimation
The drone has a variety of sensors it uses for state estimation. The flight controller runs its own estimators for the drone's orientation and rotation rate; they are accurate, precise, and fast, so we use them rather than writing our own. That leaves estimating the drone's linear acceleration, velocity, and position, for which the drone uses its accelerometer, laser rangefinder altimeters, and cameras.
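As a loose sketch of that split, the state the stack works with could be laid out like the struct below. This is a hypothetical type for illustration only, not something from the codebase:

```cpp
// Hypothetical sketch of the full state.  Orientation and angular
// rate come from the flight controller's estimators; the onboard
// filter fills in the linear states.
#include <array>

struct DroneState {
    // Taken from the flight controller:
    std::array<double, 4> orientation;   // quaternion (x, y, z, w)
    std::array<double, 3> angular_rate;  // rad/s

    // Estimated onboard from accelerometer, altimeters, and cameras:
    std::array<double, 3> position;      // m
    std::array<double, 3> velocity;      // m/s
    std::array<double, 3> acceleration;  // m/s^2
};
```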
Accelerometer
The accelerations are measured from the accelerometer on the flight controller. For more information, take a look at the Flight Controller page.
Altimeters
The drone reads data from Lidar-Lite v2 and VL53L0X laser rangefinders. For each, it applies a transformation based on the drone's orientation, then publishes the raw (untransformed) distance on /altimeter_reading and an estimated pose with covariance on /altimeter_pose (only the z axis is relevant). The code for all of this is in the src directory of the iarc7_sensors package, mainly in altimeter.cpp and AltimeterFilter.cpp.
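To illustrate the orientation-based transformation, here is a minimal sketch (not the actual altimeter.cpp code) of the tilt correction for a downward-facing rangefinder, assuming a flat floor under the drone:

```cpp
// Minimal sketch of correcting a downward-facing rangefinder reading
// for the drone's tilt; illustrative, not the iarc7_sensors code.
#include <cmath>

// Height above a flat floor given the measured beam range and the
// drone's roll and pitch in radians.  With the beam pointing straight
// down in the body frame, rotating it into the world frame scales its
// vertical component by cos(roll) * cos(pitch).
double altitudeFromRange(double range, double roll, double pitch) {
    return range * std::cos(roll) * std::cos(pitch);
}
```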
Cameras
Currently only the bottom-facing camera is used for localization. Its images are processed by both a Lucas-Kanade sparse optical flow algorithm that produces horizontal velocity estimates and a custom grid-finding algorithm that produces absolute horizontal position estimates and publishes them on /camera_localized_pose.
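For a feel of the optical flow half, here is a hedged sketch of estimating horizontal velocity from pyramidal Lucas-Kanade flow with OpenCV. The function, parameter values, and pinhole-model scaling are illustrative assumptions, not the actual node:

```cpp
// Sketch: mean horizontal velocity from Lucas-Kanade sparse flow.
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

// Mean velocity (m/s) in the camera frame from two consecutive
// grayscale frames, the time between them, the height above the
// floor, and the camera's focal length in pixels.
cv::Point2f flowVelocity(const cv::Mat& prev, const cv::Mat& curr,
                         double dt, double height, double focal_px) {
    // Pick trackable corners in the previous frame.
    std::vector<cv::Point2f> pts_prev, pts_curr;
    cv::goodFeaturesToTrack(prev, pts_prev, /*maxCorners=*/200,
                            /*qualityLevel=*/0.01, /*minDistance=*/10);
    if (pts_prev.empty() || dt <= 0.0) return cv::Point2f(0.f, 0.f);

    // Track them into the current frame with pyramidal Lucas-Kanade.
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, curr, pts_prev, pts_curr, status, err);

    // Average the pixel displacement of successfully tracked points.
    cv::Point2f mean_flow(0.f, 0.f);
    int tracked = 0;
    for (size_t i = 0; i < pts_prev.size(); ++i) {
        if (status[i]) {
            mean_flow += pts_curr[i] - pts_prev[i];
            ++tracked;
        }
    }
    if (tracked == 0) return cv::Point2f(0.f, 0.f);
    mean_flow *= 1.0f / static_cast<float>(tracked);

    // Under a pinhole model with the camera pointing straight down, a
    // pixel displacement d over dt at height h maps to a velocity of
    // d * h / (focal_px * dt).
    return mean_flow * static_cast<float>(height / (focal_px * dt));
}
```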
Extended Kalman Filter
All of these measurements are fed into an Extended Kalman Filter, which cross-checks our sensors against each other, weights each measurement by how likely it is to be correct, and fuses them into estimates of our acceleration, velocity, and position along the x, y, and z axes. The implementation we use is our own fork of the robot_localization package.
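To make the predict/update cycle concrete, here is a toy one-dimensional Kalman filter. The real filter is the robot_localization fork running a much larger 3-D state, so everything below, including the numbers, is purely illustrative:

```cpp
// Toy 1-D Kalman filter showing the predict/update cycle the EKF
// performs on every sensor measurement.
#include <cstdio>

struct Kf1D {
    double x = 0.0;  // state estimate (e.g. altitude, m)
    double p = 1.0;  // estimate variance

    // Predict: propagate the state with a velocity input and grow
    // the uncertainty by the process noise q.
    void predict(double vel, double dt, double q) {
        x += vel * dt;
        p += q;
    }

    // Update: blend in a measurement z with variance r.  The gain k
    // weights the measurement by how trustworthy it is relative to
    // the current estimate -- this is the "check the sensors against
    // each other" step.
    void update(double z, double r) {
        double k = p / (p + r);
        x += k * (z - x);
        p *= (1.0 - k);
    }
};

int main() {
    Kf1D kf;
    kf.predict(/*vel=*/0.5, /*dt=*/0.02, /*q=*/0.001);
    kf.update(/*z=*/0.012, /*r=*/0.05);  // e.g. an altimeter reading
    std::printf("x = %.4f, p = %.4f\n", kf.x, kf.p);
    return 0;
}
```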