System description
This page gives a high-level understanding of how the system works.
The system can be separated into 7 different subsystems for simplicity:
- Inputs: this subsystem includes all sensors and controls such as gamepads and joysticks
- Control: this subsystem manages all commands sent to actuators
- Odometry: this subsystem manages the robot's localization and movements
- Autonomous system: this subsystem manages the autonomous driving capability of the robot
- Visualization: this subsystem manages visualization of the robot's telemetry
- Arm: this subsystem manages the manipulation of the robot's arm
- Communication: this subsystem manages communication with the base station
A Logitech gamepad is used for manually piloting the rover. The ROS joy package is used to read the gamepad's button and axis events and publish them to other nodes.
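As a rough illustration of how gamepad input turns into motion commands, here is a minimal sketch of a joy-to-Twist node. This is not the actual teleop_joystick code; the topic names, axis indices and speed scales are assumptions.

```python
#!/usr/bin/env python
# Minimal sketch of a joy -> Twist teleop node (hypothetical topics, axes and scales).
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

def joy_callback(joy_msg):
    twist = Twist()
    # Axis assignments depend on the gamepad mapping; these indices are assumptions.
    twist.linear.x = joy_msg.axes[1] * 1.0   # forward/backward speed (m/s)
    twist.angular.z = joy_msg.axes[0] * 1.5  # rotation speed (rad/s)
    cmd_pub.publish(twist)

rospy.init_node("teleop_sketch")
cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rospy.Subscriber("joy", Joy, joy_callback)
rospy.spin()
```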
USB cameras allow the pilot to see what is around the robot. They are also used to detect and read markers such as ArUco markers and QR codes. The camera on the PTU (pan and tilt unit) will also be used to generate panoramas. The ROS package used to stream the video feeds is video_stream_opencv.
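For context, marker detection on a camera frame can be done with OpenCV's aruco module. The snippet below is a standalone sketch, not the rover's detection node: the dictionary choice and the image source are assumptions, and the aruco API differs slightly across OpenCV versions (it requires the contrib modules).

```python
# Illustrative ArUco detection on a single camera frame with OpenCV (opencv-contrib).
import cv2

frame = cv2.imread("frame.png")                       # stand-in for a frame from the video stream
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)   # dictionary is an assumption
corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    print("Detected marker ids:", ids.flatten().tolist())
```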
The Intel RealSense D455 is a depth camera, which means it can see in 3D. This is useful because it can be used to detect obstacles, which is essential for autonomous driving. It also contains an IMU, which can be useful to get rotational information and linear acceleration. The ROS package realsense2 is used to get the data from the camera and publish it to other nodes.
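A node that consumes the RealSense data could look like the hedged sketch below. The exact topic names depend on how the driver is launched, so the ones used here are assumptions.

```python
# Hedged sketch of reading the RealSense point cloud and IMU data in a ROS node.
import rospy
from sensor_msgs.msg import PointCloud2, Imu

def cloud_cb(cloud):
    rospy.loginfo_throttle(5, "point cloud: %d x %d points" % (cloud.height, cloud.width))

def imu_cb(imu):
    rospy.loginfo_throttle(5, "linear acceleration x: %.2f" % imu.linear_acceleration.x)

rospy.init_node("realsense_listener")
rospy.Subscriber("/camera/depth/color/points", PointCloud2, cloud_cb)  # topic names are assumptions
rospy.Subscriber("/camera/imu", Imu, imu_cb)
rospy.spin()
```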
There is a wheel encoder on each of the four wheels of the rover. Wheel encoders measure the rotation of each wheel. This information is important for the odometry subsystem, as the amount of rotation can be converted to linear movement if you know the wheel's diameter. There is no node that publishes the encoder data directly. It is instead fetched by our own package differential_drive, which uses it as part of the odometry subsystem.
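The tick-to-distance conversion itself is simple arithmetic, as in the small example below. The encoder resolution and wheel diameter are made-up values, not the rover's real ones.

```python
import math

# Illustrative conversion from encoder ticks to linear distance travelled by a wheel.
TICKS_PER_REV = 2048          # encoder resolution (assumption)
WHEEL_DIAMETER = 0.20         # wheel diameter in metres (assumption)

def ticks_to_distance(ticks):
    revolutions = ticks / float(TICKS_PER_REV)
    return revolutions * math.pi * WHEEL_DIAMETER   # revolutions * circumference

print(ticks_to_distance(1024))   # half a revolution -> ~0.314 m
```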
An IMU (inertial measurement unit) is a device containing sensors such as a magnetometer, a gyroscope and an accelerometer. It can be used to measure angular position, angular velocity and linear acceleration. All of these measurements can be used to estimate the robot's odometry. The robot's IMU is the Adafruit BNO055 9-DOF Absolute Orientation IMU.
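As a small illustration of using the orientation for odometry, the sketch below extracts the yaw (heading) angle from a sensor_msgs/Imu message. The "imu" topic name and node name are assumptions.

```python
# Hedged sketch: extract the yaw angle from an IMU orientation quaternion.
import rospy
from sensor_msgs.msg import Imu
from tf.transformations import euler_from_quaternion

def imu_cb(msg):
    q = msg.orientation
    _roll, _pitch, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    rospy.loginfo_throttle(1, "yaw: %.2f rad" % yaw)

rospy.init_node("imu_yaw_example")
rospy.Subscriber("imu", Imu, imu_cb)
rospy.spin()
```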
The GPS makes it possible for the robot to localize itself in the world, which is essential for travelling to a WGS84 coordinate. The GPS used is the USGLOBALSAT BU-353-S4 and the ROS package used to publish its data is nmea_navsat_driver.
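The driver publishes standard sensor_msgs/NavSatFix messages, so consuming the GPS data looks roughly like the sketch below (the "fix" topic name is the usual default, assumed here).

```python
# Minimal sketch of reading the GPS fix published by nmea_navsat_driver.
import rospy
from sensor_msgs.msg import NavSatFix

def fix_cb(msg):
    rospy.loginfo("lat: %.6f, lon: %.6f", msg.latitude, msg.longitude)

rospy.init_node("gps_listener")
rospy.Subscriber("fix", NavSatFix, fix_cb)
rospy.spin()
```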
This subsystem manages the commands sent to the actuators, i.e. the wheels and the PTU.
First of all, our ROS node teleop_joystick subscribes to the gamepad's joy topic and publishes twist velocity commands for both the wheels and the PTU. Both of these topics are subscribed to by our ROS node cmd_mux, which also subscribes to other command sources such as the autonomous system and the GUI. Which source to output is decided based on each source's priority. The priority list is the following (highest priority first):
- manual commands (gamepad)
- GUI commands
- autonomous commands
This ensures that if the autonomous system or the GUI fails or gives bad commands, the pilot can instantly do a manual override with the gamepad. After cmd_mux publishes the commands, they are executed by our PTU controller node and our base velocity controller differential_drive.
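To make the priority logic concrete, here is a toy multiplexer in the spirit of cmd_mux (not its actual code): the newest command from the highest-priority source that is still fresh wins. Topic names, rates and the timeout are assumptions.

```python
# Illustrative priority-based command multiplexer sketch.
import rospy
from geometry_msgs.msg import Twist

PRIORITY = ["gamepad", "gui", "auto"]   # lower index = higher priority
TIMEOUT = 0.5                           # seconds before a source is considered stale
last_msgs = {}                          # source -> (receive time, Twist)

def make_cb(source):
    def cb(msg):
        last_msgs[source] = (rospy.get_time(), msg)
    return cb

rospy.init_node("cmd_mux_sketch")
for src in PRIORITY:
    rospy.Subscriber("cmd_vel_" + src, Twist, make_cb(src))   # topic names are assumptions
out_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

rate = rospy.Rate(20)
while not rospy.is_shutdown():
    now = rospy.get_time()
    for src in PRIORITY:                                      # first fresh source wins
        if src in last_msgs and now - last_msgs[src][0] < TIMEOUT:
            out_pub.publish(last_msgs[src][1])
            break
    rate.sleep()
```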
Odometry is defined as the estimation of the movement of the robot from a certain starting point (usually the initial position when the robot is turned on). Many different sensors can estimate the odometry, the simplest being wheel encoders, with which you can tell how far the robot has moved by measuring how much each wheel has rotated. However, this is not the most accurate way of estimating the odometry because the wheels will inevitably slip on the ground, causing odometry drift. Other sensors that can be used are IMUs for rotation and linear acceleration, GPS for global absolute positioning and cameras for visual odometry. All of them have their own advantages and drawbacks.
In order to get a more precise odometry estimation than any one sensor could provide, it is possible to combine multiple measurements by using what is called an Extended Kalman Filter (EKF). A great package that implements an EKF is the robot_localization package.
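The fusion principle can be shown with a toy one-dimensional Kalman filter: a motion prediction (e.g. from wheel odometry) is corrected by an absolute but noisy measurement (e.g. GPS). This is only an illustration of the idea; robot_localization does this in many dimensions with a full EKF, and all numbers below are made up.

```python
# Toy 1-D Kalman filter illustrating predict/update fusion of odometry and an absolute sensor.
def kalman_step(x, p, u, z, q=0.05, r=2.0):
    # Predict: move by the odometry increment u; uncertainty p grows by process noise q.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement z according to the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for u, z in [(1.0, 1.3), (1.0, 1.8), (1.0, 3.2)]:   # made-up odometry increments / GPS readings
    x, p = kalman_step(x, p, u, z)
    print("estimate: %.2f, variance: %.3f" % (x, p))
```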
In our case we use two instances of robot_localization's EKF node, one for local odometry and one for global odometry. For local odometry we only use continuous sensors, i.e. no GPS, which is susceptible to discrete jumps. This is done so that the filtered odometry is smooth enough to be used by the autonomous system to easily plan paths. The global odometry uses all sensors, including the GPS, so that we get an odometry estimation that will not drift over time, thanks to the GPS. That way we can easily give a GPS coordinate goal to the robot, which would be impossible with only intrinsic sensors.
However, GPS data can't be used directly by the EKF; it first has to be converted to odometry data with the navsat_transform_node. More information about integrating GPS data into the EKF can be found here.
The autonomous system is what enables the rover to drive to a destination on its own while avoiding obstacles in its path. To do this, the ROS navigation stack is used, which is made up of these packages:
- move_base http://wiki.ros.org/move_base
- Cost Map costmap_2d http://wiki.ros.org/costmap_2d
- Cost Map Obstacle Layer http://wiki.ros.org/costmap_2d/hydro/obstacles
- Cost Map Static Layer http://wiki.ros.org/costmap_2d/hydro/staticmap
- Global Planner Navfn http://wiki.ros.org/navfn
- Local Planner base_local_planner http://wiki.ros.org/base_local_planner
All configurations specific to our robot can be found in our rover_nav package. Basically, the inputs needed are the robot's local odometry message and frame to know where it is located, the point cloud from the RealSense camera to detect obstacles, and a goal to reach. The output is a velocity command for the base velocity controller.
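For reference, a goal can be sent to move_base programmatically through its actionlib interface, as in the hedged example below. The "map" frame and the goal coordinates are placeholders, not values from our configuration.

```python
# Hedged example of sending a navigation goal to move_base with actionlib.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"       # placeholder frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 5.0         # placeholder coordinates
goal.target_pose.pose.position.y = 2.0
goal.target_pose.pose.orientation.w = 1.0      # no rotation

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("navigation result state: %d", client.get_state())
```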
To understand what is going on with the robot, it is important to be able to visualize its state and its data. Visualization is much more intuitive than looking at raw numerical data, and it can also be necessary when teleoperating the robot to make sure it stays safe.
To accomplish this, we use three programs:
RViz is a 3D viewer that can visualize any sensor data as well as the robot's state. You can even use it to send goal poses to the autonomous system by simply clicking and dragging in the environment.
MapViz is similar to RViz but is limited to a 2D top-down view. It's not quite as useful and versatile as RViz, but one thing it can do that RViz can't is display tile maps such as Google Maps' satellite imagery, which is useful to easily get a sense of where the robot is located in the world.
Our GUI is used for everything that the other two can't do. It is used to visualize information specific to our rover and execute certain tasks such as taking a panorama or sending GPS position goals to the autonomous system.