# GTRI-based Sonar Notes and Description
Summary: The GTRI-based Sonar is similar to the ROS Melodic GPU Ray plugin that we present at NPS GPU Ray-based FLS. Here we port it to run on ROS Melodic with Gazebo 9 and show its usage.
The GTRI-based Sonar model at https://github.com/Field-Robotics-Lab/gtri_based_sonar is forked from the SyllogismRXS Sonar model at https://github.com/SyllogismRXS/syllo-gazebo; see "A Computationally-Efficient 2D Imaging Sonar Model for Underwater Robotics Simulations in Gazebo," Georgia Tech. This fork has been modified as follows:
- It has been ported to work with ROS Melodic and Gazebo 9.
- The algorithm for converting the point cloud from the Gazebo Ray module to a Gazebo camera image has been merged into the Gazebo module. The image is no longer calculated and advertised by a separate ROS node.
- Modules not related to modeling Sonar have been removed in order to simplify the repository.
## Example Usage
See https://github.com/Field-Robotics-Lab/DAVE/wiki/Sonar-Models#gtri for usage and an example screenshot.
## .xacro Interface
Here is the xacro macro for instantiating a named Sonar instance onto a parent link (an example instantiation is sketched below):

- Name: `gtri_based_fls`
- Parameters: `namespace`, `suffix`, `parent_link`, `topic`, `*origin`
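As an illustrative sketch only (the include path and macro file location are assumptions; check the gtri_based_fls_gazebo package for the actual file), a URDF/xacro file might instantiate the macro like this:

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="sonar_example">
  <!-- Hypothetical include path; adjust to the actual macro file location. -->
  <xacro:include filename="$(find gtri_based_fls_gazebo)/urdf/gtri_based_fls.xacro"/>

  <link name="base_link"/>

  <!-- Instantiate the Sonar on base_link; parameter values are illustrative.
       The <origin> child fills the *origin block parameter. -->
  <xacro:gtri_based_fls namespace="vehicle" suffix="_fls"
                        parent_link="base_link" topic="sonar">
    <origin xyz="1.0 0 0" rpy="0 0 0"/>
  </xacro:gtri_based_fls>
</robot>
```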
## ROS Interface
Here are the topics published by this Sonar model:
| Message type | Suggested topic name | Description |
|---|---|---|
| `sensor_msgs::PointCloud` | `sonar_cloud` | The point cloud calculated from Gazebo's RaySensor interface. |
| `sensor_msgs::Image` | `sonar_image` | The Sonar image calculated from the point cloud. |
| `sensor_msgs::CameraInfo` | `sonar_image_camera_info` | The height, width, and timestamp of the currently available Sonar image. |
### sensor_msgs::PointCloud

`PointCloud` consists of a header, a number of xyz points, and the same number of channel values; see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/PointCloud.html:

- std_msgs/Header header
  - uint32 seq
  - time stamp
  - string frame_id
- geometry_msgs/Point32[] points
  - float32 x
  - float32 y
  - float32 z
- sensor_msgs/ChannelFloat32[] channels
  - string name
  - float32[] values
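For illustration, here is a minimal sketch of a ROS node that consumes this message. The topic name follows the suggested name in the table above, and the assumption that the first channel holds the intensity values matches the implementation description below:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>

// Print each point with its intensity; assumes channel 0 holds intensities.
void cloudCallback(const sensor_msgs::PointCloud::ConstPtr& msg)
{
  for (size_t i = 0; i < msg->points.size(); ++i)
  {
    float intensity = msg->channels.empty() ? 0.0f : msg->channels[0].values[i];
    ROS_INFO("point %zu: (%.2f, %.2f, %.2f) intensity %.2f",
             i, msg->points[i].x, msg->points[i].y, msg->points[i].z, intensity);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_cloud_listener");
  ros::NodeHandle nh;
  // "sonar_cloud" is the suggested topic name from the table above.
  ros::Subscriber sub = nh.subscribe("sonar_cloud", 1, cloudCallback);
  ros::spin();
  return 0;
}
```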
### sensor_msgs::Image

`Image` consists of a header, height, width, encoding information, and `uint8[]` data; see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Image.html:

- std_msgs/Header header
  - uint32 seq
  - time stamp
  - string frame_id
- uint32 height
- uint32 width
- string encoding
- uint8 is_bigendian
- uint32 step
- uint8[] data
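Likewise, a minimal sketch that displays the published Sonar image by converting it to an OpenCV matrix with cv_bridge (the topic name follows the table above):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

// Convert the incoming ROS image to a cv::Mat and display it.
void imageCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  cv::Mat frame = cv_bridge::toCvShare(msg, "bgr8")->image;
  cv::imshow("sonar_image", frame);
  cv::waitKey(1);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_image_viewer");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("sonar_image", 1, imageCallback);
  ros::spin();
  return 0;
}
```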
### sensor_msgs::CameraInfo

`CameraInfo` contains metadata about the camera image; see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html.
## Implementation
The implementation is divided into two parts, 1) point cloud and 2) camera image:

- The point cloud is created using averaging functions on range and retro values from `physics::MultiRayShape`; see gazebo_ros_block_laser.cpp.
- The point cloud is transformed to the location of the sensor, randomization is applied to the x values, and the image is created, blurred, colored, clipped, and rotated.
### Point Cloud
The point cloud is calculated from information available in Gazebo's RaySensor interface (https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1sensors_1_1RaySensor.html); see file `ImagingSonar.cc` at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/master/gtri_based_fls_gazebo/src/ImagingSonar.cc. This point cloud is identical to the one produced by the gazebo_ros_block_laser ROS plugin, except that the ROS plugin has an optional parameter for adding Gaussian noise.
Here is the basic flow:
#### Initialization
Function `load` reads the requested topic names and sets up the Sonar so it does not run unless there are topic listeners. It also sets up Gazebo's `RaySensor` so that function `OnNewLaserScans` receives callbacks from Gazebo's `MultiRayShape` class for processing scans during runtime. A minimal sketch of this wiring follows.
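This sketch follows the gazebo_ros_block_laser pattern referenced earlier; the class and member names are assumptions, not the plugin's actual names:

```cpp
#include <functional>
#include <gazebo/common/Plugin.hh>
#include <gazebo/physics/physics.hh>
#include <gazebo/sensors/sensors.hh>

// Minimal sketch: wire OnNewLaserScans to the MultiRayShape new-scan event.
class SonarPluginSketch : public gazebo::SensorPlugin
{
public:
  void Load(gazebo::sensors::SensorPtr _sensor, sdf::ElementPtr /*_sdf*/) override
  {
    this->raySensor =
        std::dynamic_pointer_cast<gazebo::sensors::RaySensor>(_sensor);

    // MultiRayShape::ConnectNewLaserScans registers the runtime callback.
    this->connection = this->raySensor->LaserShape()->ConnectNewLaserScans(
        std::bind(&SonarPluginSketch::OnNewLaserScans, this));
  }

private:
  void OnNewLaserScans()
  {
    // Read range/retro values here; see PutLaserData below.
  }

  gazebo::sensors::RaySensorPtr raySensor;
  gazebo::event::ConnectionPtr connection;
};

GZ_REGISTER_SENSOR_PLUGIN(SonarPluginSketch)
```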
#### Runtime
- Function `OnNewLaserScans` takes a timestamp and, if not running behind, calls `PutLaserData`.
- Using angle, ray, and range information in `RaySensor`, function `PutLaserData` performs nested iterations over the 2-dimensional array of `VerticalRangeCount` by `RangeCount`. For each cell, it computes a cloud datapoint using math on the range (length of ray) and retro (intensity of ray) values and pushes the xyz point onto the `points` array:

```
for j in vertical range count:
    for i in range count:
        Calculate 4 index values for the 4 corners given (i,j).
        Obtain average range and intensity values given the 4 index values.
        Calculate xyz from (i,j) through (pAngle polar, yAngle azimuthal).
        Record xyz and intensity into point cloud: xyz into points[], intensity into values[].
```
Key interfaces in this loop are `GetRange` and `GetRetro` from `physics::MultiRayShape`, https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1physics_1_1MultiRayShape.html:

```cpp
double GetRange(unsigned int _index)
double GetRetro(unsigned int _index)
```
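As a sketch of the loop described above (row-major ray indexing and the helper name `ProcessScan` are assumptions; the actual logic lives in `PutLaserData` in ImagingSonar.cc):

```cpp
#include <cmath>
#include <gazebo/physics/physics.hh>

// Sketch of the nested scan loop: average range/retro over the four corner
// rays around cell (i, j), convert angles to Cartesian xyz, record the point.
void ProcessScan(gazebo::physics::MultiRayShapePtr rays,
                 int rangeCount, int verticalRangeCount,
                 double minYaw, double yawStep,
                 double minPitch, double pitchStep)
{
  for (int j = 0; j < verticalRangeCount - 1; ++j)
  {
    for (int i = 0; i < rangeCount - 1; ++i)
    {
      // Four corner indices in the flattened ray array (row-major assumed).
      const int idx[4] = {j * rangeCount + i,       j * rangeCount + i + 1,
                          (j + 1) * rangeCount + i, (j + 1) * rangeCount + i + 1};

      // Average range (ray length) and retro (ray intensity) over the corners.
      double range = 0.0, retro = 0.0;
      for (int k = 0; k < 4; ++k)
      {
        range += rays->GetRange(idx[k]);
        retro += rays->GetRetro(idx[k]);
      }
      range /= 4.0;
      retro /= 4.0;

      // (yAngle azimuthal, pAngle polar) to xyz, as in the pseudocode above.
      const double yAngle = minYaw + i * yawStep;
      const double pAngle = minPitch + j * pitchStep;
      const double x = range * std::cos(pAngle) * std::cos(yAngle);
      const double y = range * std::cos(pAngle) * std::sin(yAngle);
      const double z = range * std::sin(pAngle);

      // Here the real code pushes (x, y, z) onto points[] and retro onto values[].
      (void)x; (void)y; (void)z; (void)retro;
    }
  }
}
```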
To work with the `RaySensor`, we need to understand the Gazebo RaySensor class (https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1sensors_1_1RaySensor.html#a47aa298aceeb6084459df00a0d610ba6) and how to configure the shape of the ray using the MultiRayShape class (https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1physics_1_1MultiRayShape.html). The ray shape parameters are adjustable using SDF tags (http://sdformat.org/spec?ver=1.6&elem=sensor#sensor_ray); an illustrative snippet follows. For discussion see https://answers.gazebosim.org//question/20930/where-is-the-lasershape-in-the-ray-sensor-defined/.
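For example, a ray sensor's shape might be configured with SDF tags like the following (per the SDF 1.6 spec linked above; the sample counts, angles, and ranges are illustrative values, not this model's defaults):

```xml
<sensor type="ray" name="sonar">
  <ray>
    <scan>
      <horizontal>
        <samples>240</samples>
        <resolution>1</resolution>
        <min_angle>-0.4</min_angle>
        <max_angle>0.4</max_angle>
      </horizontal>
      <vertical>
        <samples>10</samples>
        <resolution>1</resolution>
        <min_angle>-0.1</min_angle>
        <max_angle>0.1</max_angle>
      </vertical>
    </scan>
    <range>
      <min>0.1</min>
      <max>20</max>
      <resolution>0.01</resolution>
    </range>
  </ray>
</sensor>
```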
### Sonar Image
The Sonar image is calculated from the point cloud; see file `cloud_to_image.cpp` at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/master/gtri_based_fls_gazebo/src/cloud_to_image.cpp. File `ColorMaps.cpp` maps the Sonar image to its rendered colors.
#### Initialization
Function `init_cloud_to_image` records the angle the image is calculated across and the Sonar's topic name for use during runtime.
#### Runtime
Function `publish_cloud_to_image` projects the point cloud onto a camera image and then publishes the camera image. Here are the transforms (a condensed sketch follows the list):

- The point cloud is transformed to the location of the Sonar sensor using the `tf::TransformListener` transform.
- The x value of each point is randomized by adding Gaussian noise with a mean of 0 and a standard deviation of 0.01.
- A blank `cv::Mat` image `img` is created, sized 600x600 and holding 8-bit unsigned integers.
- Each point in the point cloud is cast onto image `img` by drawing a circle of radius 1 whose color is the retro value from 0 to 255.
- A median blur is applied over `img` using a kernel size of 3 (each pixel is replaced by the median of the 3x3 square around it).
- A new image `img_color` is created from `img`, with the same size but holding three 8-bit unsigned channels for Blue-Green-Red, substituting each retro value from 0 to 255 with a color based on coloring algorithm `getColor_matlab` defined in file `colorMaps.cpp`.
- The edges are filled black.
- The image is rotated 180 degrees using `cv::warpAffine`.
- The ROS `sensor_msgs/Image` message is created from the `img_color` image using `cv_bridge::CvImage`.
- The message is published.
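Below is a condensed sketch of this pipeline in OpenCV. The 600x600 size, radius-1 circles, 3x3 median blur, 180-degree rotation, and cv_bridge conversion follow the steps above; the function name `buildSonarImage`, its inputs, and the use of `cv::applyColorMap` with `cv::COLORMAP_JET` as a stand-in for the custom `getColor_matlab` map are assumptions:

```cpp
#include <vector>
#include <opencv2/opencv.hpp>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <std_msgs/Header.h>

// Build the Sonar image from already-projected pixel coordinates and their
// retro intensities, following the steps listed above.
sensor_msgs::ImagePtr buildSonarImage(const std::vector<cv::Point>& pixels,
                                      const std::vector<uchar>& retro)
{
  // Blank 600x600 single-channel 8-bit image.
  cv::Mat img = cv::Mat::zeros(600, 600, CV_8UC1);

  // Cast each point onto the image as a filled circle of radius 1.
  for (size_t i = 0; i < pixels.size(); ++i)
    cv::circle(img, pixels[i], 1, cv::Scalar(retro[i]), -1);

  // 3x3 median blur.
  cv::medianBlur(img, img, 3);

  // Map grayscale retro values to BGR colors; COLORMAP_JET stands in for
  // the custom getColor_matlab map.
  cv::Mat img_color;
  cv::applyColorMap(img, img_color, cv::COLORMAP_JET);

  // Rotate 180 degrees about the image center.
  const cv::Mat rot =
      cv::getRotationMatrix2D(cv::Point2f(300.0f, 300.0f), 180.0, 1.0);
  cv::warpAffine(img_color, img_color, rot, img_color.size());

  // Wrap the result as a ROS image message via cv_bridge.
  return cv_bridge::CvImage(std_msgs::Header(), "bgr8", img_color).toImageMsg();
}
```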
## IVision

The https://github.com/IvisionLab/sonar-simulation repository requires Ruby and Qt4. The install fails with a syntax error in bootstrap.sh.