pointcloud_map_from_rosbag - AD-EYE/AD-EYE_Core GitHub Wiki
Autoware provides the node ndt_mapping, which can be used to create point cloud maps from rosbags.
In the Autoware runtime manager, open the Computing tab, then the Localization / lidar_localizer section. Click on the [app] button to tweak parameters.
We can choose to use the IMU and/or the odometry data to improve the results.
A video showing the steps can be found here: https://www.youtube.com/watch?v=ss6Blrz23h8
Basically, here are the steps:

- First, go to the Simulation tab and select the rosbag file.
- Click Play and then Pause to set the rosparam use_sim_time to true.
- Go to the Setup tab, input the parameters describing the relative position between the vehicle and the localizer, push TF and push Vehicle Model (if you leave the field blank, the default model will be loaded).
- In the Computing tab, select ndt_mapping (and, if needed, tweak the parameters by clicking the [app] button).
- ndt_mapping reads from /points_raw. If the point cloud is published on a different topic, use the relay tool in a terminal window:

      rosrun topic_tools relay /front/velodyne_points /points_raw

- You can visualize the mapping process with rviz. A configuration file can be found in <Autoware>/src/.config/rviz/ndt_mapping.rviz
- After the mapping has finished, click the [app] button of ndt_mapping, input the name of the PCD file and the filter resolution, and click PCD OUTPUT. The PCD file will be written to the directory you specified.
Descriptions of some parameters are shown as tooltips when hovering the mouse over the parameter labels.
Here is some extra information about some of the parameters:
- Resolution: Too low and it does not work at all; too high and some tilt errors occur.
- Step size: Increasing this value reduces the accuracy (especially when the car turns).
- Transformation Epsilon: Looks like a tolerance value.
- Maximum iterations: It could be worth increasing this value, but judging from the output console, the number of iterations is rarely above 10 and the algorithm seems to converge every time.
- Leaf Size: Increasing this value gives better results: lines appear well aligned (window edges and railings). But it also increases the computation time.
- Minimum Scan Range: Useful to remove the car footprint from the scans.
- Maximum Scan Range: An error in rotation misaligns far edges much more than close ones, so it is better not to use a value that is too high, especially if the car drives around a lot. On the other hand, having long lines may help to align the point clouds correctly.
- Minimum Add Scan Shift: Depends on the point cloud density we want.
- Method Type:
  - We always used the generic method, which works well and is reliable.
  - The Anh method (named after its creator) appears to consume more memory.
  - The openmp method (which may be a multi-threaded implementation of the generic one) simply does not work properly.
Finally, the values shown in the image above are the ones that gave really good results in our tests. In any case, most of the parameters do not change the results much when varied.
If you want to merge multiple maps produced by several mapping runs, you will have to do it by hand (at the moment, nothing else has worked).
One way to do it is to use rviz: we load the two point cloud maps in two different frames and use the static_transform_publisher to move one map relative to the other.
To do so, we modified the code of the points_map_loader node from Autoware (Autoware/ros/src/data/packages/map_file/nodes/points_map_loader) so that it publishes the map in another frame.
You can download the modified node here.
We use the Autoware Map tab to load the first map (it is published on the topic points_map in the frame map).
Then we use our modified point cloud loader to publish the second map in the frame map2.
We run it with the command:

    rosrun pcloader pcloader <topic> <frame_id> <pcd files> ...
Now the painful part begins: we use the static_transform_publisher to move the second map by hand, adjusting the arguments until the maps line up in rviz:

    rosrun tf static_transform_publisher x y z yaw pitch roll map map2 100
When you have found the right transformation for your second map, you can apply it to the point cloud itself (with Matlab, for example).
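Applying the transformation found in rviz to the point cloud can be done in any language; here is a minimal NumPy sketch (`transform_from_xyz_ypr` and `apply_transform` are hypothetical helper names, not part of any library), assuming the static_transform_publisher angle convention (yaw about Z, pitch about Y, roll about X):

```python
import numpy as np

def transform_from_xyz_ypr(x, y, z, yaw, pitch, roll):
    """Build a 4x4 homogeneous transform from the same arguments
    static_transform_publisher takes (yaw about Z, pitch about Y, roll about X)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def apply_transform(points, T):
    """Apply a 4x4 transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Example: shift the second map by 2 m in x and rotate it 90 deg about Z.
T = transform_from_xyz_ypr(2.0, 0.0, 0.0, np.pi / 2, 0.0, 0.0)
cloud = np.array([[1.0, 0.0, 0.0]])
print(apply_transform(cloud, T))  # -> approximately [[2. 1. 0.]]
```

Once the merged cloud looks right, it can be written back to a PCD file with whatever point cloud library you prefer.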
The first thing we tried was to create the map with Matlab. In the end, the maps produced by Autoware are much better, but here is the Matlab method anyway: it gives the steps to follow and the functions to use to manipulate point clouds in Matlab.
We based our work on a Matlab tutorial which explains how to proceed: https://www.mathworks.com/help/driving/examples/build-a-map-from-lidar-data.html
Another Matlab page explains how to deal with rosbags: https://www.mathworks.com/help/ros/ug/work-with-rosbag-logfiles.html
Then we built our own script to create maps.
Here are the different steps to consider:
- Process the next point cloud to merge with the map. This step consists in removing the car footprint and the ground, and possibly some noise. (user-defined function processPointCloud())
- Downsample the next point cloud. This step is necessary to reduce the computation time and improve accuracy. (Matlab built-in function pcdownsample())
- Register the next point cloud against the map. This is the step where the NDT algorithm is used. The registration gives a transformation to apply to the next point cloud in order to place it properly inside the map. (Matlab built-in function pcregisterndt() for registering, pctransform() to apply the transformation)
- Merge the point cloud with the map. (Matlab built-in function pcmerge())
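As an illustration of the downsampling step, here is a minimal grid-average voxel filter in Python with NumPy. It is only a rough stand-in for Matlab's pcdownsample with the 'gridAverage' option, and `voxel_downsample` is a hypothetical name:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Grid-average downsampling: keep one averaged point per occupied voxel,
    roughly what pcdownsample(ptCloud, 'gridAverage', voxel_size) does."""
    # Assign every point to a voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: four points in two 1 m voxels collapse to two averaged points.
cloud = np.array([[0.1, 0.1, 0.0], [0.2, 0.2, 0.0],
                  [5.1, 0.1, 0.0], [5.2, 0.2, 0.0]])
reduced = voxel_downsample(cloud, 1.0)
print(len(reduced))  # -> 2
```

A larger voxel size removes more points, which speeds up registration but discards fine structure, the same trade-off discussed for the Leaf Size parameter above.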
This is the main loop used to create a map; some additional things are done to improve the result.
First, the point cloud data is really large, so we process only a part of the rosbag at a time.
The main improvement concerns the estimation of the transformation: we can provide an initial estimate to help the algorithm and improve the results.
- The first approach assumes the car moves at a roughly constant speed, so the next transformation can be estimated to be the same as the previous one.
- We can also use some sensor data (in this case the IMU) to estimate the rotational part of the transformation.

This second point is a bit unclear in the tutorial. In our case, we take the first point cloud as the reference, so every other point cloud needs to be oriented relative to the first one. We therefore compare the IMU information of the first point cloud with the information of the current point cloud, which gives an estimate of the rotational transformation to apply.
In the tutorial, the initial transformation is passed as an argument to the registration function. The problem is that the registration function relies too heavily on that initial transformation and does not move far from it.
So we do things a bit differently: we apply the rotational transformation (given by the IMU data) to the next point cloud before the registration, and the initial transformation passed to the registration function contains only the previous translation part. With this method, we obtained the best results we could get with Matlab.
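The pre-alignment trick described above can be sketched in Python with NumPy. This is only a sketch: the IMU orientation is reduced to a yaw angle to keep it short, and `yaw_matrix` and `prealign_with_imu` are hypothetical names, not library functions:

```python
import numpy as np

def yaw_matrix(yaw):
    """Rotation about Z by yaw (radians); the IMU orientation is reduced
    to a heading angle in this sketch."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def prealign_with_imu(points, yaw_first, yaw_current):
    """Rotate the current scan into the orientation of the first (reference)
    scan before registration, using only the IMU headings."""
    # Relative rotation taking the current sensor frame back to the reference.
    R = yaw_matrix(yaw_first).T @ yaw_matrix(yaw_current)
    return points @ R.T

# The car has turned 90 deg since the first scan; a point 1 m straight ahead
# of the sensor maps onto the reference y axis after pre-alignment.
scan = np.array([[1.0, 0.0, 0.0]])
aligned = prealign_with_imu(scan, yaw_first=0.0, yaw_current=np.pi / 2)
print(aligned)  # -> approximately [[0. 1. 0.]]
```

After this pre-rotation, the initial transform handed to the registration step would contain only the previous translation, as described above.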