Lab 2: Mapping and Localization
1. Teleoperation
Every robot you interact with should have functions for manually controlling it, known as "teleoperation". These functions are essential for prototyping autonomous behaviors, testing features, and intervening when the robot doesn't behave as anticipated. Depending on the form factor, teleoperation can be mapped to a video game controller, VR controllers, or a simple terminal script like the one we will use for the Turtlebots.
- Launch a Gazebo simulation to instantiate a Turtlebot to teleoperate:
ros2 launch turtlebot3_gazebo turtlebot3_world.launch.py
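If the launch fails with a missing-model error, note that the turtlebot3 packages expect the robot model to be set in an environment variable first; a minimal sketch, assuming the standard "burger" model:
# Run this in every new terminal before using the turtlebot3 packages
export TURTLEBOT3_MODEL=burger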
- Open a new terminal and start RViz to inspect visual information from the Gazebo Turtlebot. You will need to add the robot model and change the fixed frame to "odom".
ros2 run rviz2 rviz2
- Open another terminal and run the following command to start teleoperating the robot:
ros2 run turtlebot3_teleop teleop_keyboard
- You can inspect what is happening inside ROS by checking different topics. There are many tools for inspecting topics, such as RViz, rqt, and terminal commands like
ros2 topic <list / echo / hz / info / etc.>
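For example, a quick survey of the system from the terminal might look like this (topic names are the Turtlebot defaults):
# List every topic currently advertised
ros2 topic list
# Show the message type and publisher/subscriber counts for the velocity topic
ros2 topic info /cmd_vel
# Measure how frequently the laser scanner publishes
ros2 topic hz /scan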
Deliverable 1.1: Describe what happens when you execute the following command:
ros2 topic echo /cmd_vel
- Move the robot around and observe how each key changes the velocity.
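While a movement key is held, the echo terminal prints a stream of geometry_msgs/msg/Twist messages. The exact numbers depend on your key presses, but each message looks roughly like this (values illustrative):
linear:
  x: 0.05
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.0
---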
- Echoing topics is a great way to understand what is happening at a low level within the system. For understanding the system as a whole, rqt is a better tool. Launch rqt and bring up the node graph; in the node graph's top-left drop-down menu, choose "Nodes/Topics (all)".
rqt
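If you only want the node graph, you can also start that plugin directly instead of the full rqt shell:
ros2 run rqt_graph rqt_graph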
Deliverables
1.2: Take a screenshot of the graph and include it in your lab report.
1.3: Describe the graph.
2. Creating a Map
A key function of mobile robots is navigating environments to perform the tasks they're designed for. Navigating an environment is much easier if the robot has a map to work from. A map can be created from the robot's own sensor data: the robot estimates its own position from odometry while, at the same time, estimating the positions of surrounding physical objects from sensors such as a laser scanner or a camera. This information allows the robot to construct a map of the environment as it moves around, a process called Simultaneous Localization and Mapping (SLAM).
- With Gazebo, teleop, and rqt still running, execute the following command to begin mapping the simulated environment:
ros2 launch turtlebot3_cartographer cartographer.launch.py
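You can verify that the SLAM node is producing a map by checking the occupancy grid topic (the topic name below is the ROS default):
# Confirm the map topic exists and see who publishes and subscribes to it
ros2 topic info /map
# The grid is republished as the robot explores; check the rate
ros2 topic hz /map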
Deliverables
2.1: Take a screenshot of the updated rqt graph (you may need to refresh the graph).
2.2: Compare your 2 node graph screenshots, describe what has changed and why.
- Teleop the robot around the room until the map in RViz is completely filled in. When you have finished mapping the environment, save the map by running the following command:
ros2 run nav2_map_server map_saver_cli -f ~/map
Note: this command will save the map to your home directory.
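After saving, you should find two files in your home directory: map.pgm (the occupancy grid image) and map.yaml (its metadata). A map.yaml written by the nav2 map saver looks something like the sketch below; the values are illustrative defaults, so check your own file when answering deliverables 2.5 and 2.6:
image: map.pgm
mode: trinary
resolution: 0.05
origin: [-2.3, -1.9, 0]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.25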
Deliverables
2.3: If there are holes between the gray and white parts of your map, what is the reason? How can you redo the map so that it has solid, thick, black lines? Note: You may have to run the mapping again and revisit those trouble areas with the teleoperation program.
2.4: Take a screenshot of your final map.
2.5: What is the 'occupied threshold' value in the map.yaml?
2.6: What is the 'free threshold' value?
3. Navigating the Map
Robot navigation means the robot's ability to determine its own position in its frame of reference and then plan a path towards a goal location. In other words, once the robot has a map, it needs to establish its own position and orientation within that map (self-localization) in order to navigate successfully.
In this subsection we will address estimating the robot's localization and navigating towards a goal in the map. Notice that the launch file below requires a map file as an argument; we will provide the map that we just created.
The map that you created with cartographer is a static map. This means the map won't change once you have created it; it captures the environment exactly as it was at the moment the mapping process was performed. During this assignment, we will also explore what happens when the environment changes (by adding an obstacle) and how the navigation process has to adapt its planning dynamically.
- With Gazebo and the teleoperation program still running, launch the navigation stack:
ros2 launch turtlebot3_navigation2 navigation2.launch.py map:=$HOME/map.yaml
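Before switching to RViz, you can confirm the navigation stack came up correctly (node names here assume the default nav2 bringup):
# Expect entries such as /amcl, /planner_server, /controller_server, and /bt_navigator
ros2 node list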
- You will need to provide an initial estimate of where the robot is on the map. On the top menu bar of RViz, select "2D Pose Estimate", then click on the map at the robot's rough position in Gazebo and drag to set its orientation. You should see many small green arrows around the robot representing guesses about the robot's pose.
- Use the teleop script to drive the robot around the Gazebo world. You should notice the green arrows converge on the robot's actual position the more you drive.
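The arrow cloud comes from the localization node's pose estimate, which you can also watch converge numerically (the topic name assumes the standard nav2 bringup):
# The covariance values shrink as the estimate improves
ros2 topic echo /amcl_pose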
Deliverables
3.1: Take a screenshot when there is good alignment between the green arrow pose guesses and the robot.
3.2: Discuss why the green arrows behave this way.
- Another feature of the RViz GUI is the option to send navigation goals to the robot. Press the "Nav2 Goal" button, click on the map where you want the robot to go, and release the mouse to set the orientation of the arrow that appears. You can then observe the planned trajectory and the robot moving towards the new location.
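If you prefer the terminal, an equivalent goal can be sent to the navigation action server directly; a minimal sketch, assuming the default nav2 action name and a goal expressed in the map frame (the coordinates are illustrative):
ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
  "{pose: {header: {frame_id: map}, pose: {position: {x: 0.5, y: 0.5}, orientation: {w: 1.0}}}}"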
Deliverables
3.3: Describe the robot behaviors from the moment a navigation goal is given to the moment when the robot reaches its destination. Did the robot find any difficulties planning the route?
3.4: Check the map.yaml file for the costmap thresholds (free/occupied threshold) and try to understand how they relate to grid occupancy. Determine whether the robot plans paths closer to or farther away from the map edges as it moves (see the inspection commands after this list).
3.5: What do the colors (red/blue) in the costmaps mean?
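To compare the costmaps against the static map, you can inspect their topics directly (the names assume the default nav2 configuration):
# Each costmap is published as its own occupancy grid
ros2 topic info /global_costmap/costmap
ros2 topic info /local_costmap/costmap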
- Now add an obstacle (insert the pillar you previously created) in Gazebo. You now have both static obstacles and a dynamic (new) obstacle that isn't part of your map.
Deliverables
3.6: Describe the robot behaviors from the moment a navigation goal is given to the moment when the robot reaches its destination. Did the robot find any difficulties planning the route?
3.7: Compare the local costmap/global costmap to the static map in RViz. What happens to the laser readings that you visualize?
3.8: Take a snapshot of the resulting rqt node graph while running the navigation portion, and highlight the nodes actively participating in moving the robot from point A to B. Comment on the types of messages shared on which topics and the direction of communication (who publishes and who subscribes to each topic).
3.9: Upload a video to your Google Drive that shows the robot navigating to a new location using "Nav2 Goal" without the pillar. Include a link to the video in your lab report.
3.10: Upload a video that shows the robot navigating to a new location using "Nav2 Goal" while avoiding the pillar that you inserted.