Lab 1: Sensors in ROS
1. turtlebot_ws
As you learned in the ROS tutorials last week, ROS packages are organized into different "workspaces". At the lowest level there is the base ROS installation, located in your `/opt/ros/humble/` directory. Workspaces extend this code base to include custom packages in a controlled manner. It is generally best practice to isolate the packages needed by different platforms, both to avoid conflicting versions of libraries and to experiment with changes safely. ROS 2 has advanced tools for sourcing many different workspaces as "overlays", but we're going to start simple with a single workspace dedicated to our turtlebots.
- Open the class repo to the `turtlebot_ws` subdirectory. Open the file `install_turtlebot_dependencies.sh` and review the packages that will be downloaded for use by the turtlebots.
- Run the script to install the packages with:

  `./install_turtlebot_dependencies.sh`

  Many of the packages in the .sh file are installed with `sudo apt install`, which installs them to the "global" base ROS workspace alongside the base installation. This is generally fine for code that will not be modified. Code that you will edit should be stored in a workspace inside your home directory so that your user is allowed to edit the files.
- Every terminal you use needs to be told which ROS workspaces to reference. This is accomplished with the `source` command. The `.bashrc` file (a hidden file in your home directory) runs every time you open a new terminal, so you can add your source commands to `.bashrc` to avoid typing them in every time. The following lines append the commands to your `.bashrc` from the terminal:

  `echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc`

  `echo "source ~/TECHIN516/turtlebot_ws/install/setup.bash" >> ~/.bashrc`

  `echo "export TURTLEBOT3_MODEL=burger" >> ~/.bashrc`

  Note: you will need to change `TURTLEBOT3_MODEL` to "waffle" if you ever use the other model available in the lab.
2. Intro to Simulations
Working in simulation is a very common way to prototype robotic applications. Simulations let you play out a huge array of scenarios without needing to stage them in the real world, which allows iterative testing of potentially dangerous situations like autonomous navigation and human-robot interaction. Simulations also let you generate huge amounts of data quickly for reinforcement learning and AI applications, and they let you focus on software and controls without worrying about the underlying hardware. This best practice allows teams to work in parallel on the many facets of a robotic application. Common simulation platforms include Unity, Unreal Engine, Nvidia Omniverse, and Gazebo. Gazebo is the default simulation environment for ROS, so that is where we will begin; many of the same lessons apply to the other platforms.
- Open an example Gazebo world with:

  `ros2 launch gazebo_ros gazebo.launch.py world:=worlds/cafe.world`
- Take some time to familiarize yourself with the UI and features. You can add additional objects and shapes to your simulated environment. At the top of the right panel, click on the cylinder shape to add a pillar object to the world. When you right-click on the object you can modify its properties, like its dimensions and collision area. Select "Edit model", then open the Link Inspector. In both the Visual and Collision tabs, set the following dimensions in the Geometry section: radius = 0.25 m, length = 1 m. An illustration is shown below:
- Once you have created your pillar model, you can save it. This will allow you to insert the object, with its customized properties, into new worlds. If you open the folder where you saved the model, you will see a new folder named after the object you created, containing two files: model.config and model.sdf.
Deliverable 2.1: Take a screenshot of the modified cafe.world with your pillar in it.
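Saving the model also makes it possible to insert it programmatically. The sketch below is one hypothetical way to do that from Python; the model path `~/.gazebo/models/pillar`, the entity name, and the pose are assumptions, and it relies on the gazebo_ros factory plugin providing the `/spawn_entity` service in the running world. Adjust the names to match what you actually saved.

```python
# spawn_pillar.py -- illustrative sketch, not a required lab step.
# Assumes the pillar model was saved to ~/.gazebo/models/pillar (hypothetical path).
import os

import rclpy
from rclpy.node import Node
from gazebo_msgs.srv import SpawnEntity


def main():
    rclpy.init()
    node = Node('spawn_pillar')

    client = node.create_client(SpawnEntity, '/spawn_entity')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('/spawn_entity not available -- is Gazebo running?')
        return

    request = SpawnEntity.Request()
    request.name = 'pillar'
    # Load the SDF you saved from the model editor.
    with open(os.path.expanduser('~/.gazebo/models/pillar/model.sdf')) as f:
        request.xml = f.read()
    request.initial_pose.position.x = 1.0
    request.initial_pose.position.y = 0.5

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'Spawn succeeded: {future.result().success}')

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```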
3. Turtlebot Simulation
- Close the previous Gazebo world and open an empty one with:

  `ros2 launch turtlebot3_gazebo empty_world.launch.py`
- Open a new terminal and try to visualize the robot and its sensors in RViz. RViz is not a simulation environment; it is a tool for interacting with, inspecting, and visualizing information from different topics and parameters. For example, it can be used to visualize sensor data, robot body positions, planned paths, etc.

  `ros2 run rviz2 rviz2`
- Information has to be added manually to the default RViz configuration. Click on the Add button, select "RobotModel", and add it to the RViz panel. Click on the drop-down arrow next to "RobotModel" and find the "Description Topic" subfield. Click to the right of that field to select the ROS topic to visualize data from. Finally, set the "Fixed Frame" of RViz to "base_link" in order to set the transformation relationship between the robot's body and the RViz grid.

  Note: At this point, you will see errors related to the failure to map from odom to base_link, wheel_right, and wheel_left. If you take a closer look at the launch file for the Gazebo world and inspect which nodes it brings up, you will realize that only the footprint of the robot has been spawned. To interact with quantities like velocity or sensor information, we need to bring up all the necessary topics and nodes through the robot_state_publisher.
- In another terminal, type the following command to open rqt, which is another GUI for inspecting various aspects of ROS. There are many plugins available (and you are more than welcome to make new ones yourself). Select "Plugins" from the top menu bar, then select "Visualization" and "TF Tree". You will notice there is only a transformation between the /odom reference frame and /base_footprint. The latter represents the point on the floor that the robot occupies.

  `rqt`
- In a new terminal, bring up the full robot model with all of its transformations:

  `ros2 launch turtlebot3_bringup turtlebot3_state_publisher.launch.py`

  After that, you can retry viewing the tf_tree and expect to see the following:
Now that the robot is fully brought up and functioning in simulation, we will go through the examples available with the turtlebot3 package.
Deliverable 3.1: Include screenshots of your rqt graph and the turtlebot inside of RViz.
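If you would like to double-check the transform tree from code rather than through rqt, the following is a minimal sketch. The node name `frame_checker` is made up; the `odom` and `base_link` frames are the ones discussed above.

```python
# frame_checker.py -- optional sketch for verifying that odom -> base_link exists.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformListener


class FrameChecker(Node):
    def __init__(self):
        super().__init__('frame_checker')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.check_transform)

    def check_transform(self):
        # Time() with no arguments means "the latest available transform".
        if self.tf_buffer.can_transform('odom', 'base_link', Time()):
            t = self.tf_buffer.lookup_transform('odom', 'base_link', Time())
            p = t.transform.translation
            self.get_logger().info(f'odom -> base_link: x={p.x:.2f}, y={p.y:.2f}')
        else:
            self.get_logger().warn('odom -> base_link not available yet')


def main():
    rclpy.init()
    rclpy.spin(FrameChecker())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```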
4. Turtlebot Example
In the same Gazebo world, run the following examples and record your answers to the deliverable questions.
4.1 Position Control
- With the Gazebo simulation running, open a new terminal and run the following example:

  `ros2 run turtlebot3_example turtlebot3_position_control`

  Tip: you can reset the position of the robot in Gazebo under Edit -> Reset Model Pose or Ctrl+Shift+R.
- Instruct the robot to move to (x: 1, y: 1.5, w: -90).
Deliverable 4.1.1: Describe how the robot is reaching the goal. Does it match position and orientation at the same time or does it move to each in a sequence?
- Edit the initial position of the robot so that it is different from the origin. Issue the same goal as before (x: 1, y: 1.5, w: -90).
Deliverables
4.1.2: Describe the motion of the robot with respect to its reference frame and the Gazebo world reference frame.
4.1.3: Determine the transformation between the Gazebo origin and the robot's new position. How would the instruction (x, y, w) look if it were given in the robot's own reference frame?
4.1.4: Discuss the relevance of absolute and relative reference frames.
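For deliverables 4.1.2-4.1.4, it may help to recall how a pose expressed in the world frame is re-expressed in the robot's frame. The helper below is only an illustrative sketch of the standard 2D rigid-body transform; the function name and the assumption that angles are in degrees are mine, not part of the example code.

```python
import numpy as np

def world_goal_in_robot_frame(goal, robot_pose):
    """Re-express a world-frame goal (x, y, w) in the robot's own frame.

    goal and robot_pose are (x, y, heading) tuples with headings in degrees.
    """
    gx, gy, gw = goal
    rx, ry, ryaw = robot_pose
    yaw = np.radians(ryaw)
    dx, dy = gx - rx, gy - ry
    # Rotate the world-frame offset by -yaw to express it in the robot frame.
    x_r = np.cos(yaw) * dx + np.sin(yaw) * dy
    y_r = -np.sin(yaw) * dx + np.cos(yaw) * dy
    w_r = gw - ryaw
    return x_r, y_r, w_r
```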
- Keep the robot moving and inspect what is happening while it is in motion in order to answer the following questions:
Deliverables
4.1.5: Take a screenshot of the rqt Node Graph with all nodes displayed.
4.1.6: Which node is responsible for moving the robot?
4.1.7: What information is being sent to that node to tell it how to move?
4.1.8: What information is needed for translation versus rotation?
4.1.9: Take a screenshot of a terminal echoing the topic used to move the robot.
4.2 Patrol
- With Gazebo running, open 2 new terminals and run the patrol example server and client:

  `ros2 run turtlebot3_example turtlebot3_patrol_server`

  `ros2 run turtlebot3_example turtlebot3_patrol_client`
- Try a handful of different movements to understand how this example works.
Deliverables
4.2.1: Inspect the code for the server node. What are the topics used in the subscriber? What is the message type?
4.2.2: How does the server use those messages?
4.2.3: Describe how data from the sensors is used.
5. Plotting Turtlebot Sensor Data
We will now begin using sensor data in code. All code should be located in ROS packages to properly interact with the rest of the system.
- Begin by creating a new Python package to host your code:

  `ros2 pkg create --build-type ament_python t516_lab1`
- Copy the example code from this repo to the t516_lab1 subdirectory of your new package.
- Add the Python scripts as entry points within the `setup.py` file of the new package (a sketch of what this might look like appears after the deliverables below).

- Fill in all "TODO" lines of code to plot `/odom` data using matplotlib (a minimal plotting sketch also appears after the deliverables below).

- Plot the odometry of the robot as you run the patrol example again with a radius of 1.
Deliverables
5.1: Describe the behavior of the robot in terms of inaccuracies associated with odometry. What do you observe about the ending position and orientation of the robot?
5.2: How does the shape traversed compare to the original input instructions?
5.3: Take a screenshot of your plot.
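As referenced above, here is a sketch of how the entry points might be declared in `setup.py`. The script name `plot_odom` and module `t516_lab1/plot_odom.py` are placeholders; use whatever the example files in the repo are actually called. This also assumes the package lives in your workspace's src directory.

```python
# setup.py (excerpt) -- placeholder names; match them to the repo's actual scripts.
from setuptools import setup

package_name = 't516_lab1'

setup(
    name=package_name,
    version='0.0.1',
    packages=[package_name],
    install_requires=['setuptools'],
    entry_points={
        'console_scripts': [
            # "<executable name> = <package>.<module>:<function>"
            'plot_odom = t516_lab1.plot_odom:main',
        ],
    },
)
```

After editing `setup.py`, rebuild the workspace (for example with colcon build) and re-source it so that `ros2 run t516_lab1 plot_odom` can find the new entry point.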
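And here is a minimal, self-contained sketch of an `/odom` plotting node, written independently of the repo's template (your "TODO" lines will likely be organized differently). It records x/y positions while spinning and draws the path with matplotlib once you stop it with Ctrl+C.

```python
# plot_odom.py -- minimal sketch; the lab's template may structure this differently.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
import matplotlib.pyplot as plt


class OdomPlotter(Node):
    def __init__(self):
        super().__init__('odom_plotter')
        self.xs, self.ys = [], []
        self.subscription = self.create_subscription(
            Odometry, '/odom', self.odom_callback, 10)

    def odom_callback(self, msg):
        # Store the robot's position from each odometry message.
        self.xs.append(msg.pose.pose.position.x)
        self.ys.append(msg.pose.pose.position.y)


def main():
    rclpy.init()
    node = OdomPlotter()
    try:
        rclpy.spin(node)  # collect data until Ctrl+C
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.try_shutdown()

    # Plot the recorded trajectory.
    plt.plot(node.xs, node.ys)
    plt.xlabel('x (m)')
    plt.ylabel('y (m)')
    plt.title('/odom trajectory')
    plt.axis('equal')
    plt.show()


if __name__ == '__main__':
    main()
```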
- Record a rosbag of the odometry data while you run the patrol example 5 more times with a radius of 1.
Deliverable 5.4: What command did you use to save the rosbag?
- Shut down Gazebo and use your rosbag to plot the odometry (one way to read the bag from Python is sketched below).
Deliverable 5.5: Take a screenshot of your plot.
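One way to plot from the bag without Gazebo is to read it directly with the rosbag2_py API, as sketched below. The bag path `patrol_bag`, the `/odom` topic name, and the `sqlite3` storage format are assumptions, so adjust them to match how you recorded. Alternatively, you can simply play the bag back with ros2 bag play and run the same plotting node as before.

```python
# plot_from_bag.py -- sketch of reading /odom out of a rosbag2 recording.
# Assumes a sqlite3 bag at ./patrol_bag; change the path/topic to match yours.
import matplotlib.pyplot as plt
import rosbag2_py
from rclpy.serialization import deserialize_message
from nav_msgs.msg import Odometry


def main():
    reader = rosbag2_py.SequentialReader()
    reader.open(
        rosbag2_py.StorageOptions(uri='patrol_bag', storage_id='sqlite3'),
        rosbag2_py.ConverterOptions(
            input_serialization_format='cdr',
            output_serialization_format='cdr'),
    )

    xs, ys = [], []
    while reader.has_next():
        topic, data, _timestamp = reader.read_next()
        if topic == '/odom':
            msg = deserialize_message(data, Odometry)
            xs.append(msg.pose.pose.position.x)
            ys.append(msg.pose.pose.position.y)

    plt.plot(xs, ys)
    plt.xlabel('x (m)')
    plt.ylabel('y (m)')
    plt.title('/odom from rosbag')
    plt.axis('equal')
    plt.show()


if __name__ == '__main__':
    main()
```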