Lab 3: Physical Implementation and Introduction to Control

1. Connecting to the Physical Robot

Now that you have worked with the robot in simulation and understand how to teleoperate it, you are ready to work with the real physical robot! There are a couple of additional steps required for ROS to work across different computers. Follow the steps below to control the Turtlebot's Raspberry Pi from your laptop:

  1. Your laptop and the robot need to be on the same network. This should not be a problem here as both should be on either the UW or UW MPSK networks (which can communicate with each other).

  2. SSH into the robot. "SSH" stands for "Secure SHell protocol" and is a great tool for securely connecting to other computers to sign in and execute terminal commands. Connect using the following command:

    ssh <username>@<ip_address>

    Note: this information should be present on the robot.

    You will be prompted for the remote user's password. There are many tools for managing multiple remote terminals and preserving your session when you disconnect, including tmux and VS Code's Remote Explorer extension. Neither will be taught in class, but both will make working with robots easier moving forward.

  3. All terminal sessions should use the same ROS_DOMAIN_ID environment variable. This is most easily accomplished by adding the variable to your .bashrc:

    export ROS_DOMAIN_ID=30

    Note: this variable will not carry across terminal sessions if you only paste it into the terminal. Use your favorite editor to edit your .bashrc file.
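
    For example, one way to append the variable and reload your shell configuration:

    echo 'export ROS_DOMAIN_ID=30' >> ~/.bashrc
    source ~/.bashrc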

    Normally, this alone would be enough for ROS nodes on different computers to communicate, and everyone would choose a different ID so that their ROS nodes do not conflict with each other.

    However, due to UW's network firewalls, ROS nodes will still not be able to find each other using this strategy. For this reason, we will all set the ID to 30 and follow the next step to communicate with the robot.

  4. You will need to run a Fast DDS discovery server for ROS nodes to be able to connect across devices. You can learn more about discovery servers in the official documentation. If you don't understand everything there, don't worry; all you need to do is run the commands below.

    You will need to know the IP address of your laptop. Check the IP with:

    ifconfig -a

    The IP address should be listed under the wireless interface, whose name typically starts with "wlp". Write this address down for later.

    On your laptop, start the discovery server with:

    fastdds discovery --server-id 0

    Now, every terminal session you want to communicate must reference that discovery server. Add the following line to the ~/.bashrc on both your laptop and the robot:

    export ROS_DISCOVERY_SERVER=<ip_of_your_laptop>:11811

    Note: the robot will probably already have this line, pointing it at someone else's computer. Overwrite it with your own laptop's IP to avoid conflicts.

  5. ROS nodes should now be able to discover each other across devices. Test this by echoing a topic (for example, as shown below) before proceeding with more complex commands.
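
    For example, you can start a demo publisher on the robot and listen for it on your laptop (this assumes the demo_nodes_cpp package is installed on the robot):

    # On the robot
    ros2 run demo_nodes_cpp talker

    # On your laptop
    ros2 topic echo /chatter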

  6. In order for the Turtlebot to receive and interpret commands, you must start the bringup launch file on the Turtlebot:

    ros2 launch turtlebot3_bringup robot.launch.py

2. Plotting Real-World Odometry

Now you can reuse the plotting function you wrote in Lab 1 to track the odometry readings of the real robot. All the code below should be run on your laptop; the Turtlebot's bringup script will subscribe to the relevant topics and drive the robot accordingly.

  1. Run the odometry plotting script you wrote for Lab 1.

  2. Run the Turtlebot's example patrol client and server (example commands are shown after the note below). Send the robot to drive in a 1 m circle.

    Note: make sure there is enough room around the robot so that it doesn't hit anything as it drives.
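
    The patrol server and client can typically be started in two separate terminals (executable names may vary slightly between ROS distributions):

    ros2 run turtlebot3_example turtlebot3_patrol_server
    ros2 run turtlebot3_example turtlebot3_patrol_client

    The client should prompt you for a patrol shape and size; choose the circle mode with the appropriate radius.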

    Deliverables

    2.1: Attach a screenshot of the resulting odometry plot.

    2.2: Make notes about the robot's behavior: does it return to the same position?

    2.3: What can be done to improve the accuracy of the sensor data to get a better estimation of the robot's position and orientation?

3. Real-World Mapping

In this section, we will go through the mapping procedure from Lab 2 and store the /scan topic data in a rosbag. We will compare the real-world mapping results against those obtained in simulation.

  1. Make sure the physical Turtlebot is operational and placed correctly in the environment you want to map. Move the robot to the bottom-right corner of the maze.

  2. Before you start teleoperating the robot, record a rosbag with the following topics:

    ros2 bag record -o physicaltb3_map /odom /scan /cmd_vel /tf

  3. When you have finished tracing the surroundings to complete the map, stop the bag recording and save the resulting map (see the command below).
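
    If you need a reminder from Lab 2, the map can typically be saved with Nav2's map saver (the output path below is just a placeholder):

    ros2 run nav2_map_server map_saver_cli -f ~/maps/physicaltb3_map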

    Deliverable 3.1: Inspect the resulting map files: the one obtained in simulation and the one from the physical implementation. Attach screenshots of both .pgm files. Mention differences between the simulation and real-world results in terms of: thickness of the edges, additional shapes outside the intended map area, and differences in parameter values in the .yaml files.

  4. Use the map you just created and launch the navigation example from Lab 2. Return the robot to the start position on the physical map. You may want to use that spot to give a 2D Pose Estimate as your starting position. Then set a Nav2 Goal on the map close to this area.
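
    As a reminder, the Lab 2 navigation example is typically launched with your saved map (the map path below is a placeholder):

    ros2 launch turtlebot3_navigation2 navigation2.launch.py map:=$HOME/maps/physicaltb3_map.yaml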

    Deliverables

    3.2: What happens when a 2D navigation goal is provided? How long (time) does it take for your robot to plan and reach the goal?

    3.3: How close did the robot get to the goal (distance from the goal)?

    3.4: Discuss your observations. These should include quantitative and qualitative components. You may base comparisons on the concentric rings shown around the goal, or on the robot's interpretation of the world.

    3.5: Upload your rosbag to your Google Drive and include a link to download it in your lab report.

4. Intro to Control Strategies

In this section, we will explore different paradigms to control the movements of a robotic platform. In this case, we will focus on the changes in position and orientation of a wheeled robot. For convenience, we will return to the simulation setting.

4.1 Open-Loop Control

Open-loop control means that the input signal driving the robot does not depend on the system's output. Specifically, we will create commands for the robot to move forward 1.5 meters.

  1. Launch a simulated turtlebot3 in Gazebo:

    ros2 launch turtlebot3_gazebo turtlebot3_world.launch.py

  2. Knowing the distance to be traveled, we can determine a constant speed and the amount of time that speed must be held to cover that distance. Use the ros2 topic command-line tool to publish velocity commands to the robot:

    ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: <value>, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"

    After the appropriate amount of time (distance = speed * time), you will publish a Twist message to stop the robot from moving:

    ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"

    For your convenience, we have prepared a simple Python script that submits these commands through the os library (a sketch is shown after the note below). You will need to fill in your selected values for time and speed.

    Note: you do not need to run this code in the context of a ROS package because it is publishing commands similar to how you would type them into the terminal yourself.
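
    If you prefer to write the script from scratch, here is a minimal sketch; the speed (and therefore the duration) is a placeholder you must choose yourself:

    import os
    import time

    SPEED = 0.2                  # m/s -- placeholder, choose your own value
    DISTANCE = 1.5               # m, the distance we want to travel
    DURATION = DISTANCE / SPEED  # time = distance / speed

    # Publish a single forward-velocity command; the robot keeps
    # executing the last command it received.
    os.system('ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist '
              '"{linear: {x: ' + str(SPEED) + ', y: 0.0, z: 0.0}, '
              'angular: {x: 0.0, y: 0.0, z: 0.0}}"')

    # Wait long enough to cover the desired distance.
    time.sleep(DURATION)

    # Publish a zero-velocity command to stop the robot.
    os.system('ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist '
              '"{linear: {x: 0.0, y: 0.0, z: 0.0}, '
              'angular: {x: 0.0, y: 0.0, z: 0.0}}"')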

    Deliverables

    4.1.1: Report the chosen strategy (published values) to complete the task of moving 1.5m forward in open-loop.

    4.1.2: Inspect the Turtlebot3's position in Gazebo (World -> Models -> turtlebot3_burger -> pose -> x). Did the robot move 1.5 m accurately?

    4.1.3: Compare your desired travel distance of 1.5 m with the actual position of the robot in Gazebo and the value reported on the /odom topic (use ros2 topic echo /odom). Discuss the differences.

    4.1.4: Discuss the challenges of operating the robot in open-loop, particularly when the motions increase in complexity.

    4.1.5: How would the sequence of commands look if you wanted to complete a patrol pattern (e.g. a triangle) in open-loop? Report the solution in pseudocode, commands should include GoForward(distance), WaitTime(time), Rotate(angle in degrees).

  3. Modify the given open-loop file to achieve this patrol motion.

    Deliverable 4.1.6: How close to the start location did your robot finish? Use values from both odometry and the Gazebo pose.

4.2 Closed-Loop Control

In closed-loop operation, the robot's input signal depends on a reference value that we want the output to reach, compared against the robot's current output as measured through sensor feedback. We will create commands for the robot to move forward 1.5 meters using readings from some of its sensors, namely odometry and the laser scanner.

  1. Create a node that subscribes to the /odom topic and publishes to the /cmd_vel topic. It should read the initial /odom value and stop publishing velocity commands only once the distance traveled from that initial value reaches 1.5 m. Use the file close_loop_odom.py as a starting point (a sketch is shown after the note below).

    Note: this code will need to operate within the ROS system to subscribe and publish to topics. Create a new package to host this code.
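
    If the starter file is not at hand, a minimal sketch of such a node could look like this (topic names match the Turtlebot3 defaults; the forward speed is an assumed value):

    import math

    import rclpy
    from rclpy.node import Node
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import Twist


    class OdomClosedLoop(Node):
        def __init__(self):
            super().__init__('closed_loop_odom')
            self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
            self.sub = self.create_subscription(Odometry, '/odom', self.odom_cb, 10)
            self.start = None   # (x, y) at the first odometry reading
            self.goal = 1.5     # meters to travel

        def odom_cb(self, msg):
            p = msg.pose.pose.position
            if self.start is None:
                self.start = (p.x, p.y)
            traveled = math.hypot(p.x - self.start[0], p.y - self.start[1])
            cmd = Twist()
            # Drive forward until 1.5 m has been covered, then command zero velocity.
            if traveled < self.goal:
                cmd.linear.x = 0.2  # assumed constant forward speed
            self.pub.publish(cmd)


    def main():
        rclpy.init()
        rclpy.spin(OdomClosedLoop())


    if __name__ == '__main__':
        main()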

  2. Create a node that subscribes to the /scan topic and publishes to the /cmd_vel topic. It should read the initial /scan value corresponding to the front of the robot and stop publishing velocity commands only once that scan value has decreased by 1.5 m. Use the file close_loop_laser.py as a starting point; a sketch follows below.
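
    Similarly, a minimal sketch of the laser-based node (assuming index 0 of ranges points straight ahead, as on the Turtlebot3):

    import math

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist


    class LaserClosedLoop(Node):
        def __init__(self):
            super().__init__('closed_loop_laser')
            self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
            self.sub = self.create_subscription(LaserScan, '/scan', self.scan_cb, 10)
            self.start_range = None  # initial distance to whatever is ahead
            self.goal = 1.5          # meters to travel

        def scan_cb(self, msg):
            front = msg.ranges[0]  # reading directly in front of the robot
            if math.isinf(front) or math.isnan(front):
                return             # skip invalid readings
            if self.start_range is None:
                self.start_range = front
            cmd = Twist()
            # Keep moving until the front distance has shrunk by 1.5 m.
            if front > self.start_range - self.goal:
                cmd.linear.x = 0.2  # assumed constant forward speed
            self.pub.publish(cmd)


    def main():
        rclpy.init()
        rclpy.spin(LaserClosedLoop())


    if __name__ == '__main__':
        main()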

  3. Run your closed-loop nodes to move the robot 1.5 meters forward and compare the performance of using /odom vs. /scan. Also check the robot's position in Gazebo through the plotting utility and use the comparison to answer the following questions:

    Deliverables:

    4.2.1: How does each closed-loop controller compare to the open-loop performance?

    4.2.2: Record a video of the robot's performance for each closed-loop control node. Attach your video links.
