Lab 26: Advanced interactive markers
Next, you will develop two RViz-based robot teleoperation interfaces to control the robot's arm, using what you learned in Labs 19-25.
Gripper Teleoperation Interface
The first interface involves an InteractiveMarker, shaped like the robot's gripper, which can be moved around to specify a target gripper pose and send the robot to that pose. As the marker moves, it should change color to reflect whether the corresponding gripper pose is reachable by the robot.
Here is how such an interface might look in simulation and on the real robot (click to open video):
Target Teleoperation Interface
The second interface involves an InteractiveMarker corresponding to a world object, as well as gripper poses relative to this object, which can all be moved around. You will use transform arithmetic to compute "pre-grasp", "grasp", and "lift" poses for the gripper relative to the current pose of the object. Make sure the visualization indicates whether all of the gripper poses associated with the displayed object pose are reachable.
Here is how such an interface might look in simulation and on the real robot (click to open video):
To build these interfaces, you will need to create an interactive marker of a Fetch gripper. The marker needs to have a clickable menu to go to the gripper pose, open the gripper, or close the gripper. You also need to change the color of the gripper depending on whether an IK solution was found for that pose. This lab goes over how to accomplish some of these tasks.
IKFast
Make sure you have completed the previous lab on getting IKFast working. Otherwise, your feedback callbacks will take a long time to run.
General structure
As usual, we recommend wrapping your code in a class:
import rospy
from interactive_markers.interactive_marker_server import InteractiveMarkerServer


class GripperTeleop(object):
    def __init__(self, arm, gripper, im_server):
        self._arm = arm
        self._gripper = gripper
        self._im_server = im_server

    def start(self):
        # gripper_im = InteractiveMarker() ...
        self._im_server.insert(gripper_im, feedback_cb=self.handle_feedback)
        self._im_server.applyChanges()  # Nothing shows up until you apply changes.

    def handle_feedback(self, feedback):
        pass


class AutoPickTeleop(object):
    def __init__(self, arm, gripper, im_server):
        self._arm = arm
        self._gripper = gripper
        self._im_server = im_server

    def start(self):
        # obj_im = InteractiveMarker() ...
        self._im_server.insert(obj_im, feedback_cb=self.handle_feedback)
        self._im_server.applyChanges()

    def handle_feedback(self, feedback):
        pass


def main():
    ...
    im_server = InteractiveMarkerServer('gripper_im_server')
    auto_pick_im_server = InteractiveMarkerServer('auto_pick_im_server')
    teleop = GripperTeleop(arm, gripper, im_server)
    auto_pick = AutoPickTeleop(arm, gripper, auto_pick_im_server)
    teleop.start()
    auto_pick.start()
    rospy.spin()
How to make a gripper marker
You can make a marker out of a mesh. See rviz/DisplayTypes/Mesh.
Here are the package URIs for the gripper meshes:
GRIPPER_MESH = 'package://fetch_description/meshes/gripper_link.dae'
L_FINGER_MESH = 'package://fetch_description/meshes/l_gripper_finger_link.STL'
R_FINGER_MESH = 'package://fetch_description/meshes/r_gripper_finger_link.STL'
You will need to create 3 markers: one for the gripper and two for the fingertips.
These markers will be added to a single InteractiveMarkerControl, which in turn is added to your InteractiveMarker.
You should create a function that, given a PoseStamped, returns either an InteractiveMarker or a list of 3 Markers.
See what the marker looks like when you place it at (0, 0, 0) in the base_link frame.
You can use the grid lines to get a sense of how the meshes are laid out (each grid square in RViz is 1 meter by 1 meter).
You will need to make some adjustments to the fingertip positions.
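For reference, here is a minimal sketch of such a function. The finger y-offsets below are rough placeholders for you to tune by eye in RViz, and the x_offset parameter will become relevant in the next subsection:

from visualization_msgs.msg import Marker


def _make_mesh_marker(resource, x=0.0, y=0.0):
    """Builds one mesh Marker, posed relative to the enclosing InteractiveMarker."""
    marker = Marker()
    marker.type = Marker.MESH_RESOURCE
    marker.mesh_resource = resource
    marker.pose.position.x = x
    marker.pose.position.y = y
    marker.pose.orientation.w = 1
    marker.scale.x = marker.scale.y = marker.scale.z = 1
    marker.color.r = marker.color.g = marker.color.b = 0.5
    marker.color.a = 1
    return marker


def make_gripper_markers(x_offset=0.0):
    """Returns the 3 mesh Markers that visualize the gripper."""
    return [
        _make_mesh_marker(GRIPPER_MESH, x=x_offset),
        _make_mesh_marker(L_FINGER_MESH, x=x_offset, y=-0.05),  # Placeholder offset.
        _make_mesh_marker(R_FINGER_MESH, x=x_offset, y=0.05),   # Placeholder offset.
    ]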
What is the end-effector link?
To our eyes, gripper_link is the most intuitive end-effector link. However, MoveIt is actually configured with wrist_roll_link as the end-effector. You can find the offset between wrist_roll_link and gripper_link using:
rosrun tf tf_echo wrist_roll_link gripper_link
Initially, your gripper marker, when placed at (0, 0, 0), will look like this:

This is because the meshes are defined in the gripper_link frame. However, you actually want the marker to be centered on the wrist_roll_link, like below.

Use the offset above to correct this.
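For example, using the sketch above, you could shift all three meshes forward by the translation reported by tf_echo (roughly 0.166 m on the Fetch, but use the value you measure yourself):

# 0.166 is the approximate wrist_roll_link -> gripper_link x translation
# reported by tf_echo; substitute your measured value.
markers = make_gripper_markers(x_offset=0.166)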
Creating 6 DOF controls
Follow the Interactive Markers: Basic Controls tutorial.
That tutorial is written in C++ and does some unnecessary stuff for illustrative purposes, so try to extract the essence of the tutorial instead of following it directly.
To create the arrows and rings around the marker, you will need to add 6 different controls to your interactive marker, as shown in the tutorial.
Once you are done, the arrows and rings will probably be ginormous, so scale them down using the scale field of the InteractiveMarker.

You will need to add 6 DOF markers to many grippers, so we recommend creating a function that returns a list of InteractiveMarkerControls.
You can then append these controls to an interactive marker:
controls = make_6dof_controls()
interactive_marker.controls.extend(controls)
If you are following the C++ tutorial and, for some reason, only one of the 6 DOF controls is showing up, it might be because you forgot to use copy.deepcopy:
controls = []
control = InteractiveMarkerControl()
control.name = 'move_x'
controls.append(control)
control.name = 'rotate_x'
controls.append(control)
# Oops: both list entries refer to the same object, so both are now named
# 'rotate_x'. Append copy.deepcopy(control) instead to store independent copies.
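Putting it together, a make_6dof_controls function might look like the sketch below. The control orientations and names follow the conventions of the basic controls tutorial (RViz normalizes the quaternions for you):

import copy

from visualization_msgs.msg import InteractiveMarkerControl


def make_6dof_controls():
    """Returns 6 controls: move along and rotate around each axis."""
    controls = []
    # (suffix, x, y, z) of each control's orientation quaternion (with w=1),
    # following the naming convention of the basic controls tutorial.
    axes = [('x', 1, 0, 0), ('z', 0, 1, 0), ('y', 0, 0, 1)]
    for suffix, x, y, z in axes:
        control = InteractiveMarkerControl()
        control.orientation.w = 1
        control.orientation.x = x
        control.orientation.y = y
        control.orientation.z = z
        control.name = 'move_' + suffix
        control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS
        controls.append(copy.deepcopy(control))  # Copy before renaming.
        control.name = 'rotate_' + suffix
        control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS
        controls.append(control)  # Safe: a fresh control is created next iteration.
    return controls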
Menu items
An InteractiveMarker msg contains a list of MenuEntry msgs. According to the MenuEntry documentation, you just assign a non-zero ID to each menu entry and set its command_type to FEEDBACK. Then, in your feedback callback, you can check if the event_type is MENU_SELECT and, if so, use menu_entry_id to figure out which menu item was clicked. Finally, you will need to set the interaction_mode of the control that holds your gripper markers to MENU.
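For example (the entry IDs and titles below are arbitrary choices for this lab's three commands):

from visualization_msgs.msg import InteractiveMarkerControl, MenuEntry

MENU_GO_TO = 1
MENU_OPEN = 2
MENU_CLOSE = 3


def make_menu_entries():
    """Returns MenuEntry msgs for the gripper's right-click menu."""
    entries = []
    for entry_id, title in [(MENU_GO_TO, 'Go to pose'),
                            (MENU_OPEN, 'Open gripper'),
                            (MENU_CLOSE, 'Close gripper')]:
        entry = MenuEntry()
        entry.id = entry_id  # Must be non-zero.
        entry.title = title
        entry.command_type = MenuEntry.FEEDBACK
        entries.append(entry)
    return entries

# gripper_im.menu_entries.extend(make_menu_entries())
# The control holding the gripper meshes must use MENU interaction:
# mesh_control.interaction_mode = InteractiveMarkerControl.MENU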
Handling drag events
If your gripper is disappearing when you drag the 6 DOF controls, you need to set always_visible to True on the control that holds the gripper. To check IK on drag events, check whether the event_type is POSE_UPDATE in your feedback callback.
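Combining the menu and drag events, your feedback callback might be structured like this (the arm and gripper method names are assumptions based on your earlier labs):

from visualization_msgs.msg import InteractiveMarkerFeedback


def handle_feedback(self, feedback):
    if feedback.event_type == InteractiveMarkerFeedback.POSE_UPDATE:
        # feedback.pose is the new marker pose, in feedback.header.frame_id.
        # Check IK here and recolor the gripper accordingly.
        pass
    elif feedback.event_type == InteractiveMarkerFeedback.MENU_SELECT:
        if feedback.menu_entry_id == MENU_GO_TO:
            pass  # e.g., plan and execute a move to feedback.pose.
        elif feedback.menu_entry_id == MENU_OPEN:
            self._gripper.open()  # Assumes your Gripper class from earlier labs.
        elif feedback.menu_entry_id == MENU_CLOSE:
            self._gripper.close()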
Changing the gripper color
This is easy.
Just get your InteractiveMarker, iterate through the list of markers that comprise your gripper visualization, and change their colors individually. Then, reinsert the InteractiveMarker into your interactive marker server and call applyChanges().
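A minimal sketch, assuming im_name is the name you gave your gripper's InteractiveMarker:

def recolor_gripper(im_server, im_name, reachable):
    """Colors the gripper meshes green if reachable, red otherwise."""
    gripper_im = im_server.get(im_name)
    for control in gripper_im.controls:
        for marker in control.markers:
            marker.color.r = 0.0 if reachable else 1.0
            marker.color.g = 1.0 if reachable else 0.0
            marker.color.b = 0.0
            marker.color.a = 1.0
    # Re-inserting without a feedback_cb keeps the existing callback.
    im_server.insert(gripper_im)
    im_server.applyChanges()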
Running on the real robot
Unfortunately, there is a mismatch between the MoveIt messages on Noetic and Melodic, so we will have to install moveit_msgs manually:
cd fetch-picker
git submodule add https://github.com/ros-planning/moveit_msgs.git
cd moveit_msgs
git checkout melodic-devel
Update the CMakeLists.txt file in the applications/ subdirectory to contain the following:
find_package(catkin REQUIRED COMPONENTS
  roscpp
  moveit_msgs
  rospy
)

catkin_package(
  CATKIN_DEPENDS moveit_msgs
)

include_directories(
  include
  ${moveit_msgs_INCLUDE_DIRS}
  ${catkin_INCLUDE_DIRS}
)
Update the package.xml file in the applications/ subdirectory to contain the following:
<build_depend>moveit_msgs</build_depend>
<exec_depend>moveit_msgs</exec_depend>
Then run catkin build, and you will be able to run MoveIt on the Fetch. Make sure that you can control the robot to perform a pick-up in simulation with the blue cube before trying it on the real robot.
The robot's wireless connection is very slow for some reason. While we address this issue, you will need to take the following steps:
- Update your code
- Add the Lab 26 obstacles file to your repository
- Ship your code to the robot
- Run MoveIt on the robot
- Add the obstacles and position the robot
- Run your gripper teleop node
- Execute actions carefully
- Run-stop the robot
1. Update your code
When you create your interactive marker servers, set the q_size arg to a small number, like 2:
im_server = InteractiveMarkerServer('gripper_im_server', q_size=2)
auto_pick_server = InteractiveMarkerServer('auto_pick_im_server', q_size=2)
2. Obstacles file
You must run this code to add the table into the PlanningScene.
Add this code to your repository in applications/scripts/lab26_obstacles.py. Replace the size/position of the table with the dimensions and position of the table you will be working with. Mark it as executable and push it to GitHub.
#! /usr/bin/env python3

from moveit_python import PlanningSceneInterface
import robot_api
import rospy


def wait_for_time():
    """Wait for simulated time to begin."""
    while rospy.Time.now().to_sec() == 0:
        pass


def print_usage():
    print('Usage: rosrun applications lab26_obstacles.py')
    print('Drive the robot until the PlanningScene lines up with the point cloud.')


def main():
    rospy.init_node('lab26_obstacles')
    wait_for_time()
    planning_scene = PlanningSceneInterface('base_link')
    planning_scene.clear()
    planning_scene.removeCollisionObject('table')
    planning_scene.removeCollisionObject('floor')
    # addBox(name, size_x, size_y, size_z, x, y, z), where (x, y, z) is the
    # center of the box in the base_link frame.
    planning_scene.addBox('floor', 2, 2, 0.01, 0, 0, 0.01 / 2)
    planning_scene.addBox('table', 0.5, 1, 0.72, 1, 0, 0.72 / 2)
    rospy.sleep(2)


if __name__ == '__main__':
    main()
3. Ship your code to the robot
When we work on perception, your code will only run reasonably fast if you run it directly on the robot.
We will practice shipping your code to the robot.
To do this, log into astro with your team's username:
ssh teamN@astro
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin init
cd ~/catkin_ws/src
git clone https://github.com/fetch-picker/teamN.git fetch-picker
catkin build
source ~/.bashrc
Make sure that your IKFast plugin is also in the repository and that you've built it.
You may also generate an SSH key pair on the robot and add it to GitHub, as in Lab 1.
4. Run MoveIt on the robot
While SSH'ed into the robot, run the code below. You might want to open a tmux window first, so that you don't have to keep opening terminal windows to SSH into astro.
roslaunch robot_api move_group.launch
Make sure you don't get any red error messages while launching MoveIt.
5. Add the obstacles and position the robot
On your lab computer, run RViz.
setrobot astro
rosrun rviz rviz
- You may notice RViz being slower than normal.
- Add a PointCloud2 display and select the /head_camera/depth_downsample/points topic.
- You may want to set the Color Transformer to AxisColor.
- Also add a PlanningScene display, a Trajectory display, and your interactive marker servers.
While SSH'ed into the robot, add the obstacles:
rosrun applications lab26_obstacles.py
The obstacles are specified in the base_link frame.
Position the table and cube until the obstacles match up with the point cloud.
It may help to change the color of the PlanningScene (e.g., to dark red) to help see the point cloud.
Finally, you should set the robot to the maximum height to improve your chances of getting good motion plans.
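One way to do this, assuming the Torso class you wrote in the earlier labs exposes set_height and a MAX_HEIGHT constant:

import robot_api

# Assumes the robot_api.Torso interface from your earlier labs.
torso = robot_api.Torso()
torso.set_height(robot_api.Torso.MAX_HEIGHT)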
You should see something like this:
Do not continue unless you see the obstacles match up with the point cloud.
6. Run your gripper teleop node
Run your gripper teleop node on the robot. By "on the robot" we mean inside your SSH terminal.
rosrun applications gripper_teleop.py # Or whatever it's named
7. Execute actions carefully
When commanding the arm to move, be very careful. You should always have someone on the left side of the robot, ready to press the robot's runstop. Make sure that person knows when you are about to execute a motion plan on the robot.
Do not directly make a motion plan to a pre-grasp pose. Even with the obstacles in the planning scene, various imprecisions in the system might lead to a collision anyway. Instead, guide the arm to a clear area above the obstacle using a sequence of movements.
Check the visualization from a multitude of angles, and make sure the gripper and the fingertips are not intersecting with the point cloud. It often helps to grasp the object from the side and at a slight vertical angle. That way, you keep the bulky wrist away from obstacles, only putting the fingertips at risk. Grasping from the side also allows you to maintain vision of the object in RViz.
8. Run-stop the robot
When you are done, runstop the robot. This will prevent you or other teams from accidentally running motion plans on the real robot.
You should also close any terminal windows in which you ran setrobot astro, such as your RViz terminal. It often helps to isolate any terminals in which you run setrobot astro in a single tmux session, so that you can easily prevent yourself from accidentally running code on the real robot.
If the arm stops working
If the arm stops working, SSH into the robot and reset the breakers.