Lab 30: AR tag perception
In previous labs, you have programmed the robot to grasp an object, but you had to supply the location of the object through an interface.
Robot perception allows the robot to understand the environment, including the locations of objects, automatically.
In this lab, we will use AR tags, which can be easily spotted and uniquely identified by the robot.
AR tags are easy to use, but a product that requires many AR tags to be placed in the environment is less appealing.
Recording perception data with markers
We have distributed AR tags randomly to each group. Take turns with other teams to set up your markers on the shelf using blue tape and record a point cloud. Use RViz to make sure you can see at least three markers, then run:
rosrun perception save_cloud tags
rosbag info tags.bag
mkdir ~/data
mv tags.bag ~/data
Next, run the hallucinator you wrote in Lab 28 and check RViz to see your point cloud.
ar_track_alvar
We will use the ar_track_alvar package to track AR tags. We have already printed out some tags for you to use.
Configure for the Fetch
Read the documentation for ar_track_alvar.
Section 4.1 explains that you can detect markers by running the individualMarkers node, and it describes the various arguments that you can pass in.
Rather than set all of the arguments on the command line, you should create a launch file.
The package provides a launch file to run on a PR2 robot. You can copy this launch file and edit it to configure ar_track_alvar to run on your computer for the simulated Fetch.
roscp ar_track_alvar pr2_indiv.launch ~/catkin_ws/src/robot_api/launch/ar_desktop.launch
Edit the launch file and make the following changes (a sketch of the result appears below):
- The marker_size is 5.6 cm, not 4.4.
- Set cam_image_topic to /head_camera/depth_registered/points.
- Set cam_info_topic to /head_camera/rgb/camera_info.
- Change the output_frame to base_link.
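For reference, here is a rough sketch of what the edited ar_desktop.launch could end up looking like. Treat it as a guide rather than a drop-in file: depending on your version of ar_track_alvar, the copied pr2_indiv.launch may pass these values as command-line args (as below) or as param/remap entries, so keep whichever structure your copy already uses and only change the values.

```xml
<launch>
  <!-- Sketch only: adapt to the structure of the pr2_indiv.launch you copied. -->
  <arg name="marker_size" default="5.6" />
  <arg name="max_new_marker_error" default="0.08" />
  <arg name="max_track_error" default="0.2" />
  <arg name="cam_image_topic" default="/head_camera/depth_registered/points" />
  <arg name="cam_info_topic" default="/head_camera/rgb/camera_info" />
  <arg name="output_frame" default="base_link" />

  <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkers"
        respawn="false" output="screen"
        args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error) $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" />
</launch>
```

Note that defining cam_image_topic as an arg is what lets you override it from the command line later (cam_image_topic:=mock_point_cloud).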
Look for AR tags
Now, you can finally run the AR tracker on your pre-recorded data!
Note that we need to set a different value of cam_image_topic so that the AR tag tracker will subscribe to our mock data instead of the simulated depth camera:
roslaunch robot_api ar_desktop.launch cam_image_topic:=mock_point_cloud
Visualize the output
According to the documentation, individualMarkers publishes a visualization of the markers to visualization_marker.
In the future, you may want to remap this topic to something else.
For now, you can visualize the output by running RViz and adding a marker display.
Access the data programmatically
To actually read the poses of the AR tags, you can subscribe to the ar_pose_marker topic:
rostopic echo /ar_pose_marker
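If you prefer to inspect the data in code, a minimal subscriber might look like the sketch below. It assumes the AlvarMarkers message type is provided by ar_track_alvar_msgs; in older releases of ar_track_alvar the same message lives in ar_track_alvar.msg instead.

```python
#! /usr/bin/env python
# Minimal sketch: print the ID and position of each detected AR tag.
# Assumes AlvarMarkers comes from ar_track_alvar_msgs; older releases of
# ar_track_alvar ship it as ar_track_alvar.msg.AlvarMarkers instead.
from ar_track_alvar_msgs.msg import AlvarMarkers
import rospy


def callback(msg):
    # msg.markers is a list of AlvarMarker messages; each has an id and a
    # PoseStamped pose.
    for marker in msg.markers:
        position = marker.pose.pose.position
        rospy.loginfo('Marker %d at (%.2f, %.2f, %.2f)', marker.id,
                      position.x, position.y, position.z)


def main():
    rospy.init_node('ar_marker_printer')
    rospy.Subscriber('/ar_pose_marker', AlvarMarkers, callback)
    rospy.spin()


if __name__ == '__main__':
    main()
```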
Reaching for a marker
Next, let's program the robot to reach to one of the hallucinated markers in simulation.
Create a demo called hallucinated_reach.py in which the robot reaches for one of the AR marker locations.
Hints:
- Run rostopic echo /ar_pose_marker to get a sample of the data. Pay close attention to the headers and frame_ids.
- arm.move_to_pose commands a position for the wrist_roll_link, not the gripper_link. Therefore, the wrist should move over one of the markers. It's okay for your gripper to collide with the scene in this lab because we are just demonstrating the functionality of the system.
- Be sure to run MoveIt before running this code.
- Raise the torso to full height before starting
- It is normal for some markers to not be found. If none of them are found (or none of them are reachable), try setting up your tags differently and recording more point clouds.
- Note that the markers have arbitrary orientations that are usually not feasible for the gripper. When reaching for a marker, send the gripper to the marker's position, but do not use the marker's orientation; you can use the identity orientation for the gripper.
#! /usr/bin/env python

from geometry_msgs.msg import PoseStamped
import robot_api
import rospy


def wait_for_time():
    """Wait for simulated time to begin."""
    while rospy.Time().now().to_sec() == 0:
        pass


class ArTagReader(object):
    def __init__(self):
        self.markers = []

    def callback(self, msg):
        self.markers = msg.markers


def main():
    rospy.init_node('hallucinated_reach')
    wait_for_time()

    # Move the arm to a starting pose in front of the shelf.
    start = PoseStamped()
    start.header.frame_id = 'base_link'
    start.pose.position.x = 0.5
    start.pose.position.y = 0.5
    start.pose.position.z = 0.75
    arm = robot_api.Arm()
    arm.move_to_pose(start)

    reader = ArTagReader()
    sub = rospy.Subscriber(...)  # Subscribe to AR tag poses, use reader.callback

    while len(reader.markers) == 0:
        rospy.sleep(0.1)

    for marker in reader.markers:
        # TODO: get the pose to move to
        error = arm.move_to_pose(???)
        if error is None:
            rospy.loginfo('Moved to marker {}'.format(marker.id))
            return
        else:
            rospy.logwarn('Failed to move to marker {}'.format(marker.id))
    rospy.logerr('Failed to move to any markers!')


if __name__ == '__main__':
    main()
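As one way to approach the TODO above, you could build the goal from the marker's position combined with an identity orientation, as suggested in the hints. The helper below is an illustrative sketch, not part of robot_api, and assumes the marker's own header carries the frame_id (ar_track_alvar does not always fill in the header of the nested pose).

```python
def goal_pose_for(marker):
    """Return a wrist goal at the marker's position with an identity orientation."""
    goal = PoseStamped()
    # The nested PoseStamped's header may be empty, so use the marker's header.
    goal.header.frame_id = marker.header.frame_id
    goal.pose.position = marker.pose.pose.position
    goal.pose.orientation.w = 1.0  # identity orientation; ignore the tag's orientation
    return goal
```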
Update Cropper parameters based on marker
Lastly, you will use marker perception to make a bin point cloud cropper that is robust to movements of the shelf.
Create a demo called bin_marker_cropper.py in which the robot detects one or two AR markers attached to the shelf at a fixed configuration and updates the parameters for the cropper that you wrote in Lab 29. Since the bin remains at a fixed pose relative to the marker, the pose of the marker should help you figure out the pose of the bin as the shelf moves around. After tuning your marker-enabled smart cropper in simulation with the hallucinated data, test it on the real robot with different poses of the shelf.
Since the shelf rotation can change (which will be detected with the marker transform), you will need to use setRotation and setTransform from CropBox.
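To get a feel for the geometry, here is a small Python sketch of the pose composition. The offset values are placeholders that you would measure yourself, and how you push the resulting translation and rotation into your Lab 29 cropper (for example via the CropBox methods mentioned above) depends on your own implementation.

```python
#! /usr/bin/env python
# Sketch: compute the bin's pose by composing the marker's pose (from
# /ar_pose_marker, expressed in base_link) with a fixed marker-to-bin offset.
# All names and numbers below are illustrative placeholders.
import numpy as np
import tf.transformations as tft


def pose_to_matrix(position, orientation):
    """Build a 4x4 transform from an (x, y, z) position and (x, y, z, w) quaternion."""
    matrix = tft.quaternion_matrix(orientation)
    matrix[0:3, 3] = position
    return matrix


# Example: a marker 1 m in front of the robot and a bin 10 cm below the marker.
base_to_marker = pose_to_matrix((1.0, 0.0, 0.75), (0.0, 0.0, 0.0, 1.0))
marker_to_bin = pose_to_matrix((0.0, 0.0, -0.1), (0.0, 0.0, 0.0, 1.0))
base_to_bin = np.dot(base_to_marker, marker_to_bin)

translation = base_to_bin[0:3, 3]              # new crop box position
rotation = tft.euler_from_matrix(base_to_bin)  # Euler angles for the crop box rotation
```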