Lab 34: Perception-enabled picking

You now have all the components for performing a "real" (perception-based) robotic pick from the shelf. In this lab you will integrate pieces from previous labs to achieve that.

Item inventory

Before you can attempt an end-to-end pick similar to the Amazon Picking Challenge, it is useful to build simple infrastructure for organizing information about the items in the robot's inventory. At any given time the robot should know exactly which bin on the shelf contains which items. When you request that the robot pick a particular item from a particular bin, the robot's item/bin mapping should match the current state of the world. Hence, you as the experimenter need a way to tell the robot the item/bin distribution. You also need an interface for making pick requests, whether through the terminal or through a webpage.
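
One minimal way to do this is a small in-memory store that you update from a script or a service call. The sketch below is plain Python; the `Inventory` class and the item names are hypothetical (not from the labs), and in practice you might load the distribution from a JSON/YAML file or wrap these methods in a ROS service so a terminal or web frontend can call them.

```python
# Minimal sketch of an item/bin inventory store (names are hypothetical).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Inventory:
    """Maps bin IDs (e.g. 'A'..'L') to the names of the items they contain."""
    bins: Dict[str, List[str]] = field(default_factory=dict)

    def set_bin(self, bin_id: str, items: List[str]) -> None:
        """Tell the robot the current contents of a bin."""
        self.bins[bin_id] = list(items)

    def validate_request(self, bin_id: str, item: str) -> bool:
        """Check that a pick request matches the known state of the world."""
        return item in self.bins.get(bin_id, [])

    def remove(self, bin_id: str, item: str) -> None:
        """Update the inventory after a successful pick."""
        self.bins[bin_id].remove(item)


if __name__ == '__main__':
    inv = Inventory()
    inv.set_bin('A', ['crayola_64_ct', 'expo_dry_erase'])
    print(inv.validate_request('A', 'crayola_64_ct'))  # True
```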

Integration

Building a demo on the Fetch for picking up a requested item will mainly involve integrating parts you have already developed.

On the perception side you will use the perception capabilities (bin cropping, segmentation, recognition) to detect the requested item. Note that the recognition problem is subtly different in this context, because you know which items are in a particular bin. Instead of trying to categorize each item as one of many possible categories, you have far fewer options, and you can exploit the additional constraint that each object known to be in the bin must be assigned to something perceived in the bin.
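
One way to exploit that constraint is to treat recognition as an assignment problem: compute a feature distance between every (known item, perceived segment) pair and choose the pairing that minimizes total distance. The sketch below assumes a `feature_distance` function you would implement with your own recognizer's features (it is a hypothetical placeholder here); `scipy.optimize.linear_sum_assignment` then solves the matching.

```python
# Sketch of constrained recognition as an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment


def feature_distance(segment, item_name):
    """Placeholder: compare the segment's features (size, color histogram,
    etc.) against the stored template for item_name."""
    raise NotImplementedError


def match_bin_contents(segments, bin_items):
    """Assign each item known to be in the bin to one perceived segment.

    Returns a dict mapping item name -> index of the matched segment,
    minimizing the total feature distance over all pairings.
    """
    cost = np.array([[feature_distance(seg, item) for seg in segments]
                     for item in bin_items])
    rows, cols = linear_sum_assignment(cost)
    return {bin_items[r]: int(c) for r, c in zip(rows, cols)}
```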

On the manipulation side, you can use a simple heuristic to compute a grasp pose (as well as pre-grasp and extraction poses), or you can use the PbD system from Lab 31 to program different extraction strategies to choose from.
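
For the heuristic route, a common starting point is to place the grasp at the object's centroid and derive the other poses by offsetting along the bin's approach axis. The sketch below assumes the robot faces the shelf head-on so the approach axis is the base frame's x axis; the frame convention and offset values are illustrative, not values from the labs.

```python
# Sketch of a grasp-pose heuristic (offsets and frame convention assumed).
import copy

from geometry_msgs.msg import PoseStamped


def grasp_poses_for(centroid: PoseStamped,
                    approach_offset: float = 0.10,
                    lift_offset: float = 0.02):
    """Compute pre-grasp, grasp, and extraction poses from an object centroid.

    Pre-grasp: backed off along -x so the gripper can approach the bin in a
    straight line. Extraction: the grasp pose lifted slightly off the shelf
    surface and pulled straight back out of the bin.
    """
    grasp = copy.deepcopy(centroid)

    pre_grasp = copy.deepcopy(grasp)
    pre_grasp.pose.position.x -= approach_offset

    extraction = copy.deepcopy(grasp)
    extraction.pose.position.z += lift_offset      # lift off the shelf
    extraction.pose.position.x -= approach_offset  # pull back out of the bin

    return pre_grasp, grasp, extraction
```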

You might choose to implement your full system as a finite state machine to keep track of where failures occur.
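
A minimal version of such a state machine can be plain Python; the state and action names below are illustrative (in ROS you might use a package such as `smach` instead). Returning the state at which an attempt failed gives you the failure bookkeeping this suggestion is after.

```python
# Toy finite state machine for the pick pipeline (state names are illustrative).
import enum


class State(enum.Enum):
    DETECT = 'detect'        # perceive the bin, find the requested item
    PRE_GRASP = 'pre_grasp'  # move to the pre-grasp pose
    GRASP = 'grasp'          # approach and close the gripper
    EXTRACT = 'extract'      # lift and pull the item out of the bin
    DONE = 'done'
    FAILED = 'failed'


def run_pick(actions):
    """Drive the pipeline. `actions` maps each state to a function that
    returns True on success; on failure, the failing stage is reported."""
    order = [State.DETECT, State.PRE_GRASP, State.GRASP, State.EXTRACT]
    for state in order:
        if not actions[state]():
            return State.FAILED, state  # final outcome + where it broke down
    return State.DONE, None
```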