Using TensorFlow Lite for the Ultimate Goal Challenge

What is TensorFlow Lite?

TensorFlow Lite is a lightweight version of Google's TensorFlow machine learning technology that is designed to run on mobile devices such as an Android smartphone. A trained TensorFlow model was developed to recognize the game elements of the 2020-2021 challenge, Ultimate Goal presented by Qualcomm. This TensorFlow model has been integrated into the FIRST Tech Challenge Control System software and can be used to identify and track these game pieces during a match.


This season's inference model can recognize and track a stack of four rings or a single ring.

For this season's challenge, the inference model was trained to recognize both a single Ultimate Goal ring and a stack of four rings. This approach was adopted because it is easier to train a model to distinguish a single ring from a quadruple ring stack than to train one that can reliably distinguish the individual rings of a multi-ring stack.

How Might a Team Use TensorFlow in the Ultimate Goal Challenge?

For this season's challenge, the field is randomized during the pre-match stage: a single die roll determines whether the Starter Stack contains zero, one, or four rings.

When the autonomous stage of the match begins, a robot can use TensorFlow to "look" at the Starter Stack Area and determine how many rings (if any) are stacked in that area. Based on the number of rings that it "sees," the robot can deliver its wobble goal to the appropriate target zone to score additional points during the autonomous portion of the match.
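The exact flow varies from robot to robot, but here is a rough Java sketch of the idea, loosely modeled on the SDK's TensorFlow sample op modes. The op mode name, confidence threshold, three-second look time, and Vuforia key placeholder are illustrative choices made for this page, not part of the official samples; the "UltimateGoal.tflite" asset and its "Quad"/"Single" labels match what the SDK samples use.

```java
import java.util.List;

import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.util.ElapsedTime;

import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

@Autonomous(name = "Ring Count Sketch", group = "Concept")
public class RingCountSketch extends LinearOpMode {
    // Built-in Ultimate Goal model and its labels, as used by the SDK sample op modes.
    private static final String TFOD_MODEL_ASSET = "UltimateGoal.tflite";
    private static final String LABEL_QUAD = "Quad";
    private static final String LABEL_SINGLE = "Single";

    // Replace with your team's own Vuforia developer key.
    private static final String VUFORIA_KEY = " -- YOUR KEY HERE -- ";

    private VuforiaLocalizer vuforia;
    private TFObjectDetector tfod;

    @Override
    public void runOpMode() {
        // Vuforia supplies camera frames to the TensorFlow object detector.
        VuforiaLocalizer.Parameters vuforiaParams = new VuforiaLocalizer.Parameters();
        vuforiaParams.vuforiaLicenseKey = VUFORIA_KEY;
        vuforiaParams.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;
        vuforia = ClassFactory.getInstance().createVuforia(vuforiaParams);

        // Create the detector and load the season's built-in model.
        int monitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfodParams = new TFObjectDetector.Parameters(monitorViewId);
        tfodParams.minResultConfidence = 0.75f;   // illustrative confidence threshold
        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParams, vuforia);
        tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_QUAD, LABEL_SINGLE);
        tfod.activate();

        waitForStart();

        // Look at the Starter Stack for a few seconds; no recognition means an empty stack.
        int ringCount = 0;
        ElapsedTime lookTimer = new ElapsedTime();
        while (opModeIsActive() && ringCount == 0 && lookTimer.seconds() < 3.0) {
            List<Recognition> recognitions = tfod.getUpdatedRecognitions();
            if (recognitions == null) continue;   // no new camera frame yet
            for (Recognition recognition : recognitions) {
                if (LABEL_QUAD.equals(recognition.getLabel())) ringCount = 4;
                else if (LABEL_SINGLE.equals(recognition.getLabel())) ringCount = 1;
            }
            telemetry.addData("Ring count", ringCount);
            telemetry.update();
        }

        // Drive to Target Zone A, B, or C based on ringCount (robot-specific code omitted).

        tfod.shutdown();
    }
}
```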

Important Note on Phone Compatibility

The TensorFlow Lite technology requires Android 6.0 (Marshmallow) or higher. If you are a Blocks programmer and you are using an older Android device that is not running Marshmallow or higher, the TensorFlow Object Detection category of Blocks will not appear in the Blocks design palette.
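For Java programmers, a minimal runtime check for this requirement might look like the helper below. This is a convenience method assumed for this page, not something provided by the FTC SDK.

```java
import android.os.Build;

/** Returns true when this device can run the SDK's TensorFlow Lite object detector. */
public static boolean tensorFlowSupported() {
    // Android 6.0 "Marshmallow" corresponds to API level 23 (Build.VERSION_CODES.M).
    return Build.VERSION.SDK_INT >= Build.VERSION_CODES.M;
}
```

An op mode could call this before creating the detector and skip TensorFlow initialization on an unsupported phone.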

Sample Op Modes

The FIRST Tech Challenge software contains sample Blocks and Java op modes that demonstrate how to use this TensorFlow technology to determine whether a single ring or a quadruple stack of rings is visible. The sample op modes also show how to determine where in the camera's field of view a detected object is located.

Click on the following links to learn more about these sample op modes.
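Separately from those samples, here is a hedged sketch of the "where in the field of view" part. The Recognition accessors come from the SDK; the helper name and the -1 to +1 convention are assumptions made here for illustration.

```java
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;

/**
 * Rough horizontal position of a detection within the camera image,
 * from -1.0 (far left edge) to +1.0 (far right edge).
 */
public static double horizontalOffset(Recognition recognition) {
    // Bounding-box edges and image size are reported in pixels.
    double boxCenterX = (recognition.getLeft() + recognition.getRight()) / 2.0;
    double imageCenterX = recognition.getImageWidth() / 2.0;
    return (boxCenterX - imageCenterX) / imageCenterX;
}
```

The Recognition interface also reports getTop(), getBottom(), and the image height, so a vertical offset can be computed the same way.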

Using a Custom Inference Model

Teams have the option of using a custom inference model with the FIRST Tech Challenge software. For example, some teams might prefer to use the TensorFlow Object Detection API to create an enhanced model of the game elements, or they might want to create a custom model that detects entirely different objects. Other teams might want to use an available pre-trained model to build a robot that can detect common everyday objects (for demo or outreach purposes, for example).
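As an illustrative sketch only (the file path and label names below are hypothetical placeholders, and the official sample op modes remain the authoritative reference), a custom model is typically loaded from the phone's storage with loadModelFromFile instead of loadModelFromAsset:

```java
// Assumes a TFObjectDetector field named 'tfod', created as in the ring-count sketch above.
// The file path and labels are hypothetical; use the values your custom model was trained with.
private static final String CUSTOM_MODEL_FILE = "/sdcard/FIRST/tflitemodels/MyCustomModel.tflite";
private static final String[] CUSTOM_LABELS = { "LabelA", "LabelB" };

private void loadCustomModel() {
    // loadModelFromFile reads a .tflite file from the Robot Controller's storage,
    // rather than a model bundled into the app as an asset.
    tfod.loadModelFromFile(CUSTOM_MODEL_FILE, CUSTOM_LABELS);
}
```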

The FTC software includes sample op modes (Blocks and Java versions) that demonstrate how to use a custom inference model:

Detecting Everyday Objects

You can use a pretrained TensorFlow Lite model to detect everyday objects, such as a clock, person, computer mouse, or cell phone. The following advanced tutorial shows how you can use a free, pretrained model to recognize numerous everyday objects.


TensorFlow can be used to recognize everyday objects like a keyboard, a clock, or a cellphone.
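As a small, hedged illustration (the pretrained model and its label map are whatever a team downloads; nothing below is specific to the FTC samples), reporting everyday-object detections amounts to loading that model with loadModelFromFile and reading the labels back:

```java
import java.util.List;

import org.firstinspires.ftc.robotcore.external.tfod.Recognition;

// Assumes 'tfod' and 'telemetry' come from an op mode like the ring-count sketch above,
// with a COCO-style pretrained .tflite model and its labels loaded via loadModelFromFile.
private void reportEverydayObjects() {
    List<Recognition> recognitions = tfod.getRecognitions();
    if (recognitions == null) return;
    for (Recognition recognition : recognitions) {
        // Labels such as "clock", "person", "mouse", or "cell phone" come from the
        // model's own label map, not from the FTC SDK.
        telemetry.addData(recognition.getLabel(),
                "%.0f%% confidence", recognition.getConfidence() * 100);
    }
    telemetry.update();
}
```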
