Color Segmentation

For better or worse, we still rely heavily on color segmentation in vision. The camera gives us the image as a series of tuples (one per pixel), where each component of a tuple ranges from 0 to 255. What we want to do is turn those values into colors like GREEN or ORANGE. In other words, we are mapping millions of possible values down to roughly eight. Our system currently uses GREEN, ORANGE, BLUE, YELLOW, WHITE, RED, and NAVYBLUE. We do this mapping through a color table, or LUT (look-up table).
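
As a rough illustration (these names and sizes are hypothetical, not the actual nbites data structures), a LUT of this kind can be indexed by the high bits of each channel so the table stays a manageable size:

```cpp
// A minimal sketch of a LUT lookup, assuming YUV pixels and a table
// indexed by the top bits of each channel.
#include <cstdint>

const int SHIFT = 1;            // drop low bits to shrink the table
const int DIM   = 256 >> SHIFT; // 128 entries per channel

uint8_t lut[DIM][DIM][DIM];     // each entry holds one color byte

// Map one (y, u, v) tuple to its color byte.
uint8_t classify(uint8_t y, uint8_t u, uint8_t v) {
    return lut[y >> SHIFT][u >> SHIFT][v >> SHIFT];
}
```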

This year we switched to a bitwise representation for colors. In other words, we assign a specific bit to each color. If at some point we decide that a tuple is RED, then we set the RED bit. For that same tuple we could end up setting the ORANGE bit as well. That way we can easily test for the presence of a color, and we get things like "soft colors" basically for free.
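
Here is a minimal sketch of the idea; the bit assignments below are illustrative, not the exact values used in our code:

```cpp
#include <cstdint>

// One bit per color; a table entry can carry several bits at once.
enum Color : uint8_t {
    GREY     = 0x00,
    WHITE    = 0x01,
    GREEN    = 0x02,
    BLUE     = 0x04,
    YELLOW   = 0x08,
    ORANGE   = 0x10,
    RED      = 0x20,
    NAVYBLUE = 0x40,
};

// A "soft" color is just an entry with more than one bit set.
uint8_t pixel = ORANGE | RED;

// Testing for a color is a single AND, however many bits are set.
bool isOrange = (pixel & ORANGE) != 0;   // true
bool isGreen  = (pixel & GREEN)  != 0;   // false
```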

To build the mapping we use the Tool (old tool) and are moving to the QTool (new tool). The old tool is basically a point-and-click system: you saw something green, you picked green, and you clicked on the green patch. The tool would then figure out which table entries those image points corresponded to and set them accordingly. This works well enough, but it is time consuming and doesn't take advantage of any generalization. The new tool aims to offer much of the same functionality, but with the ability to quickly fill in large regions of the color table.
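
For a feel of what a single click does, here is a hedged sketch building on the hypothetical LUT above: the entry for the clicked tuple gets the chosen color bit OR-ed in, and optionally a small neighborhood of entries gets the same bit so nearby tuples generalize:

```cpp
// Label the LUT entry for one (y, u, v) tuple, plus a cube of
// neighboring entries, with the given color bit.
void label(uint8_t y, uint8_t u, uint8_t v, uint8_t colorBit, int spread = 1) {
    int yi = y >> SHIFT, ui = u >> SHIFT, vi = v >> SHIFT;
    for (int dy = -spread; dy <= spread; ++dy) {
        for (int du = -spread; du <= spread; ++du) {
            for (int dv = -spread; dv <= spread; ++dv) {
                int a = yi + dy, b = ui + du, c = vi + dv;
                if (a >= 0 && a < DIM && b >= 0 && b < DIM &&
                    c >= 0 && c < DIM) {
                    lut[a][b][c] |= colorBit;   // set the bit, keep any others
                }
            }
        }
    }
}
```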

Even with these improvements, using color is still problematic. Something that looks green in one context looks blue in another. Further, there are always people standing around the field wearing blue clothes, and so on. Finally, as the lighting changes, so do our color definitions. So what can we do? There are some possible approaches that could help:

- Use edge-based methods. These methods identify the edges between objects. Once an object has been outlined, you only need to assess its overall color, not the color of every pixel.
- Use color-gradient methods. These are similar to edge-based methods and work by monitoring how quickly the components that make up the color are changing (see the sketch after this list). The beauty of these is that they work on relative change and aren't so sensitive to the amount of light.
- Enjoy the fact that RoboCup is slowly losing all of its colors anyway. We have already greatly reduced the number of colored objects. This year we may lose another, as it is likely that both goals will have the same color.
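
To make the gradient idea concrete, here is a minimal sketch, assuming a single-channel 8-bit image stored row-major (apply it per channel for color). Because it works on differences between neighbors, a uniform additive brightness shift cancels out:

```cpp
#include <cstdint>
#include <cstdlib>

// L1 gradient magnitude at (x, y); assumes 1 <= x < width-1 and
// 1 <= y < height-1 so the neighbor lookups stay in bounds.
int gradientMagnitude(const uint8_t* img, int width, int x, int y) {
    int gx = img[y * width + (x + 1)] - img[y * width + (x - 1)];
    int gy = img[(y + 1) * width + x] - img[(y - 1) * width + x];
    return std::abs(gx) + std::abs(gy);   // threshold this to find edges
}
```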