Camera and Vision Processing - GirlsOfSteelRobotics/2016GirlsOfSteel GitHub Wiki
Claire says: after 12/8, I have a bunch of links and documents regarding the camera and vision processing. I thought it might be better to share them with everyone rather than losing them in the mess of my desktop! We don't have all the answers yet, but given our goal to use the camera both in preseason and in the regular season, we should share information as we learn it.
There are really several tasks related to vision processing. First, you need to hook up the camera and get information from it. Then, you want to process the images in some useful way.
Important Facts
Here I will put lessons we've learned so far about camera and image processing. If this gets long, it can get its own wiki page:
- There needs to be a light mounted RIGHT NEXT to the camera on the robot, or the reflection off the tape is not visible. (The retroreflective tape bounces light straight back toward its source, so the light has to be near the lens for the camera to see the glow.)
Example Eclipse projects
The Example Java projects include several very helpful pieces of code. Go to File->New->Other...->WPILib Robot Development->Example Robot Java Project. The example projects are:
- SimpleVision
- Intermediate Vision
- 2015 Color Vision Sample
- 2015 Retro Vision Sample
- Axis Camera Sample
The first one just gets the camera feed and sends it to the dashboard. The others do a bit more processing. Actually, the 2015 Vision Samples don't get the image from the camera at all; they only do a bunch of image processing on a stored image. We will need to put the pieces together ourselves.
The 2015 Vision Samples both have a compilation error in them, which is kind of annoying. Since I haven't tried to run them, I haven't tried to fix it, but don't be alarmed when you see it too.
Camera setup
This is pretty easy:
(You can see it in the Simple Vision example, above; I also found a helpful post: http://www.chiefdelphi.com/forums/showthread.php?p=1427102&highlight=CameraServer#post1427102)
```java
// CameraServer is a singleton in WPILib; there is no public constructor.
CameraServer camera = CameraServer.getInstance();
camera.setQuality(50);
camera.startAutomaticCapture("cam0");
```
I think "cam0" is the right name for the USB camera; we will have to test. But this only works for the simplest setup: startAutomaticCapture() sends the picture straight to the dashboard, with no chance to process it. If we want to process, we need to get the image ourselves, do the processing, and then set the processed image on the dashboard.
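Here's a sketch of that grab-process-send loop, based on my reading of the Intermediate Vision example. I haven't run this, and the camera name "cam0" is an assumption we'd need to verify on the roboRIO:

```java
// SKETCH ONLY -- assumes the 2015/2016 WPILib + NIVision bindings.
// "cam0" is a guess at the USB camera name; check it on the roboRIO.
Image frame = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0);
int session = NIVision.IMAQdxOpenCamera("cam0",
        NIVision.IMAQdxCameraControlMode.CameraControlModeController);
NIVision.IMAQdxConfigureGrab(session);
NIVision.IMAQdxStartAcquisition(session);

while (isOperatorControl() && isEnabled()) {
    NIVision.IMAQdxGrab(session, frame, 1);   // pull one frame from the camera
    // ... do our image processing on `frame` here ...
    CameraServer.getInstance().setImage(frame);  // send the processed frame to the dashboard
}
NIVision.IMAQdxStopAcquisition(session);
```

The key difference from the simple setup above is that we call setImage() ourselves each loop, so there's a spot in the middle to insert processing.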
Image processing
I don't have all the answers on this, but I do have a couple of useful references. We may want to make another wiki page or sections to explain the concepts to ourselves, but here are some places to start:
This link from the FRC tutorials for 2014 has a nice overview of the steps involved in image processing to identify a target. It's centered on the 2014 task, which had those strips of tape, but the basic pipeline (masking, identifying particles, figuring out their shapes, recognizing the target, computing distance, etc.) applies to any kind of target identification. What changes is the geometry of the thing we're looking for. Reading this and THEN looking at the 2015 Vision Samples really helped me figure out what was going on.
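To make the last step (computing distance) concrete, here is the standard pinhole-camera estimate: if we know the target's real width, its measured width in pixels, and the camera's horizontal field of view, similar triangles give the distance. The FOV and target sizes below are made-up placeholder numbers, not measurements of our camera:

```java
public class TargetDistance {
    /**
     * Estimate distance to a target of known real-world width.
     * At distance d, the full image spans 2 * d * tan(FOV/2) feet, so
     * d = (targetWidthFt * imageWidthPx) / (2 * targetWidthPx * tan(FOV/2)).
     *
     * @param targetWidthFt    real-world width of the target, in feet
     * @param targetWidthPx    measured width of the target in the image, in pixels
     * @param imageWidthPx     total image width, in pixels
     * @param horizontalFovDeg camera's horizontal field of view, in degrees
     * @return estimated distance to the target, in feet
     */
    public static double estimate(double targetWidthFt, double targetWidthPx,
                                  double imageWidthPx, double horizontalFovDeg) {
        double halfFovRad = Math.toRadians(horizontalFovDeg / 2.0);
        return (targetWidthFt * imageWidthPx)
                / (2.0 * targetWidthPx * Math.tan(halfFovRad));
    }

    public static void main(String[] args) {
        // Placeholder numbers: a 2 ft wide target filling 160 px of a
        // 640 px image, with a 60-degree horizontal FOV.
        System.out.println(estimate(2.0, 160.0, 640.0, 60.0) + " ft");
    }
}
```

Notice the target's width in pixels halves as the distance doubles, which is a quick sanity check on the formula.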
I think that the Java code explanation from 2014 should map onto the 2015 samples as well; it walks through the code and explains things like the filters and particle identification that's happening there. I haven't looked closely at it.
Another example is here. It talks more about LabView, but has some description of the API calls that we might use in processing.
I also downloaded the IMAQ Vision Concepts Manual from NI, but it's 300 pages long, so I wouldn't try to read it top to bottom if I were you. It might be a useful reference, though, on certain ideas in image processing.