Ground truth with Kinect

This is a project worked on by Alex Lucyk, Eddy van der Kloot, and Johnny Coster, which uses the Kinect to view the field and determine the location (in field coordinates) of the ball and robots. This will allow us to test the robots' localization accuracy by comparing where they believe they are (and where they believe the ball is) with where the Kinect ground truth detection system says they actually are.

To set up this system with the Kinects, we used instructions and code from the University of Texas at Austin. Their paper on this system, A Low Cost Ground Truth Detection System for RoboCup Using the Kinect, is helpful for understanding the concepts behind this project. We also used instructions and documentation from the project's Wiki.

1. Installation

  • Download ROS Diamondback (the desktop-full install) from here.
  • Install the following additional packages.
    sudo apt-get install ros-diamondback-perception-pcl-addons ros-diamondback-openni-kinect
  • Download the eros package, which color_table depends on.
    cd
    mkdir kinect-stuff
    cd ~/kinect-stuff
    svn co https://code.ros.org/svn/eros/tags/diamondback eros
  • Download the source code for the austinvilla stack (this will take a while).
    cd ~/kinect-stuff
    svn co https://utexas-ros-pkg.googlecode.com/svn/trunk/stacks/austinvilla austinvilla
  • We modified the file /austinvilla/ground_truth/src/nodes/detect.cc. Replace the original file with our modified file whose code is found here.
  • Update your ~/.bashrc file with the locations of these 2 stacks by adding these lines after the original line added during the ROS installation.
    source /opt/ros/diamondback/setup.bash
    export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:~/kinect-stuff/eros:~/kinect-stuff/austinvilla

Remember to call source ~/.bashrc so that ROS indexes the stacks in the current terminal window.

  • Now you should be able to build the packages. Install any remaining system dependencies and then call rosmake.
    rosdep install austinvilla
    rosmake austinvilla

2. Setting up the Kinects

We decided it would be best to use two Kinects to view the entire field, one for each half. We only had time, however, to set up one, but another has been purchased and can easily be set up in the future. We mounted the Kinect about 3 meters above the sideline at around the midpoint of the blue goal's half of the field. The sensor was adjusted (by launching the calibrate node with roslaunch ground_truth calibrate.launch) to view the close and far side lines as well as the center circle and goalie box. The only L-corner it can't see is the one closest to it (because of how close the wall is to the field; perhaps this can be improved).

The second Kinect should be set up about 3 meters up on the opposite wall in the middle of the yellow goal's half. The two Kinects will have some overlap in the middle.

3. The Color Table

3.1 Getting a bag file

  • Launch the openni_kinect driver.
    roslaunch openni_camera openni_node.launch
  • Verify that the Kinect is publishing an image by running the following command in a separate tab.
    rosrun image_view image_view image:=/camera/rgb/image_color
  • Ensure that the Kinect is pointing at some appropriate colors that you want to segment. Run the following command in a separate tab to record the bag file. To finish recording, kill the rosbag record process with Ctrl+C.
    rosbag record /camera/rgb/image_color

3.2 Constructing the color lookup table

  • Launch the tool. In order to launch you must be in the kinect-stuff/austinvilla/color_table directory.
    rosrun color_table color_table
    NOTE: if you get a seg fault when launching the tool or opening a bag file, just keep trying. It'll work eventually.
  • Open a bag file in the Classification Tool window.
  • The default color table will automatically load. Use the Vision window to adjust the color table (similar to the qtool). Detection with the kinect is pretty good, so the color table does not need to be overly accurate. Save when finished.

4. Calibrate the Kinect

  • Launch the Kinect driver.
    roslaunch openni_camera openni_node.launch
  • Launch the calibrate node.
    roslaunch ground_truth calibrate.launch
  • Follow the instructions on the bottom of the pointcloud visualizer screen. The calibration process is laid out below:
    • Click on 5 arbitrary points on the field (click on the image, not the pointcloud). This will help the Kinect distinguish between the field and other objects in the image.
    • A separate small window will show a picture of the field (including the blue and yellow goals). You will next go around clicking on every visible L-corner, T-corner, and field cross in the image in a particular order. A black dot in the smaller screen will denote which point on the field (corner) to click on next. Left click that point and then right click to move the black dot to the next spot. Ctrl + Right Click to move the black dot back and re-click that spot on the field. If the black dot is on a point on the field not visible by the Kinect being calibrated, just skip that point by right clicking.
    • Once all these landmarks have been clicked, a 3D pointcloud representation of the visible section of the field will appear (very cool). The origin of the set of axes that appears denotes where the Kinect believes the center of the field is. Repeat this calibration process until the origin and the actual field center reasonably match up.

5. Robot and Ball Detection

5.1 Running the detect node

  • Launch the Kinect driver.
    roslaunch openni_camera openni_node.launch
  • Launch the detect node.
    roslaunch ground_truth detect.launch
    You should see a 3D representation of the field with any balls and robots shown. Pretty frickin cool.

5.2 Obtaining coordinates of objects

The code as it came did not print out the coordinates of the detected objects, so this is where we had to add code of our own. As mentioned in the "Installation" section, we modified the file /austinvilla/ground_truth/src/nodes/detect.cc. With our new code, the exact locations of any robots or balls detected are written to the files ~/.ros/robotLog.txt and ~/.ros/ballLog.txt every second or so, one line per detection giving the object type (Robot or Ball) and its location on the field.

When the detect node is first run, these files are created with the appropriate time and date, and data is written to them continuously. New files with a new time and date are created only when the node is closed and reopened.
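
To make clear what this looks like, here is a rough sketch of the kind of logging involved; the file names, helper functions, and exact line format here are illustrative only, and the modified detect.cc linked in the Installation section is the authoritative version.

    // Rough sketch of the logging added to detect.cc (names and line format are illustrative).
    #include <ctime>
    #include <fstream>

    static std::ofstream robotLog;
    static std::ofstream ballLog;

    // Open the log files once and stamp them with the current date and time.
    // When launched via roslaunch, relative paths like these end up under ~/.ros/.
    void openLogs() {
        robotLog.open("robotLog.txt");
        ballLog.open("ballLog.txt");
        std::time_t now = std::time(NULL);
        robotLog << "Robot log started " << std::ctime(&now);
        ballLog << "Ball log started " << std::ctime(&now);
    }

    // Called for each detection; x and y are field coordinates.
    void logRobot(float x, float y) {
        robotLog << "Robot at location " << x << " " << y << std::endl;
    }

    void logBall(float x, float y) {
        ballLog << "Ball at location " << x << " " << y << std::endl;
    }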

The pointcloud views the center of the field as (0,0), yet our C++ code considers a particular corner to be (0,0) when the robots localize themselves. Therefore, we converted the Kinect's coordinate system (through a simple mathematical transformation) to an origin at the blue goal's right corner. This way, it will be easy to compare the Kinect's output of a robot's location with the robot's belief of its location.
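
To illustrate, here is a minimal sketch of that kind of origin shift; FIELD_LENGTH and FIELD_WIDTH are placeholder values, and the real constants and axis directions must match the nbites field coordinate conventions.

    // Sketch of shifting the Kinect's center-of-field origin to a corner origin.
    // FIELD_LENGTH and FIELD_WIDTH are placeholders (meters), not the real nbites constants.
    const float FIELD_LENGTH = 6.0f;
    const float FIELD_WIDTH  = 4.0f;

    struct FieldPoint { float x; float y; };

    // The Kinect reports (0,0) at the field center; the robots use a corner as (0,0),
    // so translate by half the field dimensions.
    FieldPoint centerToCorner(float kinectX, float kinectY) {
        FieldPoint p;
        p.x = kinectX + FIELD_LENGTH / 2.0f;
        p.y = kinectY + FIELD_WIDTH / 2.0f;
        return p;
    }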

6. Future Work

At some point the second Kinect should be set up so the whole field is viewable. The paper talked of the two Kinects running independently, so we don't know if it would be possible to combine the pointclouds of the two Kinects to get a representation of the whole field. This probably isn't even needed.

The next step to take would be to find where the robots print their localization data. It would then just be a matter of printing, side by side, the Kinect's location data for a robot and the robot's localization estimate of its own location. This will be very helpful for determining how accurately our robots can calculate their exact position on the field and how that changes over time.
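
Once both sets of coordinates are in the same corner-origin frame, the comparison itself is just a distance calculation; a minimal sketch with hypothetical names:

    #include <cmath>

    // Euclidean distance between where the Kinect says a robot is and where the
    // robot believes it is; both points assumed to be in the same field frame.
    float localizationError(float kinectX, float kinectY,
                            float beliefX, float beliefY) {
        float dx = kinectX - beliefX;
        float dy = kinectY - beliefY;
        return std::sqrt(dx * dx + dy * dy);
    }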
