Tic Tac Toe Project

Phase 1: Static Board & Pieces

Purpose of Phase 1:

  • Proof of concept of a full game loop: computer vision scans the board, an algorithm determines the next move, and the robot picks and places a piece

Project Description:

  • A tic-tac-toe board was printed on a piece of paper
  • Blocks were used as game pieces because they are easy to pick up
  • Computer vision recognized the printed X's & O's rather than the blocks themselves
  • The AI that determines the next move is based on Minimax algorithm code found on GitHub (insert link); a minimal sketch is shown after this list
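The linked Minimax code is not reproduced on this wiki, so below is a minimal sketch of how a minimax search picks the robot's move on a 3x3 board. The board encoding and helper names (`WIN_LINES`, `winner`, `minimax`) are illustrative assumptions, not the project's actual API.

```python
# Minimal minimax sketch for tic-tac-toe (illustrative, not the
# project's code). Board is a 9-element list; 'X' is the robot and
# maximizes, 'O' is the human and minimizes, ' ' is an empty cell.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the side to move."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    if ' ' not in board:
        return 0, None          # draw
    moves = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[i] = ' '
            moves.append((score, i))
    return max(moves) if player == 'X' else min(moves)

# Example: the robot ('X') must block O's top-row threat at cell 2.
board = list('OO  X    ')
print(minimax(board, 'X'))      # -> (0, 2): block, then draw with best play
```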

Project Constraints:

  • The human player was always O & the computer was always X
  • The human moved first, followed by the robot
  • The tic-tac-toe board was in a static location and orientation for every game
  • The pieces the robot picked up were in static locations
  • This application of tic-tac-toe is specific to the Motoman MH5L robot

Limitations:

  • Static board required precise setup for both camera view and board location. Any discrepancy in either would result in inaccurate or invalid robot moves.

Computer Environments:

  • Python 2.7
  • Ubuntu 18.04
  • Motoman MH5L Robot

Physical Setup:

  • Tic-tac-toe board printed on 8.5 x 11 in. paper
  • Blocks, made from scrap wood, were used for ease of picking

Above images, from left to right:

  • Area for board and block placement
  • Distance from the robot center to the corner of the top-right blue tape: ~67 cm
  • Distance between the block placement (left blue tape) and the board placement (right blue tape): ~30 cm
  • Complete setup of the tic-tac-toe game

Phase 2: Movable Board & Static X Pieces (No Fiducial Markers)

Purpose of Phase 2:

  • Exploration of capabilities and limitations of using only contour detection to find position/orientation without the use of fiducial markers.

Project Description:

  • 3D printed tic-tac-toe board (white ABS)
  • 3D printed X and O pieces (white ABS)
  • Computer vision detects only the player's O pieces; the X pieces sit in predetermined locations for the robot to pick up
  • The AI that determines the next move is based on Minimax algorithm code found on GitHub (insert link); see the sketch in Phase 1
  • Utilizes rostopics for camera image data (realsense-ros); a subscriber sketch follows this list
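A minimal rospy subscriber sketch for pulling frames off the realsense-ros image topic. The topic name below is the realsense-ros default and the node name is made up here; both may differ from the project's launch files.

```python
# Minimal rospy image subscriber sketch (topic and node names are
# assumptions; adjust to match the project's launch files).
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS Image message into an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    # ... run board/piece detection on `frame` here ...

rospy.init_node('tictactoe_vision')
rospy.Subscriber('/camera/color/image_raw', Image, on_image)
rospy.spin()
```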

Project Constraints:

  • The human player is O & the computer is X
  • The full tic-tac-toe board has to be in view of the camera
  • The camera is always in the same orientation when scanning the board; the camera rotation transform is hardcoded
  • The pieces the robot picks up are in static locations
  • Requires a custom MoveIt config for the camera transform

Limitations:

  • The symmetry of the square board makes full 180-degree orientation detection very difficult. The team tried both PCA/eigenvectors and point-slope from the square's corners; neither method could differentiate rotations that differ by a multiple of 45 degrees. The board must stay within +/- 45 degrees of rotation, perpendicular to the camera, for an accurate gameplay loop. The sketch below illustrates the PCA ambiguity.
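For illustration, a pure-NumPy version of the PCA approach described above. Because a square's covariance has (near-)equal eigenvalues, the "principal" axis is unstable and only meaningful modulo 90 degrees, which is the ambiguity the team hit. `board_angle` is a hypothetical helper, not project code.

```python
# PCA orientation sketch (illustrative). For a square contour the two
# eigenvalues are nearly equal, so the principal axis is degenerate and
# the recovered angle cannot resolve the board's true rotation.
import numpy as np

def board_angle(contour):
    """Estimate board rotation (degrees) from an Nx1x2 OpenCV contour."""
    pts = contour.reshape(-1, 2).astype(np.float64)
    pts -= pts.mean(axis=0)                  # center the points
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
    major = eigvecs[:, np.argmax(eigvals)]   # longest principal axis
    return np.degrees(np.arctan2(major[1], major[0]))
```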

Markers such as colored dots or ArUco markers can break the symmetry and allow OpenCV to detect the orientation of the tic-tac-toe board more accurately. See Phase 3 for more information.

Computer Environments:

  • Python 2.7
  • Ubuntu 18.04
  • Motoman MH5L Robot
  • Realsense-ros

Physical Setup:

TODO: insert images and/or video

  • 3D printed tic-tac-toe board with pieces

Phase 3: Movable Board & Static X Pieces (Fiducial Markers: Colored Dots)

Purpose of Phase 3:

  • Explore the capabilities and limitations of using only contour detection to find position/orientation, with colored squares to break the symmetry
  • Use fiducial markers (3 colored squares) to detect the orientation of the tic-tac-toe board

Project Description:

  • 3D printed tic-tac-toe board (white ABS)
  • 3D printed X and O pieces (white ABS)
  • Computer vision detects only the player's O pieces; the X pieces sit in predetermined locations for the robot to pick up
  • The AI that determines the next move is based on Minimax algorithm code found on GitHub (insert link); see the sketch in Phase 1
  • Utilizes rostopics for camera image data (realsense-ros)
  • Colored dots were recognized using Dream3D in a first attempt, but Dream3D works with static images, not frames from video
  • Color detection finds the red, green, and blue squares and obtains the orientation; a thresholding sketch follows this list
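A sketch of the threshold-based color detection described above. The HSV ranges are illustrative guesses that would need tuning for the actual markers and lighting, and the orientation helper assumes a particular red/blue corner layout; none of this is the project's tuned code.

```python
# Threshold-based marker detection sketch (HSV ranges are illustrative
# assumptions, not the project's tuned values).
import numpy as np
import cv2

HSV_RANGES = {
    'red':   ((0, 120, 70),   (10, 255, 255)),
    'green': ((40, 80, 70),   (80, 255, 255)),
    'blue':  ((100, 120, 70), (130, 255, 255)),
}

def marker_centroids(frame_bgr):
    """Return {color: (x, y)} pixel centroids of the colored squares."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centroids = {}
    for color, (lo, hi) in HSV_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m['m00'] > 0:
            centroids[color] = (m['m10'] / m['m00'], m['m01'] / m['m00'])
    return centroids

def board_orientation(centroids):
    """Angle of the red-to-blue edge (assumes red and blue share a board edge)."""
    (rx, ry), (bx, by) = centroids['red'], centroids['blue']
    return np.degrees(np.arctan2(by - ry, bx - rx))
```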

Limitations:

  • Dream3D works with static images, but the game board can move in between player moves, so a static image is not ideal in this scenario. The project now uses an image topic to obtain a frame and performs object & orientation detection on it; a one-shot frame grab is sketched below.
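The one-shot frame grab this fix points to can be done with `rospy.wait_for_message`; as in Phase 2, the topic name below is the realsense-ros default and may differ from the project's launch files.

```python
# One-shot frame grab from an image topic (topic name is an assumption).
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

rospy.init_node('board_snapshot')
msg = rospy.wait_for_message('/camera/color/image_raw', Image)
frame = CvBridge().imgmsg_to_cv2(msg, desired_encoding='bgr8')
# `frame` is a fresh BGR image taken right before the detection step.
```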

Computer Environments:

  • Python 2.7
  • Ubuntu 18.04
  • Motoman MH5L Robot
  • Realsense-ros
  • OpenCV version 4.2.0

Physical Setup:

Above images: top left, game board with colored squares as markers; top right, orientation detection using the colored squares as markers.

Phase 4: Using an Image Kernel to Detect Color Location - Movable Board & Static X Pieces

Purpose of Phase 4:

  • Use an image kernel to detect the locations of the blue, green, and red squares in the image
  • From those image locations, obtain the board orientation

Project Description:

  • Pass an RGB image kernel over the image to create a heat map of where red, green, and blue appear in the image
  • Determine pixel locations from the heatmap
  • Determine orientation from how much the kernel matrix must rotate to match the RGB values

Reason for Image Kernel:

  • We know that the colors green, red, and blue are in the image, but we need to find their locations.
  • The Phase 3 method finds the color in the image and then finds its location. The limitation of Phase 3 is that it detects colors purely through RGB values that fall within a threshold, which leads to inaccurate detection, inconsistent readings, and fluctuating color detection.
  • Phase 4 implements a kernel that reads the RGB values in the image, checks how closely they match red (255, 0, 0), green (0, 255, 0), or blue (0, 0, 255), and creates a heatmap of where the colors closest to true red, green, and blue are in the image. This is more accurate because kernels do not rely on a threshold to detect the color; they use convolutions to output the spots that most closely match the target RGB values. Even if the lighting fluctuates, this method should be robust enough to detect the color locations. A minimal sketch is shown after this list.
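A minimal sketch of the heatmap idea under these assumptions: each pixel is scored by its distance to the pure target color, and a normalized box kernel (a convolution) smooths the score map so a patch of near-target pixels beats a lone noisy pixel. The kernel size and scoring are illustrative, not the project's actual kernel.

```python
# Color-kernel heatmap sketch (scoring and kernel size are assumptions).
import numpy as np
import cv2

TARGETS = {'red': (0, 0, 255), 'green': (0, 255, 0), 'blue': (255, 0, 0)}  # BGR

def color_peak(frame_bgr, color, ksize=15):
    """Return the (x, y) pixel with the strongest response for one color."""
    target = np.array(TARGETS[color], dtype=np.float32)
    # Per-pixel score: small distance to the target color -> high score.
    score = -np.linalg.norm(frame_bgr.astype(np.float32) - target, axis=2)
    # Convolve with a normalized box kernel to build the heatmap.
    heat = cv2.boxFilter(score, -1, (ksize, ksize))
    y, x = np.unravel_index(np.argmax(heat), heat.shape)
    return x, y
```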

Computer Environments:

  • Python 2.7
  • Ubuntu 18.04
  • Motoman MH5L Robot
  • Realsense-ros
  • OpenCV version 4.2.0

Physical Setup:

  • X-pieces at constant distance from robot pedestal
  • Transform of the board obtained using fiducial markers (red, green, and blue circles on the corners of the board)

Demonstration:

Further Work:

  • Robot motion needs to be constrained or smoothed to avoid extraneous movement. Context: the robot should move linearly down to pick up an X, linearly back up, move to the location where it wants to place the X, then move down to drop the X.
  • The template matching algorithm could be made more robust to lighting, scale, and rotation changes; a baseline sketch follows this list.
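As a baseline for that future work, plain `cv2.matchTemplate` looks like the sketch below; note that by itself it is not scale- or rotation-invariant, which is exactly the robustness gap noted above. `find_template` is a hypothetical helper.

```python
# Baseline template matching sketch (a starting point, not the fix:
# plain matchTemplate handles neither scale nor rotation changes).
import cv2

def find_template(frame_gray, template_gray):
    """Return ((x, y) top-left corner, score) of the best normalized match."""
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    return max_loc, max_val
```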