ArUco marker detection - mintforpeople/robobo-programming GitHub Wiki
ArUco marker detection in Python
An ArUco marker is an artificial beacon used in robotics for localization. The method implemented here is based on the OpenCV implementation.
For it to work properly, Robobo users must first perform a camera calibration process, as explained here:
ArUco detection uses the frames captured by the smartphone's camera to detect the markers present in the scene. To use this method, it is first necessary to create an instance of the Robobo class, pass it the IP address of the smartphone, and connect to it. Then, the readTag() method is called, and it returns an object containing all the detection information.
readTag()
Returns the last ArUco tag detected by the robot.
Returns: A Tag object
Return type: Tag
The steps followed are shown below:
import sys, os
sys.path.append(os.path.join(os.path.dirname(__file__), '.', 'robobo.py'))
from Robobo import Robobo
IP = "192.168.8.116" #change depending on your case
rob = Robobo(IP)
rob.connect()
rob.wait(0.1)
# Start the ArUco tag detection method
rob.startArUcoTagDetection()
# Set the marker size in millimetres
# In this example, the marker measures 50 mm per side
rob.changeTagSize(50)
# Read and print the last detected ArUco tag
aruco = rob.readTag()
print(aruco)
# Stop the ArUco tag detection method
rob.stopArUcoTagDetection()
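Note that readTag() returns immediately with the last detection, so calling it right after starting the detector may return an empty tag. A simple way to poll until a marker appears is sketched below; wait_for_tag and the non-empty-id check are our own illustration, not part of the Robobo library (with a connected robot you would pass rob.readTag as the read function):

```python
import time

def wait_for_tag(read_fn, timeout=5.0, interval=0.1):
    """Poll read_fn until it returns a tag with a non-empty id, or time out.

    read_fn is any zero-argument callable returning a Tag-like object
    (e.g. rob.readTag on a connected robot).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        tag = read_fn()
        # A detection is considered valid when the tag carries a non-empty id
        if tag is not None and getattr(tag, 'id', '') not in ('', None):
            return tag
        time.sleep(interval)
    return None  # no marker seen within the timeout
```

Usage with the robot would then be `tag = wait_for_tag(rob.readTag)`.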
The following message is displayed when a tag is read:
Aruco, Id:3 cor1:{'x': 274, 'y': 355} cor2:{'x': 205, 'y': 356} cor3:{'x': 204, 'y': 423} cor4:{'x': 271, 'y': 422} tvecs:{'x': 0.01595161310201788, 'y': 90.26916097755098, 'z': 292.41035879251626} rvecs:{'x': -3.0443320209844424, 'y': 0.02413107153862281, 'z': 0.08075085940152558}
The object returned by the method contains seven values:
- The ArUco ID.
- (x,y) coordinates of the four corners.
- Translation vector.
- Rotation vector.
The ID identifies the ArUco marker according to its binary pattern.
The (x,y) coordinates of the four corners identify the position of the marker in the image, as shown in the following image:
The bottom image shows the standard nomenclature used by the original ArUco library, where the corners are labelled clockwise. The top image corresponds to the nomenclature typically used with Robobo: because the frontal camera of the smartphone returns a mirrored image, the corners are labelled counterclockwise.
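Whatever the labelling order, the centre of the marker in image coordinates can be estimated by averaging the four corners. A small sketch, using the corner values from the sample reading above (the marker_center helper is our own, not part of the Robobo API):

```python
def marker_center(cor1, cor2, cor3, cor4):
    """Average the four corner coordinates to get the marker centre in pixels."""
    xs = [c['x'] for c in (cor1, cor2, cor3, cor4)]
    ys = [c['y'] for c in (cor1, cor2, cor3, cor4)]
    return sum(xs) / 4, sum(ys) / 4

# Corners taken from the sample reading shown above
center = marker_center({'x': 274, 'y': 355}, {'x': 205, 'y': 356},
                       {'x': 204, 'y': 423}, {'x': 271, 'y': 422})
print(center)  # → (238.5, 389.0)
```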
The translation vector gives the distance between the ArUco marker and the robot. It has three components (x, y, z): the first is the distance along the "x" axis, the second along the "y" axis, and the third along the "z" axis.
The rotation vector returns the angle between the ArUco marker and the Robobo around each of the three axes. It also has three components (x, y, z): the first is the rotation of the marker around the "x" axis relative to the robot, the second around the "y" axis, and the third around the "z" axis. Note that this method only detects the ArUco marker when all four corners are inside the image.
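For instance, assuming tvecs is expressed in the same units used for changeTagSize (millimetres here) and rvecs follows OpenCV's Rodrigues convention (an axis-angle vector in radians), the straight-line distance and the overall rotation angle of the sample reading could be computed as follows. Both helpers are our own sketch, not part of the Robobo library:

```python
import math

def distance_mm(tvecs):
    """Euclidean distance from the camera to the marker, in the units of tvecs."""
    return math.sqrt(tvecs['x'] ** 2 + tvecs['y'] ** 2 + tvecs['z'] ** 2)

def rotation_deg(rvecs):
    """Magnitude of the Rodrigues rotation vector, converted to degrees."""
    angle = math.sqrt(rvecs['x'] ** 2 + rvecs['y'] ** 2 + rvecs['z'] ** 2)
    return math.degrees(angle)

# Rounded values from the sample reading shown above
tvecs = {'x': 0.016, 'y': 90.269, 'z': 292.410}
rvecs = {'x': -3.044, 'y': 0.024, 'z': 0.081}
print(distance_mm(tvecs))   # ≈ 306.0 mm
print(rotation_deg(rvecs))  # ≈ 174.5 degrees
```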
Each value can be accessed individually by typing the following:
id = obj.id
cor1 = obj.cor1
cor2 = obj.cor2
cor3 = obj.cor3
cor4 = obj.cor4
tvec = obj.tvecs
rvec = obj.rvecs
'cor1', 'cor2', 'cor3' and 'cor4' correspond to the four corner coordinates of the ArUco marker in the image. 'tvecs' corresponds to the translation vector, and 'rvecs' to the rotation vector; both are Python dictionaries. To access the individual terms of a dictionary, we type the following:
x = obj.tvecs["x"]
y = obj.tvecs["y"]
z = obj.tvecs["z"]
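As a further sketch of working with these dictionaries, the horizontal bearing of the marker relative to the camera axis can be estimated from the "x" and "z" terms of tvecs. The bearing_deg helper is our own illustration, not part of the Robobo library:

```python
import math

def bearing_deg(tvecs):
    """Angle, in degrees, between the camera axis and the marker; 0 means dead ahead."""
    return math.degrees(math.atan2(tvecs['x'], tvecs['z']))

# Rounded tvecs values from the sample reading shown above
print(bearing_deg({'x': 0.016, 'y': 90.269, 'z': 292.410}))  # ≈ 0.003: the marker is almost centred
```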
Usage example
The following script shows an example where we call the readTag() method and use the four corner coordinates to draw the edges of the marker. From the translation vector (tvecs), we use the "z" term to show the distance between the ArUco marker and the Robobo. This example uses the video streaming library, which must be downloaded from: https://github.com/mintforpeople/robobo-python-video-stream:
import cv2
import numpy as np
import sys, os
sys.path.append(os.path.join(os.path.dirname(__file__), '.', 'robobo.py'))
from Robobo import Robobo
sys.path.append(os.path.join(os.path.dirname(__file__), '.', 'robobo-python-video-stream-master/robobo_video'))
from robobo_video.robobo_video import RoboboVideo
def drawAruco(image, id, cor1, cor2, cor3, cor4):
    try:
        # Draw the marker outline; polylines requires integer point coordinates
        pts = np.array([(cor1['x'], cor1['y']), (cor2['x'], cor2['y']),
                        (cor3['x'], cor3['y']), (cor4['x'], cor4['y'])], dtype=np.int32)
        cv2.polylines(image, [pts], True, (0, 255, 0), thickness=2)
        # Mark and label each corner with its own colour
        x, y = int(cor1['x']), int(cor1['y'])
        cv2.circle(image, (x, y), 3, (0, 0, 255), thickness=-1)
        cv2.putText(image, '1', (x, y + 20), 1, 1, color=(0, 0, 255), thickness=1)
        x, y = int(cor2['x']), int(cor2['y'])
        cv2.circle(image, (x, y), 3, (0, 255, 255), thickness=-1)
        cv2.putText(image, '2', (x, y + 20), 1, 1, color=(0, 255, 255), thickness=1)
        x, y = int(cor3['x']), int(cor3['y'])
        cv2.circle(image, (x, y), 3, (255, 0, 0), thickness=-1)
        cv2.putText(image, '3', (x, y + 20), 1, 1, color=(255, 0, 0), thickness=1)
        x, y = int(cor4['x']), int(cor4['y'])
        cv2.circle(image, (x, y), 3, (255, 0, 255), thickness=-1)
        cv2.putText(image, '4', (x, y + 20), 1, 1, color=(255, 0, 255), thickness=1)
        cv2.putText(image, 'ID: ' + str(id), (x, y + 40), 1, 1, color=(128, 128, 255), thickness=2)
    except IndexError:
        pass
    return image
IP = '192.168.8.116' #change depending on your case
rob = Robobo(IP)
rob.connect()
rob.startArUcoTagDetection()
video = RoboboVideo(IP)
video.connect()
while True:
    frame = video.getImage()
    # Read the last ArUco tag detected by the robot
    obj = rob.readTag()
    print(obj)
    img = drawAruco(frame, obj.id, obj.cor1, obj.cor2, obj.cor3, obj.cor4)
    # Convert the "z" distance from mm to cm
    value = obj.tvecs['z'] / 10
    print(f'Distance: {value} cm\n')
    cv2.imshow('smartphone camera', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        video.disconnect()
        cv2.destroyAllWindows()
        break
rob.stopArUcoTagDetection()
The following image shows the result of the script when the robot detects an ArUco tag placed in front of it:
Acknowledgement
Developing an Artificial Intelligence Curriculum Adapted to European High Schools, 2019-1-ES01-KA201-065742. More information: aiplus.udc.es
This project has been funded with support from the European Commission. This web reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.