# Tracking stuff using OpenCV (HU-ICT-LAB/RobotWars GitHub Wiki)
## A good exercise
It's a good exercise to create a simple script that covers the following steps:
- Getting an observation
- Processing the observation in a familiar framework
- Taking action based on the processed information

The script below does this by reading camera frames into NumPy arrays, processing them with an OpenCV Haar cascade to track objects, rotating the gimbal to point at the objects, and firing.
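As a minimal sketch of that observe–process–act loop, without any robot hardware: the snippet below stubs out the camera with a synthetic image and uses a placeholder brightness-threshold detector in place of the Haar cascade (all function names here are illustrative, not part of any SDK).

```python
import numpy as np

def get_observation():
    # Stub camera: a black 120x160 image with one bright square as the "object"
    img = np.zeros((120, 160), dtype=np.uint8)
    img[40:60, 100:120] = 255
    return img

def process(img):
    # Stub detector: bounding box of all bright pixels (stands in for the cascade)
    ys, xs = np.nonzero(img > 128)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max() - xs.min(), ys.max() - ys.min()

def act(box, img_shape):
    # Action: offset of the box center from the image center, as gimbal velocities
    x, y, w, h = box
    dx = (x + w / 2) - img_shape[1] / 2
    dy = (y + h / 2) - img_shape[0] / 2
    return dx, -dy  # vertical axis is flipped, as in the full script

img = get_observation()
box = process(img)
if box is not None:
    print(act(box, img.shape))
```

The full script below has exactly this shape, with the real camera, the Haar cascade, and the gimbal filling in the three roles.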
## The script
```python
from robomaster import robot, blaster
import cv2
import imutils

gimbal_speed = 1.5  # Speed to move the gimbal by
gvx, gvy = 0, 0  # Starting gimbal velocities

# Load the Haar cascade
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Set up the window to draw to
window_name = 'Robot'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)
cv2.resizeWindow(window_name, 800, 800)

# Connect to / set up the robot
ep_robot = robot.Robot()
ep_robot.initialize(conn_type="sta", sn="159CGAC0050QS0")  # SN is the serial number of the specific robot
ep_camera = ep_robot.camera
ep_blaster = ep_robot.blaster
ep_gimbal = ep_robot.gimbal
ep_led = ep_robot.led
ep_gimbal.recenter()
ep_camera.start_video_stream(display=False, resolution='360p')

while True:
    # Get the newest image from the camera and resize
    img = ep_camera.read_cv2_image(strategy='newest')
    img = imutils.resize(img, width=250)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect objects
    bounding_boxes = cascade.detectMultiScale(gray, 1.1, 4)

    # Draw the bounding boxes
    for (x, y, w, h) in bounding_boxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

    if len(bounding_boxes) > 0:
        dx = sum(x + w / 2 for x, _, w, _ in bounding_boxes) / len(bounding_boxes) - img.shape[1] / 2  # Average horizontal offset to center
        dy = sum(y + h / 2 for _, y, _, h in bounding_boxes) / len(bounding_boxes) - img.shape[0] / 2  # Average vertical offset to center
        gvx, gvy = dx * gimbal_speed, -dy * gimbal_speed  # Update gimbal velocities using the offsets to center
        ep_led.set_gimbal_led(r=0, g=255, b=0)  # Set the gimbal lights to green
        ep_blaster.fire(fire_type=blaster.WATER_FIRE, times=1)  # Fire the blaster
    else:
        # Apply friction to the gimbal
        gvx *= 0.7
        gvy *= 0.7
        ep_led.set_gimbal_led(r=255, g=0, b=0)  # Set the gimbal lights to red

    # Update the physical gimbal velocity using the newly calculated values
    ep_gimbal.drive_speed(pitch_speed=gvy, yaw_speed=gvx)

    # Draw the image to the window
    cv2.imshow(window_name, img)
    cv2.waitKey(1)

    # Stop the loop when the window's [X] button is pressed
    if cv2.getWindowProperty(window_name, cv2.WND_PROP_VISIBLE) < 1:
        break

# Set everything back to normal
cv2.destroyAllWindows()
ep_gimbal.drive_speed(pitch_speed=0, yaw_speed=0)
ep_gimbal.recenter()
ep_camera.stop_video_stream()
ep_robot.close()
```
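The aiming math in the loop, averaging the detected box centers and measuring their offset from the image center, can be isolated and checked on its own (the function name `center_offset` is just for illustration):

```python
def center_offset(boxes, frame_w, frame_h):
    """Average (dx, dy) of box centers relative to the frame center."""
    n = len(boxes)
    dx = sum(x + w / 2 for x, _, w, _ in boxes) / n - frame_w / 2
    dy = sum(y + h / 2 for _, y, _, h in boxes) / n - frame_h / 2
    return dx, dy

# A single box centered exactly on a 250x140 frame yields no offset:
print(center_offset([(105, 60, 40, 20)], 250, 140))  # -> (0.0, 0.0)
```

A positive `dx` means the target is to the right of center, so the script drives the yaw in that direction; `dy` is negated before use because image coordinates grow downward while the gimbal's pitch grows upward.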
## Notes / Instructions
To run the script above you'll need the following libraries (installable with `pip install robomaster opencv-python imutils`):
- `robomaster` (the DJI RoboMaster SDK)
- `opencv-python` (`cv2`)
- `imutils`
You'll also need a Haar cascade file, for example `haarcascade_frontalface_default.xml`, which detects faces. You'll probably want to change the serial number, and tune `gimbal_speed` depending on your network lag.
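To build intuition for the `0.7` "friction" factor: when no detections come in, each loop iteration multiplies the leftover gimbal velocity by 0.7, so it decays geometrically and the gimbal coasts to a stop instead of halting abruptly. A quick offline check (the starting velocity is arbitrary):

```python
gv = 100.0           # some leftover gimbal velocity
for frame in range(10):
    gv *= 0.7        # friction applied once per frame without a detection
print(round(gv, 2))  # -> 2.82
```

After ten detection-free frames only about 3% of the velocity remains; a smaller factor stops the gimbal faster, a larger one makes it drift longer.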
## Sources
- D. (n.d.). RoboMaster-SDK. GitHub. Retrieved October 22, 2021, from https://github.com/dji-sdk/RoboMaster-SDK
- Introduction — RoboMaster Developer Guide documentation. (n.d.). RoboMaster Developer Guide. Retrieved October 15, 2021, from https://robomaster-dev.readthedocs.io/en/latest/introduction.html
## Related issues
Issues: #25