Obstacle Detection and Image Processing - Carleton-SRCL/SPOT GitHub Wiki

ODC_SS.py

Click here for raw code
   ''' Code used to detect obstacles using image processing methods, rather than the ArUco markers as is on the target

Written by Shae Simonson, Hayden Arms, Adrian Comisso, and Finnian Tuck 2023/2024
contacting author: [email protected]

some of the logic was based on the stack overflow question and responce that can be found at
https://stackoverflow.com/questions/60357533/separate-objects-countours-with-opencv

some of the functions were taken directly from the openCV website'''

# Modules and Packages required to run code
import cv2

import numpy as np
import os
from datetime import datetime
from cv2 import aruco
import math
import SDM_AC
import ADPT_HA

'''File Locations of images used on Shae Simonsons's computer. 
Once code is working well, these will be replaced by live video feed

ensure only one image is uncommented out at once, otherwise it might not grab the image needed'''
# worst case scenario
# img_path = r'C:\image.jpg'
# img_path = r'C:\image.jpg'


# img_path = r'C:\image.jpg'

# img Used for building this code that this code works
# img_path = r'C:\image.jpg'

# dosent work yet
# img_path = r'C:\image.jpg'


# Testing measurments of distance estimation
# img_path = r'C:\Measure_1.jpg'


# lighting that works
# img_path = r'C:\WIN_20231212_17_52_46_Pro.jpg'

# Lighting that dosent work (issue with the checking for simmilar size of boxes?)
# img_path = r'C:\WIN_20231212_17_52_50_Pro.jpg'

# img_path = r'C:\WIN_20231212_17_52_51_Pro.jpg'

# lighting and half in frame
# img_path = r'C:\WIN_20231212_17_55_45_Pro.jpg'


########################################################################################################################
# Constants and Factors used in this script
########################################################################################################################

# # for display purposes #NOT NEEDED FOR MAIN CODE
# img_red_fact = 55
# scale_percent = img_red_fact  # percent of original size

########################################################################################################################
# Constants and Factors used in this script
# cropped_factor = 0.7
#
# kernol_size_blur = 3 #HAYDEN HAD 3
# kernol_size_morph_open = 2  # HAYDEN HAD 2       # 7 from the code online on stack overflow https://stackoverflow.com/questions/60357533/separate-objects-countours-with-opencv
# kernol_size_morph_close = 2  # HAYDEN HAD 2        # 13 from the code online at https://stackoverflow.com/questions/60357533/separate-objects-countours-with-opencv
# threshold_after_blur = 170  #HAYDEN HAD 130              # max value of 255 #recently adjusted from 170 - to be more inclusive
#
# C_adaptive = 12
# blockSize_adaptive = 13
# min_contour_area = 50    #HAYDEN HAD 10
#
# threshold_distance = 230
# ARUCO_threshold_distance_factor = 1.5
# min_box_size = 300  # Set your minimum box size here
# min_box_size_after_merge = 3000
# line_thickness = 8
#
# focal_length_pixels = 694.88  # Pixels
# field_of_view_horiz = math.pi / 2  # Degrees
# baseline = 0.120  # Camera separation distance in meters
#
# similarity_threshold_percentage = 20
#
# resize_factor = 34
#
# calib_data_path = r'C:\CalibrationFolder'
# output_file_path = r'C:\FinalImages'




cropped_factor = 0.7

kernol_size_blur = 3
kernol_size_morph_open = 2  # 7 from the code online on stack overflow https://stackoverflow.com/questions/60357533/separate-objects-countours-with-opencv
kernol_size_morph_close = 2  # 13 from the code online at https://stackoverflow.com/questions/60357533/separate-objects-countours-with-opencv
threshold_after_blur = 170  # max value of 255 #recently adjusted from 170 - to be more inclusive

C_adaptive = 13
blockSize_adaptive = 15
min_contour_area = 10

threshold_distance = 500
ARUCO_threshold_distance_factor = 1.5
min_box_size = 300  # Set your minimum box size here
line_thickness = 8
min_box_size_after_merge = 3000

focal_length_pixels = 694.88  # Pixels
field_of_view_horiz = math.pi / 2  # Degrees
baseline = 0.120  # Camera separation distance in meters

similarity_threshold_percentage = 20

resize_factor = 34

calib_data_path = (r"Big")
# calib_data_path = r'C:\CalibrationFolder'
# calib_data_path = r'C:\CalibrationFolder'
output_file_path = r'C:\OutputFolder'



########################################################################################################################
# Functions used in this script
########################################################################################################################

def save_image(image, filename_prefix):
   timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
   filename = f"{filename_prefix}_{timestamp}.jpg"
   output_path = os.path.join(output_file_path, filename)
   cv2.imwrite(output_path, image)
   print(f"Saved: {output_path}")

def resize_for_display(image, scale_percent):
   width = int(image.shape[1] * scale_percent / 100)
   height = int(image.shape[0] * scale_percent / 100)
   dim = (width, height)
   # Resize the image to the specified dimensions for display
   resized_image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
   return resized_image


# Split the image into two halves for each camera lens
def split_image(image):
   # check if the image is 2D or 3D
   if len(image.shape) == 2:
       # convert to 3D where each layer is Red, Green, and Blue
       image = np.expand_dims(image, axis=2)

   # dimensions of the image
   dimensions = image.shape
   height = dimensions[0]
   width = dimensions[1]
   midpoint = width // 2

   # split the image into two halves
   left_half = image[:, :midpoint, :]
   right_half = image[:, midpoint:, :]

   # return the two halves
   return left_half, right_half


# get rid of scaled_height percentage of the top of the image
def crop_bottom_scaled(image, scaled_height):
   # Function to cut off a given factor of the top of the image, for ex, 0.6 returns 60% of image from bottom to 60%
   if isinstance(image, str):
       # Read image using OpenCV
       img = cv2.imread(image)
       # Convert BGR to RGB
       img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
   elif isinstance(image, np.ndarray):
       # If input is already a NumPy array, assume it's in RGB format
       img_rgb = image
   else:
       raise ValueError("Input must be either an image path (str) or a NumPy array.")

   # Get the dimensions of the image
   height, width, _ = img_rgb.shape
   # Calculate the new height based on the scaling factor
   new_height = int(height * scaled_height)
   # Crop the bottom scaled part
   cropped_img = img_rgb[height - new_height:, :, :]
   # package the cropped dimensions
   cropped_dim = (new_height, width)
   # Return the cropped image
   return cropped_img, cropped_dim


# Resize the image and then calibrate it
def Calibrate_and_resize_image(image, calib_data_path):
   # import ditortion matrices
   l_cam_mat = np.load(calib_data_path + '\Left_calibration_matrix.npy')
   l_dist_coef = np.load(calib_data_path + '\Left_distortion_coefficients.npy')
   r_cam_mat = np.load(calib_data_path + '\Right_calibration_matrix.npy')
   r_dist_coef = np.load(calib_data_path + '\Right_distortion_coefficients.npy')

   # undistort images
   imgL, imgR = split_image(image)
   imgR = cv2.resize(imgR, (1280, 720))
   img_und_R = cv2.undistort(imgR, r_cam_mat, r_dist_coef)
   imgL = cv2.resize(imgL, (1280, 720))
   img_und_L = cv2.undistort(imgL, l_cam_mat, l_dist_coef)

   img = np.concatenate((img_und_L, img_und_R), axis=1)
   return img


# find the contours along the white regions of the processed_image
def find_contours(processed_image):
   # Find contours on the processed image
   contours, _ = cv2.findContours(processed_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
   return contours


# Find the bounding boxes around the contours (a list) that is fed to it
def find_bounding_boxes(contours):
   # Store all bounding boxes in the 'boxes' array
   boxes = []
   for contour in contours:
       x, y, w, h = cv2.boundingRect(contour)
       boxes.append((x, y, x + w, y + h))
   return boxes


# find the two boxes closest in size based on absolute size difference (to be replaced by finn's code once it works)
# Written by shae, replaced by Finns code (which was edited by shae as well)
# SEEMS TO BE WORKING WELL, BUT WOULD REQUIRE EXTENSIVE TESTING TO CONFIRM IN MANY DIFFERENT SCENARIOS
def find_closest_boxes_in_size(boxes, image_width):
   # Ensure there are at least two boxes for size comparison
   if len(boxes) < 2:
       raise ValueError("At least two boxes are required for size comparison.")

   min_size_difference = float('inf')  # Initialize with a large value
   closest_boxes = None

   # Iterate through all pairs of boxes
   for i in range(len(boxes)):
       box1 = boxes[i]

       for j in range(i + 1, len(boxes)):
           box2 = boxes[j]

           # Calculate the size difference between the two boxes
           size_difference = abs((box1[2] - box1[0]) * (box1[3] - box1[1]) - (box2[2] - box2[0]) * (box2[3] - box2[1]))

           # Print information for debugging
           # print(f"Box 1: {box1}, Box 2: {box2}, Size Difference: {size_difference}")

           # Check if one box is on the left half and the other on the right half
           if (((box1[0] + box1[2]) / 2 <= image_width / 2 and (box2[0] + box2[2]) / 2 > image_width / 2) or
                   ((box2[0] + box2[2]) / 2 <= image_width / 2 and (box1[0] + box1[2]) / 2 > image_width / 2)):
               # Ensure neither box spans across the middle of the image
               if ((box1[2] <= image_width / 2 and box2[0] >= image_width / 2) or
                       (box2[2] <= image_width / 2 and box1[0] >= image_width / 2)):
                   # Update the closest pair if the current pair meets the conditions
                   if size_difference < min_size_difference:
                       min_size_difference = size_difference
                       closest_boxes = (box1, box2)

   # Check if closest_boxes is still None before using len()
   if closest_boxes is not None:
       return closest_boxes
   else:
       return "No valid pair found."


# Simple function to find the center of boxes
def find_box_centers(list_of_boxes):
   box_centers = []
   for box in list_of_boxes:
       # Extract x and y coordinates
       x_coordinates = box[:, 0]
       y_coordinates = box[:, 1]
       # Calculate the center of the box
       center_x = np.mean(x_coordinates)
       center_y = np.mean(y_coordinates)
       # Append the center to the list
       box_centers.append((center_x, center_y))
   return box_centers


# Remove boxes that are too small
def remove_small_boxes(boxes, min_size):
   filtered_boxes = []

   for box in boxes:
       left, top, right, bottom = box
       width = right - left
       height = bottom - top

       # Calculate box size
       box_size = width * height

       # Check if box size is above the minimum size
       if box_size >= min_size:
           filtered_boxes.append(box)

   return filtered_boxes


# Remove any boudning boxes that are near ArUco markers
def filter_rectangles_near_ARUCO(rectangles, marker_corners):
   # Check if marker_corners is empty
   if not marker_corners:
       return rectangles

   # Convert marker_corners to a numpy array
   marker_corners = np.array(marker_corners)

   # Calculate marker centers internally
   marker_centers = find_box_centers(marker_corners)

   # Calculate the maximum distance between x and y values of ARUCO marker corners
   max_x_distance = np.max(np.abs(marker_corners[:, :, 0] - marker_corners[:, :, 2]))
   max_y_distance = np.max(np.abs(marker_corners[:, :, 1] - marker_corners[:, :, 3]))
   ARUCO_threshold_distance = max(max_x_distance, max_y_distance) * ARUCO_threshold_distance_factor

   # Find the center of all detected ARUCO markers
   all_marker_centers = np.mean(marker_corners, axis=(1, 2))

   filtered_rectangles = []

   for rectangle in rectangles:
       # Extracting rectangle coordinates
       rect_left, rect_top, rect_right, rect_bottom = rectangle

       # Condition 1: Check if any ARUCO marker corner is within the rectangle
       flattened_corners = np.concatenate(marker_corners).reshape(-1, 2)
       aruco_inside_rectangle = any(
           rect_left < corner[0] < rect_right and
           rect_top < corner[1] < rect_bottom
           for corner in flattened_corners
       )

       # Condition 2: Check if any corner of the rectangle is within the threshold distance from the center of all detected ARUCO markers
       rect_corners = [(rect_left, rect_top), (rect_right, rect_top), (rect_left, rect_bottom),
                       (rect_right, rect_bottom)]
       aruco_inside_threshold_distance = any(
           np.linalg.norm(np.array(rect_corner) - np.array(aruco_center)) < ARUCO_threshold_distance
           for aruco_center in all_marker_centers
           for rect_corner in rect_corners
       )

       # If either condition is met, exclude the rectangle
       if aruco_inside_rectangle or aruco_inside_threshold_distance:
           continue

       # If neither condition is met, add the rectangle to the filtered list
       filtered_rectangles.append(rectangle)

   return filtered_rectangles


# Used to check if boxes are close enough to be merged in the "merge close boxes" funciton
def are_boxes_close(box1, box2, threshold):
   left1, top1, right1, bottom1 = box1
   left2, top2, right2, bottom2 = box2

   # Check if the horizontal distance between the boxes is within the threshold
   horizontal_condition = (right1 + threshold >= left2) and (left1 <= right2 + threshold)

   # Check if the vertical distance between the boxes is within the threshold
   vertical_condition = (bottom1 + threshold >= top2) and (top1 <= bottom2 + threshold)

   # Check if there is horizontal overlap or vertical overlap
   overlap_condition = (
           horizontal_condition and (bottom1 >= top2) and (top1 <= bottom2) or
           vertical_condition and (right1 >= left2) and (left1 <= right2)
   )

   # Check if both horizontal and vertical conditions are satisfied, or if there is overlap
   return overlap_condition or (horizontal_condition and vertical_condition)


# Look for boxes that are close to each other and merge them into one bounding box
def merge_close_boxes(boxes_to_merge, threshold_distance_for_merge):
   working_list_boxes = boxes_to_merge.copy()

   while True:
       merge_occurred = False
       updated_boxes = []

       # Iterate over all combinations of two boxes in the original set
       indices_to_keep = []

       for i in range(len(working_list_boxes)):
           if i not in indices_to_keep:
               box1 = working_list_boxes[i]
               current_box = list(box1)

               for j in range(i + 1, len(working_list_boxes)):
                   if j not in indices_to_keep:
                       box2 = working_list_boxes[j]

                       if are_boxes_close(box1, box2, threshold_distance_for_merge):
                           current_box[0] = min(box1[0], box2[0])
                           current_box[1] = min(box1[1], box2[1])
                           current_box[2] = max(box1[2], box2[2])
                           current_box[3] = max(box1[3], box2[3])

                           indices_to_keep.extend([i, j])
                           working_list_boxes.append(tuple(current_box))

                           merge_occurred = True
                           break

               if not merge_occurred:
                   updated_boxes.append(box1)

       working_list_boxes = [box for idx, box in enumerate(working_list_boxes) if idx not in indices_to_keep]

       if not merge_occurred:
           break

       updated_boxes.extend(working_list_boxes)
       working_list_boxes = updated_boxes

   return working_list_boxes


# Return only the left box from a list of two boxes
def find_left_box(box_data):
   if len(box_data) == 2:
       box1, box2 = box_data
       if box1[0] < box2[0]:
           print(box1)
           return box1
       else:
           print(box2)
           return box2
   elif len(box_data) == 1:
       box = box_data[0]
       if box[0] < 1280:
           print(box)
           return box
       elif box[0] > 1280:
           left, top, right, bottom = box
           left = left-1280
           right = right-1280
           box = (left, top, right, bottom)
           print(box)
           return box
   else:
       return (1,1,1,1)

#######################################
# FUNCTIONS WE MAY NOT NEED FOR FINAL
#######################################
# find the contours along the white regions of the processed_image, and draw them on the original_image
def find_and_draw_contours(original_image, processed_image):
   # Create a copy of the input image to draw contours on
   image_with_contours = original_image.copy()

   # Find contours on the processed image
   contours, _ = cv2.findContours(processed_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

   # Draw contours on the copy of the input image
   cv2.drawContours(image_with_contours, contours, -1, (0, 255, 0), line_thickness)

   return image_with_contours, contours

def draw_boxes_and_display(image, boxes, name):
   # Draw bounding boxes on the image for all contours
   for box in boxes:
       if len(box) == 4:
           x1, y1, x2, y2 = box
           cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), line_thickness)
   cv2.imshow(name, resize_for_display(image, resize_factor))
   cv2.waitKey(0)
   cv2.destroyAllWindows()
   return image

def draw_boxes_overlapping(image, boxes, name):
   # Draw bounding boxes on the image for all contours
   for box in boxes:
       if len(box) == 4:
           x1, y1, x2, y2 = box
           cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), line_thickness)
   cv2.imshow(name, resize_for_display(image, resize_factor))
   cv2.waitKey(0)
   cv2.destroyAllWindows()
   return image



def process_img(img_to_process, kernol_size_blur, threshold_after_blur, kernol_size_morph_open,
               kernol_size_morph_close):
   # SATURATE, BLUR, THRESHOLD, OPEN MORPH, CLOSE MORPH

   # convert to HSV and extract saturation channel
   sat = cv2.cvtColor(img_to_process, cv2.COLOR_RGB2HSV)[:, :, 1]
   # apply Gaussian blur
   blur = cv2.GaussianBlur(sat, (kernol_size_blur, kernol_size_blur), cv2.BORDER_DEFAULT)
   # threshold
   thresh = cv2.threshold(blur, threshold_after_blur, 255, 0)[1]
   # apply morphology open and close to fill interior regions in the mask and remove external static
   kernel_open = np.ones((kernol_size_morph_open, kernol_size_morph_open), np.uint8)
   opening_morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel_open)
   kernel_close = np.ones((kernol_size_morph_close, kernol_size_morph_close), np.uint8)
   opening_and_closing_morph = cv2.morphologyEx(opening_morph, cv2.MORPH_CLOSE, kernel_close)

   output_img = opening_and_closing_morph

   return output_img

def boxes_percentage_difference_size(filtered_rectangles, full_width, similarity_threshold=20):
   # Initialize variables to store indices of similar-sized boxes
   similar_boxes_pairs = []

   # Keep track of the boxes on the left and right
   boxes_on_left = []
   boxes_on_right = []

   for box in filtered_rectangles:
       # Check if the box is on the left or right based on its position relative to half the image width
       if box[2] <= full_width / 2:
           boxes_on_left.append(box)
       elif box[0] >= full_width / 2:
           boxes_on_right.append(box)

   # Loop through the boxes on the left and right to find similar-sized pairs
   for i in range(len(boxes_on_left)):
       for j in range(len(boxes_on_right)):
           box1 = boxes_on_left[i]
           box2 = boxes_on_right[j]

           # Calculate the area of the boxes
           area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
           area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])

           # Calculate the percentage difference in size
           percentage_difference = abs(area1 - area2) / max(area1, area2) * 100

           # Check if the percentage difference is below the threshold
           if percentage_difference < similarity_threshold:
               similar_boxes_pairs.append((box1, box2))

   # Draw merged bounding boxes on the original image with different colors for each pair
   # for i, pair in enumerate(similar_boxes_pairs):
   #     color = (0, 0, 255 - i * 200)  # Adjust color intensity based on pair index
   #     for box in pair:
   #         x, y, x2, y2 = box
   #         cv2.rectangle(img, (x, y), (x2, y2), color, 6)

   # # Resize the image for display
   # resized_img_with_filtered_boxes = resize_for_display(img, img_red_fact)

   # # Display the image with filtered merged bounding boxes
   # cv2.imshow("Image with Filtered Merged Bounding Boxes", resized_img_with_filtered_boxes)
   # cv2.waitKey(0)
   # cv2.destroyAllWindows()

   # # Save the image with filtered bounding boxes
   # save_image(resized_img_with_filtered_boxes, "original_after_rejected_ARUCO_final")

   return similar_boxes_pairs

########################################################################################################################
# MAJOR functions used in this script
########################################################################################################################
# Main function # 1 - preprocessing before distance estimation
def Intake_image_Adaptive_output_one_boxes(img, show_images, save_img_q):
   # Calibrate Image
   # img_orig = Calibrate_and_resize_image(img, calib_data_path)

   img_orig = img

   if show_images:
       cv2.imshow("image", resize_for_display(img,resize_factor) )
       cv2.waitKey(0)
       cv2.destroyAllWindows()
   if save_img_q:
       save_image(img, "orig_image (1)")

   # get calibrated image dimensions (used later)
   full_height = img_orig.shape[0]
   full_width = img_orig.shape[1]
   orig_dim = (full_height, full_width)

   # Cut off the top percentage (cropped factor) of the image (no information up there)
   cropped_image, cropped_dim = crop_bottom_scaled(img_orig, cropped_factor)

   ####################################################################################################################
   # Adaptive thresholding and other operations to process the image in preparation for contour finding
   processed_img = ADPT_HA.process_img_adaptive(cropped_image, kernol_size_blur, blockSize_adaptive, C_adaptive, kernol_size_morph_open, kernol_size_morph_close, min_contour_area, show_images, save_images)

   # processed_img = process_img(cropped_image, kernol_size_blur, threshold_after_blur, kernol_size_morph_open,
   #             kernol_size_morph_close)

   if show_images:
       cv2.imshow("Processed Image (after hayden)", resize_for_display(processed_img.copy(),resize_factor))
   if save_img_q:
       save_image(processed_img, "processed_img (8)")

   # Find the contours that we will then draw bounding boxes around
   # contours = find_contours(processed_img)
   img_with_contours, contours = find_and_draw_contours(cropped_image, processed_img)

   if show_images:
       cv2.imshow("image with contours", resize_for_display(img_with_contours.copy(),resize_factor))
       cv2.waitKey(0)
       cv2.destroyAllWindows()
   if save_img_q:
       save_image(img_with_contours, "image with contours (9)")
   ####################################################################################################################


   # Find bounding boxes
   boxes = find_bounding_boxes(contours)

   if show_images:
       img_with_bounding_boxes = draw_boxes_and_display(cropped_image.copy(), boxes, "bounding boxes")
   if save_img_q:
       save_image(img_with_bounding_boxes, "img_with_bounding_boxes around contours (10)")

   # Remove small boxes
   big_boxes = remove_small_boxes(boxes, min_box_size)

   if show_images:
       image_with_bounding_boxes_small_boxes_cut = draw_boxes_and_display(cropped_image.copy(),
                                                                     big_boxes, "big boxes round 1")
   if save_img_q:
       save_image(image_with_bounding_boxes_small_boxes_cut,
                  "image_with_bounding_boxes_small_boxes_cut (11)")

   ###########################################################################################
   # if no boxes are detected on the far right, remove it, otherwise, leave it.
   no_middle_box = ADPT_HA.middleBoxCheck(big_boxes)
   ###########################################################################################

   if show_images:
       no_middle_box = draw_boxes_and_display(cropped_image.copy(), no_middle_box, "after middle box")
   if save_img_q:
       save_image(no_middle_box, "no_middle_box (12)")
   # Detect and Remove consideration for ArUco Marker Area
   marker_dict = aruco.getPredefinedDictionary(aruco.DICT_4X4_1000)
   parameters_manual = aruco.DetectorParameters()
   parameters_manual.minMarkerDistanceRate = 0.015
   marker_corners, marker_IDs, reject = aruco.detectMarkers(cropped_image, marker_dict, parameters=parameters_manual)
   marker_corners, marker_IDs, reject = aruco.detectMarkers(cropped_image, marker_dict)

   # print('marker corners', marker_corners)
   ####################################################################################################################
   # plot markers and rejects
   img_w_markers = cv2.aruco.drawDetectedMarkers(image=cropped_image.copy(), corners=reject, borderColor=(0, 0, 255))

   if show_images:
       cv2.imshow("Markers_rejected", resize_for_display(img_w_markers, resize_factor))
   if save_img_q:
       save_image(img_w_markers, "rejected_markers_img (13)")

   img_w_markers = cv2.aruco.drawDetectedMarkers(image=cropped_image.copy(), corners=marker_corners, ids=marker_IDs,
                                                 borderColor=(0, 255, 0))

   ####################################################################################################################
   if show_images:
       cv2.imshow("Markers_accepted", resize_for_display(img_w_markers, resize_factor))
       cv2.waitKey(0)
       cv2.destroyAllWindows()
   if save_img_q:
       save_image(img_w_markers, "accepted_markers_img (14)")

   # Remove rectangles near ArUco markers
   filtered_rectangles = filter_rectangles_near_ARUCO(big_boxes, marker_corners)

   if show_images:
       ARCUO_cut = draw_boxes_and_display(cropped_image.copy(), filtered_rectangles,
                                          "after filtered rectangles ArUco")
   if save_img_q:
       save_image(ARCUO_cut, "ARCUO Cut images (15)")

   # Merge All Boxes Left
   # Merge bounding boxes iteratively
   merged_boxes = merge_close_boxes(filtered_rectangles, threshold_distance)

   if show_images:
       merged_boxes_drawn = draw_boxes_and_display(cropped_image.copy(), merged_boxes, "merged boxes")
   if save_img_q:
       save_image(merged_boxes_drawn, "merged_boxes_drawn (16)")
   # Remove small boxes a second time
   actual_big_boxes = remove_small_boxes(merged_boxes, min_box_size_after_merge)

   if show_images:
       snd_round_big_cut = draw_boxes_and_display(cropped_image.copy(), actual_big_boxes,
                                                  "actual_big_boxes (after merged and size cut)")
   if save_img_q:
       save_image(snd_round_big_cut, "snd_round_big_cut (17)")

   if len(actual_big_boxes) > 2:
       merged_boxes_cut = find_closest_boxes_in_size(actual_big_boxes, full_width)
       # merged_boxes_cut = boxes_percentage_difference_size(actual_big_boxes, img, full_width, similarity_threshold=20)
   else:
       merged_boxes_cut = actual_big_boxes  # already down to minimum boxes

   if show_images:
       best_boxes_in_size = draw_boxes_and_display(cropped_image.copy(), merged_boxes_cut,
                                                   "closest boxes in size boxes")
   if save_img_q:
       save_image(best_boxes_in_size, "best_closest_boxes_in_size (18)")

   overlapped_box = ADPT_HA.overlappedBox(merged_boxes_cut, cropped_image.copy(), show_images, save_img_q)

   if show_images:
       overlapped_box_drawn = draw_boxes_and_display(cropped_image.copy(), overlapped_box, "overlapped boxes")
   if save_img_q:
       save_image(overlapped_box_drawn, "overlapped boxes (20)")

   # if show_images:
   #     both_boxes_left_side = draw_boxes_overlapping(cropped_image.copy(), boxes, name)
   # if save_img_q:
   #     both_boxes_left_side = draw_boxes_overlapping(image, boxes, name)

   left_box = find_left_box(overlapped_box)

   # print('left box is:',left_box)

   print("left box",left_box)
   if left_box != 1:
       x1, y1, x2, y2 = left_box
       drawn = cv2.rectangle(cropped_image.copy(), (x1, y1), (x2, y2), (0, 255, 0), line_thickness)

       if show_images:
           cv2.imshow("left box", resize_for_display(drawn, resize_factor))
           cv2.waitKey(0)
           cv2.destroyAllWindows()
       if save_img_q:
           save_image(drawn, "Final Box (21)")

       # print("left box is:", left_box)

       if left_box == 1:
           return img_orig, 1, orig_dim

       # Readjust boxes so the returned box is in original coordinates (before the cropping happened)
       height_to_add = full_height * (1 - cropped_factor)

       top = int(height_to_add + left_box[1])
       bottom = int(height_to_add + left_box[3])

       adjusted_left_box = (left_box[0], top, left_box[2], bottom)

       return img_orig, adjusted_left_box, orig_dim
   else:
       return img_orig, 1, orig_dim


# Actually find the position of the obstacle relative to the camera
def find_position_of_obst(distance_from_camera_plane, box, orig_dim, horiz_FOV):
   H = distance_from_camera_plane / math.sin(
       (math.pi / 2) - (horiz_FOV / 2))  # Length of side of FOV to target (at given distance)

   height, full_width = orig_dim

   width_m = 2 * H * math.sin(horiz_FOV / 2)  # Set theta = 45 degrees, we have H, width is 2 * sin(FOV/2) * H
   width_p = full_width / 2

   left, top, right, bottom = box  # in pixels
   center_x = ((right - left) / 2) + left  # in pixels
   fraction_along_img = center_x / width_p

   ####################################################################################################################
   # print("fraction along image", fraction_along_img)

   m = fraction_along_img * width_m  # NEEDS TO BE CONVERTED TO METERS SOMEHOW
   n = (1 - fraction_along_img) * width_m  # NEEDS TO BE CONVERTED TO METERS SOMEHOW

   D = math.sqrt(((H ** 2) + (n ** 2)) - (2 * H * n * math.cos(horiz_FOV / 2)))

   theta = math.acos(((D ** 2) + (H ** 2) - (n ** 2)) / (2 * D * H))

   x = D * math.sin((horiz_FOV / 2) - theta)
   y = distance_from_camera_plane

   relative_position = (x, y)

   return relative_position

def calibrate_image(img, calib_data_path):
   L_cam_mat = np.load(calib_data_path + '/Left_calibration_matrix.npy')
   L_dist_coef = np.load(calib_data_path + '/Left_distortion_coefficients.npy')
   R_cam_mat = np.load(calib_data_path + '/Right_calibration_matrix.npy')
   R_dist_coef = np.load(calib_data_path + '/Right_distortion_coefficients.npy')

   imgL, imgR = split_image(img)
   img_und_R = cv2.undistort(imgR, R_cam_mat, R_dist_coef)
   img_und_L = cv2.undistort(imgL, L_cam_mat, L_dist_coef)
   Fullframe = np.concatenate((img_und_L, img_und_R), axis=1)

   return Fullframe


def run_code(img, show_images_q, show_final_images_q, save_img_q):

   # To interface with Adrian's codes
   img_processed, adjusted_left_bx, original_dimensions = Intake_image_Adaptive_output_one_boxes(img, show_images_q, save_img_q)
   
   if adjusted_left_bx == 1:
       return np.nan, np.nan
   else:
       adjusted_left_bx = ADPT_HA.RatioChecker(adjusted_left_bx)

   # adjusted_left_bx_packaged = [adjusted_left_bx]
   # print('adjusted_left_bx', adjusted_left_bx)
   # print('adjusted_left_bx', adjusted_left_bx_packaged)
   # img_with_adj_box = draw_boxes(img_calib_for_disp, adjusted_left_bx_packaged)
   # cv2.imshow('img_with_adj_box', resize_for_display(img_with_adj_box, scale_percent))
   # cv2.waitKey(0)
   # cv2.destroyAllWindows()

   if adjusted_left_bx == 1:
       return (np.nan, np.nan)
   else:

       left, top, right, bottom = adjusted_left_bx

       box_over_center = check_if_over_center(adjusted_left_bx)

       if box_over_center:
           return np.nan, np.nan
       else:
           # print("before distance calculated")
           # print("Adjusted Left Box", adjusted_left_bx)

           # cv2.imshow("final box", img)
           # cv2.waitKey(0)
           # cv2.destroyAllWindows()

           drawn = cv2.rectangle(img, (left, top), (right, bottom), (0,255,0),8)

           if show_images_q:
               cv2.imshow("final box", resize_for_display(drawn, 55))
               cv2.waitKey(0)
               cv2.destroyAllWindows()

           if save_img_q:
               save_image(drawn, "Final_Box")

           if show_final_images_q:
               cv2.destroyAllWindows()
               cv2.imshow("final box", resize_for_display(drawn, 55))
               cv2.waitKey(1)

           if bottom > 1205:
               bottom = 1205

           distance = SDM_AC.find_obstacle_depth(img_processed, left, top, right, bottom, False, False, False, False)

           if distance == 1000:
               return (np.nan, np.nan)

           else:
               # print("after distance calculated")
               relative_position = find_position_of_obst(distance, adjusted_left_bx, original_dimensions, field_of_view_horiz)
               relative_position_z = relative_position[1] + 0.22
               relative_position_x = relative_position[0] - 0.07

               relative_position = (relative_position_x, relative_position_z)
               ####################################################################################################################
               # print("Position of obstacle is:", relative_position)

               return relative_position


def check_if_over_center(box):
   left, top, right, bottom = box
   if right > 2208:
       print("box overlaps center, image rejected")
       return True
   elif left <= 2208:
       print("box is good, image accepted", box)
       return False


########################################################################################################################
# ACTUALLY RUNNING THE CODE
########################################################################################################################
# cap = cv2.VideoCapture(1)
# cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2560)
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
# cap.set(cv2.CAP_PROP_FPS, 30)  # Set FPS Rate

# # Create a ZED camera object
# zed = sl.Camera()
#
# # Set configuration parameters
# init_params = sl.InitParameters()
# init_params.camera_resolution = sl.RESOLUTION.HD2K
# init_params.camera_fps = 15
#
# # Open the camera
# err = zed.open(init_params)
# if err != sl.ERROR_CODE.SUCCESS:
#     exit(-1)





# cap = cv2.VideoCapture(1)
# cap.set(cv2.CAP_PROP_FRAME_WIDTH, 4416)
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1242)
# cap.set(cv2.CAP_PROP_FPS, 10)  # Set FPS Rate

# show_images = False
# show_final_images = True
#
# while True:
#     ret, Fullframe = cap.read()
#     if not ret:
#         break  # check if imported correctly
#     # Fullframe = cv2.imread(img_path)
#
#     height = Fullframe.shape[0]
#     width = Fullframe.shape[1]
#
#     # print("height is:", height)
#     # print(width)
#
#     # cv2.waitKey(0)
#
#     position_of_obstacle = run_code(Fullframe, calib_data_path, show_images, show_final_images)
#
#     # cv2.destroyAllWindows()
#     # cv2.imshow("box", resize_for_display(cv2.rectangle(Fullframe, (right, top), (right, bottom), (0, 255, 0), 8),34))
#
#
#     print("Position of the Obsticle is", position_of_obstacle)


show_images = True
save_images = True #FOR IT TO WORK SHOW IMAGES MUST BE TRUE

show_final_images = False

# Read Image
img = cv2.imread(img_path)

print(img.shape)

img = calibrate_image(img, calib_data_path)

position_of_obstacle = run_code(img, show_images, show_final_images, save_images)




# # Read Image
# img = cv2.imread(img_path)
#
# # cv2.imshow("image", resize_for_display(img, 55))
# # cv2.waitKey(0)
# # cv2.destroyAllWindows()
#
# img_calib_for_disp = Calibrate_and_resize_image(img, calib_data_path)
#
# # To interface with Adrian's codes
# img_processed, adjusted_left_bx, original_dimensions = Intake_image_Adaptive_output_one_boxes(img)
# adjusted_left_bx_packaged = [adjusted_left_bx]
# print('adjusted_left_bx', adjusted_left_bx)
# print('adjusted_left_bx', adjusted_left_bx_packaged)
# img_with_adj_box = draw_boxes(img_calib_for_disp, adjusted_left_bx_packaged)
# cv2.imshow('img_with_adj_box', resize_for_display(img_with_adj_box, scale_percent))
# cv2.waitKey(0)
# cv2.destroyAllWindows()

# location = run_code(img, calib_data_path, False)
# print("Position of obstacle is:", location)

# left, top, right, bottom = adjusted_left_bx
#
# # right = 1491
#
# check_box = (left, top, right, bottom)
#
# is_box_over_center = check_if_over_center(check_box)
#
# print("Box is over the center:", is_box_over_center)
#
# distance = SDM_AC.find_obstacle_depth(img_processed, left, top, right, bottom, False, False, False)
#
# # print("Distance is:", distance)
#
# position_of_obstacle = find_position_of_obst(distance, adjusted_left_bx, original_dimensions, field_of_view_horiz)
#
# print("Position of obstacle is:", position_of_obstacle)

Purpose

  • This final version of obstacle detection code detects obstacles in camera images, creating bounding boxes around them. It aims to locate obstacles and their sizes relative to the camera's viewpoint, allowing safe navigation for the chaser platform. Additionally, it filters out bounding boxes near ArUco markers to avoid detecting the target platform as an obstacle.

Inputs

  • Image
    • This image contains the lab environment where the obstacle detection and localization are performed.
  • Calibration Data
    • Information about the camera's intrinsic parameters (e.g., distortion coefficients, calibration matrices) to correct lens distortions and ensure accurate measurements.
  • Configuration Parameters
    • Parameters for image processing operations, such as blur kernel size, threshold values, morphological operation kernel sizes, minimum contour area, and similarity threshold for box size comparison.
  • ArUco Marker Dictionary
    • A predefined dictionary used for detecting ArUco markers, to ignore any detected obstacles near these markers.

Outputs

  • Processed Image
    • This is the resulting image after undergoing preprocessing steps such as saturation extraction, Gaussian blur, thresholding, and morphological operations.
  • Detected Obstacle
    • Information about the position and size of the detected obstacle relative to the camera's viewpoint. This includes the coordinates of the bounding box around the obstacle and its distance from the chaser platform.
  • Visualization
    • Optional visualizations, such as images with overlaid bounding boxes around the detected obstacle, images showing rejected ArUco markers, or images illustrating intermediate steps in the obstacle detection.

Click here to go BACK

⚠️ **GitHub.com Fallback** ⚠️