Feature Matching - iffatAGheyas/computer-vision-handbook GitHub Wiki

🔗 Feature Matching (Brute-Force & FLANN)

Once you’ve detected keypoints and extracted descriptors from two images, feature matching finds correspondences between them. This step is critical for alignment, recognition and 3D reconstruction.


📌 Real-World Applications

| Application | Description | Role of Feature Matching |
|---|---|---|
| Panorama Stitching | Merge overlapping photos into a wide-angle view | Detect & match overlapping keypoints; estimate homography |
| Object Recognition | Locate known objects (logos, faces, products) | Match template descriptors to scene descriptors; confirm detection |
| Structure-from-Motion | Reconstruct 3D scenes from 2D images | Match across views; triangulate 3D points |
| Augmented Reality | Overlay virtual objects on the real world in real time | Track surfaces/features; estimate camera pose |

🔍 Matching Strategies

Two main approaches:

| Method | How It Works | Speed | Accuracy |
|---|---|---|---|
| Brute-Force | Compare each descriptor in A with all descriptors in B | Slower | High |
| FLANN | Approximate nearest-neighbour search (float descriptors) | Faster | Slightly less accurate |

Descriptor Compatibility

| Matcher | Descriptor Type | Supported Algorithms |
|---|---|---|
| BFMatcher | Binary & float | ORB, SIFT, SURF |
| FLANN | Float only | SIFT, SURF |

🧪 Brute-Force Matcher (ORB Example)

```python
import cv2
import matplotlib.pyplot as plt

# Load grayscale images
img1 = cv2.imread("bird1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bird2.jpg", cv2.IMREAD_GRAYSCALE)

# ORB detector + binary descriptors
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher: Hamming distance suits ORB's binary descriptors;
# crossCheck=True keeps only mutual best matches
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)  # best (smallest distance) first

# Draw the 50 best matches
matched_img = cv2.drawMatches(
    img1, kp1, img2, kp2, matches[:50], None,
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS,
)

plt.figure(figsize=(15, 5))
plt.imshow(cv2.cvtColor(matched_img, cv2.COLOR_BGR2RGB))  # drawMatches returns BGR
plt.title("Brute-Force ORB Matching")
plt.axis("off")
plt.show()
```


🧪 FLANN Matcher (SIFT + Ratio Test)

```python
import cv2
import matplotlib.pyplot as plt

# Load grayscale images
img1 = cv2.imread("bird1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bird2.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT detector + float descriptors
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN matcher setup (KD-tree index for float descriptors)
FLANN_INDEX_KDTREE = 1  # not exposed as a cv2 constant in most builds
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # more checks = more accurate, slower
flann = cv2.FlannBasedMatcher(index_params, search_params)

# KNN match and Lowe's ratio test
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Draw the matches that survive the ratio test
matched_img = cv2.drawMatches(
    img1, kp1, img2, kp2, good, None,
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS,
)

plt.figure(figsize=(15, 5))
plt.imshow(cv2.cvtColor(matched_img, cv2.COLOR_BGR2RGB))  # drawMatches returns BGR
plt.title("FLANN + SIFT Matching (Ratio Test)")
plt.axis("off")
plt.show()
```


📐 Key Concepts

| Concept | Description |
|---|---|
| `crossCheck=True` | Keeps a match only if it is mutual: A's best match in B must also pick A back |
| `distance` | Distance between descriptor vectors (Hamming for binary, L2 for float); smaller means more similar |
| Lowe's ratio test | Rejects a match unless the best neighbour is clearly closer than the second best (e.g. ratio < 0.7) |

✅ Summary

| Matcher | Best For | Notes |
|---|---|---|
| Brute-Force | Small datasets, high accuracy | Works with any descriptor type |
| FLANN | Large datasets, high speed | Best for float descriptors (SIFT/SURF) |