Features - RicoJia/notes GitHub Wiki
========================================================================
========================================================================
- Features are there to find matching points.
- We need:
  - Interest point identification
    - Precisely repeatable across multiple similar images
    - Compact: covers a small area
    - Much fewer features than pixels
  - A descriptor, so that each corner or edge is unique!
- Most feature descriptors are based on edge and corner formation. Colors are rarely used.
  - Generally, the image gradients on the RGB planes are quite correlated with each other, so computing the gradient on one plane might be enough.
========================================================================
========================================================================
- Non-maximum suppression (NMS): select non-overlapping objects. Algorithm (a minimal sketch follows below):
  1. Come up with a bunch of windows and evaluate a score for each
  2. Select the window with the highest score
  3. Remove windows with high IoU (Intersection over Union) with the selected window
  4. Put the selected window into the final set, so it becomes part of the output
  5. Repeat from step 2
  - In Harris corner detection, you walk through the corner list and keep only one corner within any given window. This way the result looks sparser. Link to Example
- YOLO (You Only Look Once) also uses this.
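Below is a minimal sketch of the greedy NMS loop described above, assuming boxes are axis-aligned `(x1, y1, x2, y2)` tuples with a parallel `scores` list; the `iou` helper and the threshold value are illustrative assumptions, not from the original notes:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]   # highest score first
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(best)
        # keep only the remaining boxes whose IoU with `best` is low
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep
```

For Harris corners, the same loop works with small fixed-size windows centered on each corner response.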
- Reverse an array along a single dimension:
  - `arr = np.array([1, 2, 3, 4])`
  - `arr[::-1]` gives `array([4, 3, 2, 1])`
========================================================================
========================================================================
- First order edge detector - why the kernel works
  - When the gradient exceeds a threshold, we get a smear.
- Second order edge detector gives a clear location of the edge, instead of a smear from the first order. We need to find the zero-crossing.
- Effect of the first order vs. the second order edge detector
- How to find the zero crossing, and the Laplacian detector
- Discrete form of the Laplacian
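As a sketch of that discrete form (the common 4-neighbor approximation; the image path is a placeholder):

```python
import cv2
import numpy as np

# 4-neighbor discrete Laplacian kernel
laplacian_kernel = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float32)

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# ddepth=cv2.CV_32F keeps negative responses, which we need to find zero-crossings
# (in practice, Gaussian-smooth first; see LoG below)
response = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, laplacian_kernel)
```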
- LoG (Laplacian of Gaussian): we need to smooth the image first, since the Laplacian alone is sensitive to noise!
- 3 ways to get the kernel
- DoG (Difference of Gaussians) is simply an approximation of LoG.
- DoG and LoG can be used for detecting blobs of a size similar to the kernel: LoG itself looks like a blob, so when it is convolved/cross-correlated with another blob, it produces a local extremum. Of course, there is a halo around the blob, which comes from the impulse response. A dark blob yields a local maximum, while a light-colored blob yields a local minimum.
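A small sketch of that blob response using DoG (via OpenCV's `cv2.GaussianBlur`); the blob size and sigmas are made-up illustration values:

```python
import cv2
import numpy as np

# synthetic image: dark blob (radius ~8 px) on a bright background
img = np.full((64, 64), 200, dtype=np.uint8)
cv2.circle(img, (32, 32), 8, 50, -1)
img = img.astype(np.float32)

sigma1, sigma2 = 4.0, 4.0 * 1.6              # two nearby scales
g1 = cv2.GaussianBlur(img, (0, 0), sigma1)   # ksize (0, 0): derived from sigma
g2 = cv2.GaussianBlur(img, (0, 0), sigma2)
dog = g2 - g1                                # DoG: approximates the LoG response

# the dark blob shows up as a local maximum of the DoG response
y, x = np.unravel_index(np.argmax(dog), dog.shape)
print("strongest DoG response near:", (x, y))
```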
- Effect of LoG in blob detection
- Resources:
========================================================================
========================================================================
- Get a Gaussian pyramid (Gaussian-blurred images with different sigmas), then the LoG pyramid.
- Get subsampled images: Gaussian blur, then downsample (keep every other row and column).
- Expand the image: insert zeros at every other row and column.
- In the expanded image, do a Gaussian blur to reduce the high frequencies. The expand-and-blur steps together are `cv2.pyrUp`.
  - Why does this work? The Gaussian is actually interpolating; its effect is approximately a Gaussian blur of the original image.
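A minimal sketch of the corresponding OpenCV calls (the image path is a placeholder):

```python
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

down = cv2.pyrDown(img)   # Gaussian blur, then drop every other row/column (half size)
up = cv2.pyrUp(down)      # insert zero rows/columns, then Gaussian blur (double size)

print(img.shape, down.shape, up.shape)
```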
- Find Harris corners on each image.
  - Note: skip the Harris corner detector's own smoothing.
- Non-maximal suppression
- On the LoG pyramid, find the extrema: compare each point against its 8 neighbors at the same scale and the 9 neighbors at the scales above and below (see the sketch below).
  - Use LoG on the scale space: even though it detects "blobness", it gives the most consistent representation of features.
  - Subpixel level?
  - Do we need multiple Gaussian pyramids for Harris-Laplace? (Yes, but just on the neighboring scales, so we can have three octaves, each octave with one interval, and use the middle octave for comparison.)
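A minimal sketch of that 26-neighbor extremum test, assuming the responses are stacked in an array `stack[scale, y, x]` (the layout and names are assumptions):

```python
import numpy as np

def is_extremum(stack, s, y, x):
    """True if stack[s, y, x] is a max or min over its 3x3x3 neighborhood
    (8 same-scale neighbors + 9 above + 9 below = 26 neighbors).
    Assumes (s, y, x) is an interior point of the stack."""
    cube = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    center = stack[s, y, x]
    return center == cube.max() or center == cube.min()
```

Ties with neighbors count as extrema here; real detectors also apply a contrast threshold and skip border points.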
- Represent the features as circles with r = sqrt(2) * sigma.
- The base image of one octave is 1/4 the size of the previous one (half the width and height).
  - Of course, Gaussian blurring should be applied BEFORE subsampling, to reduce the high-frequency noise.
  - The Gaussian blur's sigma is 2 times larger.
- Within an octave there are k intervals. The image size stays the same, and the n-th interval has sigma = sigma_0 * 2^(n/k) (in symbols below).
  - In OpenCV, the default kernel's sigma is between 1 and sqrt(2).
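The sigma schedule within an octave, written out (assuming $\sigma_0$ is the octave's base sigma):

$$
\sigma_n = \sigma_0 \cdot 2^{n/k}, \qquad n = 0, 1, \dots, k, \qquad \sigma_k = 2\,\sigma_0
$$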
- LoG is approximated by the difference between two interval images (with two different sigmas).
  - LoG is a high-pass filter, so we can see edges.
========================================================================
========================================================================
- Effect: can find feature matches
  - Overview: steps 2 and 3 can be replaced by Harris-Laplace
- Scale-invariant extrema detection
  - Just on neighboring DoGs, find local extrema among each point's neighbors.
- Keypoint localization
  - Second-order approximation for localization?
- Orientation assignment
  - Find the most dominant direction in a local feature region
  - Rotate all image gradients within the region by that direction
- Descriptor
  - The 16x16 window around the keypoint is divided into 4x4 subregions (each 4x4 pixels), and each subregion gets an 8-direction histogram, so there are 16x8 = 128 bins in the descriptor. Each bin value is the gradient magnitude weighted by a Gaussian window function.
  - High magnitudes are clipped, then the descriptor is normalized.
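A quick sketch confirming the 128-dimensional descriptor with OpenCV's SIFT (requires a reasonably recent OpenCV build; the image path is a placeholder):

```python
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints), "keypoints")
print(descriptors.shape)   # (num_keypoints, 128): 4x4 subregions x 8 orientation bins
```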
- Motivation
- LoG has zero response for edges, and a minimum (or maximum, depending on the blob's polarity) for a matching blob.
- But the LoG response decays as sigma increases.
- Because the gradient of the Gaussian becomes smaller as sigma increases.
- The Laplacian's response carries a 1/sigma^2 factor, so we need to multiply it by sigma^2 to compare across scales. This is called scale normalization.
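In symbols, the scale-normalized LoG used for blob detection:

$$
\nabla^2_{\text{norm}} G = \sigma^2 \left( \frac{\partial^2 G}{\partial x^2} + \frac{\partial^2 G}{\partial y^2} \right)
$$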
- So comparing the corresponding keypoints across different LoGs directly is not as stable as this interpolation method, as Lowe explained.
- Method: find keypoints in the z = (x, y, sigma) space by approximating D(z) with a second-order expansion, then finding the points where the derivative is 0 (see the sketch after the links below).
- sift implementation
- Nice lecture
- on Subpixel:
- good intro. Note the formula of D might be wrong
- good short question
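A sketch of that second-order expansion (following Lowe's formulation, with $z = (x, y, \sigma)^T$ as the offset from the sampled point):

$$
D(z) \approx D + \frac{\partial D}{\partial z}^{T} z + \frac{1}{2} z^{T} \frac{\partial^2 D}{\partial z^2} z,
\qquad
\hat{z} = -\left( \frac{\partial^2 D}{\partial z^2} \right)^{-1} \frac{\partial D}{\partial z}
$$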
========================================================================
========================================================================
- Given two images of the same scene, let's find the transformation between the two images that gives us the best matching features.
- Transforms:
- All of these are "affine transforms": translation, rotation, shear, and scaling, because each can be written in the form below:
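A sketch of that form (reconstructed, since the original figure is missing): a linear map plus a translation,

$$
\begin{bmatrix} x' \\ y' \end{bmatrix}
=
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
+
\begin{bmatrix} t_x \\ t_y \end{bmatrix}
$$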
- Affine transformations:
  - Parallel lines remain parallel
  - Lines map to lines
  - Ratios of distances along a line are preserved
- In 2D, rotation has 1 DOF and translation has 2; an affine transform has 1 + 2 + 3 = 6 DOF (rotation, translation, plus scaling, shearing, and aspect). A homography is a projective transform with 8 DOF.
========================================================================
========================================================================
- Main Idea (a line-fitting sketch appears at the end of this section):
  1. Draw N sets, each set has s points
  2. Come up with a proposed transform from the set
  3. Apply the transform and count how many points are within a threshold D of the model. These are called inliers; points outside the threshold are called outliers.
  4. If there are enough inliers:
     - if the inlier ratio is really high, terminate
     - else, refit from step 2
- Parameters: N, s, e
  - s: the minimal sample size. s = 2 for a line, s = 3 for an affine transform, s = 4 for a homography.
  - N: depends on e, the probability that a point is an outlier for the best-fitting model. We solve for N by requiring that the probability K of drawing at least one all-inlier set is 99% (see the formula below).
  - You can set e to a fixed value to begin with, but you can do "adaptive e" as well:
    - Assume e = 1
    - Fit the first transformation and compute e' = outliers / total
    - If e' < e, update e
    - N = N + 1 and repeat
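The referenced relation, written out (the standard RANSAC sample count; $K$ is the desired confidence, e.g. 0.99):

$$
K = 1 - \left(1 - (1 - e)^{s}\right)^{N}
\quad \Longrightarrow \quad
N = \frac{\log(1 - K)}{\log\!\left(1 - (1 - e)^{s}\right)}
$$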
- D: chosen using a "half normal distribution" of the residuals.
- After RANSAC, take the set of inliers of the best-fitting model, then use least squares to finalize the model (i.e., refit the line one more time on the inliers using least squares).
- Great things about RANSAC:
  - It works as long as the percentage of outliers remains pretty low
  - N has nothing to do with the number of putative matches (the candidates)
  - TODO: how to find the desk plane among the objects using RANSAC? Image segmentation
  - Works well with a large number of outliers
- Applications:
  - Most robot vision problems
  - Homography estimation
  - Fundamental matrix estimation
- Bad things about RANSAC:
  - Computational time grows almost exponentially with the number of parameters.
  - Not good for fitting multiple models.
  - REALLY NOT GOOD if your model is not what you think it is (a plane, etc.)
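A minimal sketch of this RANSAC loop for 2D line fitting (the data, iteration count, and distance threshold are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(points, n_iters=100, thresh=0.5):
    """Fit y = m*x + b with RANSAC; returns (m, b) and the inlier mask."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # 1. draw a minimal sample: s = 2 points for a line
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue
        # 2. propose a model from the sample
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # 3. count inliers: points within distance `thresh` of the line
        dist = np.abs(m * points[:, 0] - points[:, 1] + b) / np.sqrt(m * m + 1)
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # 4. refit on the inliers of the best model with least squares
    m, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return (m, b), best_inliers

# mostly-collinear points plus a few outliers
pts = np.array([[0, 0.1], [1, 1.0], [2, 2.1], [3, 2.9], [4, 4.2], [2, 8.0], [3, -5.0]])
model, mask = ransac_line(pts)
print("line:", model, "inliers:", mask.sum())
```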
K-means example: given the points (2,3), (5,4), (3,8), (8,8), (7,2), (6,3), with K = 2 clusters:
- Randomly pick two centroids: (2, 3) and (8, 8)
- For each point, compare its distance to each centroid:
  - (2, 3) → Cluster 1, (5, 4) → Cluster 1, (3, 8) → Cluster 2, (8, 8) → Cluster 2, (7, 2) → Cluster 1, (6, 3) → Cluster 1
- Find the new centroid of each cluster by averaging its points:
  - Cluster 1: ((2+5+7+6)/4, (3+4+2+3)/4) = (5, 3)
  - Cluster 2: ((3+8)/2, (8+8)/2) = (5.5, 8)
- Repeat the assignment step. If the assignments stay the same, the algorithm has converged (a numpy sketch of this loop follows below).
  - There's a chance k-means could alternate between assignments; in that case, initialize differently.
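A minimal numpy sketch of this loop, seeded with the same two initial centroids as the worked example:

```python
import numpy as np

points = np.array([[2, 3], [5, 4], [3, 8], [8, 8], [7, 2], [6, 3]], dtype=float)
centroids = np.array([[2, 3], [8, 8]], dtype=float)   # same initial pick as above

for _ in range(10):
    # assignment step: index of the nearest centroid for each point
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # update step: each centroid becomes the mean of its assigned points
    new_centroids = np.array([points[labels == k].mean(axis=0)
                              for k in range(len(centroids))])
    if np.allclose(new_centroids, centroids):   # assignments stopped changing
        break
    centroids = new_centroids

print(labels)      # [0 0 1 1 0 0]
print(centroids)   # [[5.  3. ], [5.5 8. ]]
```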
- Example (k-d tree insertion, alternating x/y comparisons):
  - Insert (7, 2)
  - Insert (5, 4): (x) 5 < 7, so left of (7, 2)
  - Insert (9, 6): (x) 9 > 7, so right of (7, 2)
  - Insert (4, 7): (x) 4 < 7, so left of (7, 2); (y) 7 > 4, so right of (5, 4)
  - The resulting tree (after also inserting (2, 3) and (8, 1)):

            (7, 2)
           /      \
       (5, 4)    (9, 6)
       /    \     /
    (2, 3) (4, 7) (8, 1)
- Find the nearest neighbor of (4.7, 7.1) (see the scipy sketch below)
- Find each area's best possible distance:
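A minimal sketch of the same nearest-neighbor query with `scipy.spatial.KDTree` (scipy builds its own tree internally, so its splits may differ from the hand-built tree above):

```python
from scipy.spatial import KDTree

points = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = KDTree(points)

dist, idx = tree.query((4.7, 7.1))   # nearest neighbor of the query point
print(points[idx], dist)             # (4, 7), distance ~0.71
```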