Facial_Recognition
Discriminative Model:
- Discriminative: learns a good decision boundary directly.
- Example: given a picture, what's the likelihood of a pixel being a skin pixel (i.e., belonging to a class)?
Generative Model: purely probabilistic; it came from a time when computing power was scarce. The idea is to build a representative probability model.
- Build a model for each category, then see which model fits best. (By contrast, a discriminative model generates a decision boundary, i.e., models the posterior directly.)
- Use a mixture of Gaussians for modelling. Parameterized models are great for small training data (see the sketch after this list).
- KNN is good for larger training data
- Other Advantages:
- Uses prior knowledge; has firm probabilistic grounding.
- Later models do not interfere with the previous ones.
- You can generate new samples from the model as well!
- Disadvantages:
- Where do the priors come from?
- Doesn't focus effort on the hard cases near the decision boundary.
- If you have lots of data, the generative assumptions don't help much.
- Only works well with a low number of dimensions.
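Here's a minimal sketch of that per-category modelling idea in Python, assuming scikit-learn's GaussianMixture; the RGB values, class priors, and component counts below are made up for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical data: RGB values of pixels labeled skin / non-skin.
rng = np.random.default_rng(0)
skin_pixels = rng.normal(loc=[180, 120, 100], scale=20, size=(500, 3))
non_skin_pixels = rng.uniform(0, 255, size=(500, 3))

# One generative model per category: fit a mixture of Gaussians to each class.
skin_model = GaussianMixture(n_components=3, random_state=0).fit(skin_pixels)
non_skin_model = GaussianMixture(n_components=3, random_state=0).fit(non_skin_pixels)

# Classify a new pixel by which model fits it best (higher log-likelihood),
# weighted by an (assumed) class prior.
pixel = np.array([[175, 125, 95]])
log_prior_skin, log_prior_non_skin = np.log(0.3), np.log(0.7)  # assumed priors
score_skin = skin_model.score_samples(pixel)[0] + log_prior_skin
score_non_skin = non_skin_model.score_samples(pixel)[0] + log_prior_non_skin
print("skin" if score_skin > score_non_skin else "non-skin")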
Definition (PCA)
Set up a line: we need to normalize (a, b) first, so that a^2 + b^2 = 1. Then any line can be written as ax + by = d with d >= 0.
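With that normalization, the perpendicular distance from a point to the line takes a simple form. This is the standard line-fitting setup, reconstructed here since the original figure is gone:

```latex
\[
ax + by = d, \quad a^2 + b^2 = 1,\; d \ge 0
\quad\Rightarrow\quad
r_i = \lvert a x_i + b y_i - d \rvert .
\]
```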
This axis should be the axis of "least inertia" (each point's inertia is proportional to its squared distance from the axis). Note: below we mean the least-square error, not the least-mean-square error, so there is no additional 1/n factor. This is equivalent to shifting the origin to (x_bar, y_bar), then projecting onto (a, b).
Least-square-error problem -> maximizing projection. The above can be seen as minimizing the projection onto the normal n, with the origin shifted to the mean. Note: B has been shifted to (x_bar, y_bar); call the shifted matrix B'.
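A compact version of that step, reconstructed from the surrounding definitions (B' is the mean-centered data matrix with rows (x_i - x_bar, y_i - y_bar), and n = (a, b)^T is the line's normal):

```latex
\[
E = \sum_i \big( a(x_i - \bar{x}) + b(y_i - \bar{y}) \big)^2
  = \lVert B' n \rVert^2
  = n^\top B'^\top B'\, n ,
\qquad \text{minimized subject to } \lVert n \rVert = 1 .
\]
```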
Another way to look at this: we're finding the principal axes of the covariance matrix, which is B'^T B'.
Definition of covariance matrix
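The figure with the definition is gone; the standard form it presumably showed is below (the 1/M scale factor doesn't change the eigenvectors, so using B'^T B' directly is equivalent for our purposes):

```latex
\[
C = \frac{1}{M} \sum_{i=1}^{M} (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^\top
  = \frac{1}{M} B'^\top B' .
\]
```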
Using a Lagrange multiplier, one can find that the best axis is an eigenvector of the covariance matrix of the points. (The "del n" in the original figure means the gradient with respect to n.)
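A sketch of that Lagrange-multiplier step, reconstructed from the text: maximize the projected variance n^T C n subject to the unit-norm constraint:

```latex
\[
\max_{n}\; n^\top C n - \lambda\,(n^\top n - 1)
\quad\Rightarrow\quad
\nabla_n :\; 2 C n - 2 \lambda n = 0
\quad\Rightarrow\quad
C n = \lambda n .
\]
```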
Finally, you can use some of the principal axes to represent points! The eigenvectors with the largest eigenvalues correspond to the longest principal axes.
Caveat: PCA is only good for a single class of points, because what it finds are the principal axes of that class's ellipsoid!
Application in recognition: each NxM image is a point in an (N*M)-dimensional vector space, which is huge! So we want to extract a low-dimensional linear subspace from it.
Trick for dimensionality: each image is a point in the (huge) image space. How do we get the eigenvectors from only M points? How many distinct eigenvectors are there?
- There are M-1 distinct eigenvectors. Think about it: in 3D, if I shift the origin to the mean of 2 points, I get 1 eigenvector; with 3 points, I get 2.
- Note: in the derivation below, A = B'^T, so the covariance A A^T is huge. We get the distinct eigenvectors of the small M x M matrix A^T A instead, then multiply them by A: if A^T A v = lambda v, then A A^T (A v) = lambda (A v). There should be M-1 of them (see the sketch below).
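A quick numerical check of that trick, as a sketch; the shapes and random data are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10_000, 5          # N = pixels per image (huge), M = number of images
A = rng.normal(size=(N, M))
A = A - A.mean(axis=1, keepdims=True)   # subtract the mean image from each column

# Solve the small M x M problem instead of the huge N x N one.
small = A.T @ A                          # A^T A, shape (M, M)
eigvals, V = np.linalg.eigh(small)       # ascending eigenvalues; smallest ~0
                                         # (mean-shift leaves M-1 distinct ones)

# Map each small eigenvector v to u = A v, an eigenvector of A A^T.
U = A @ V
U /= np.linalg.norm(U, axis=0)           # normalize the big eigenvectors

# Check: (A A^T) u = lambda * u for the largest eigenpair,
# computed without ever forming the N x N matrix A A^T.
u, lam = U[:, -1], eigvals[-1]
print(np.allclose(A @ (A.T @ u), lam * u))   # True
```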
Question: how did you get the derivative of the vector expressions? (Matrix-calculus identities: grad_n(n^T C n) = 2 C n and grad_n(n^T n) = 2 n, as used in the Lagrange step above.)
Eigenfaces (1991, Turk & Pentland)
- Visualize the eigenvectors of those faces.
- Get the mean face and the eigenvectors.
- If we subtract an eigenvector from an image, or add it back on, we can see its effect on the face.
- Get the coefficients (projections onto the eigenfaces), then reconstruct the image (see the sketch after this list).
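A minimal numpy sketch of that pipeline, under assumptions not in the original notes (aligned grayscale faces flattened to vectors; the helper names eigenfaces, project, and reconstruct are mine):

```python
import numpy as np

def eigenfaces(images, k):
    """images: array of shape (M, N) -- M face images flattened to N pixels.
    Returns the mean face and the top-k eigenfaces (as rows)."""
    mean = images.mean(axis=0)
    A = (images - mean).T                    # N x M, columns are centered faces
    eigvals, V = np.linalg.eigh(A.T @ A)     # small M x M trick from above
    U = A @ V[:, ::-1][:, :k]                # top-k big eigenvectors
    U /= np.linalg.norm(U, axis=0)
    return mean, U.T                         # eigenfaces as rows, shape (k, N)

def project(face, mean, efaces):
    return efaces @ (face - mean)            # k coefficients

def reconstruct(coeffs, mean, efaces):
    return mean + efaces.T @ coeffs          # back to pixel space

# Toy usage with random "faces" (stand-ins for real aligned face images).
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64 * 64))
mean, efaces = eigenfaces(faces, k=10)
coeffs = project(faces[0], mean, efaces)
approx = reconstruct(coeffs, mean, efaces)
print(coeffs.shape, approx.shape)            # (10,) (4096,)
```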
Notes
- It's a generative model: you can classify a face by finding the class it is closest to in the k-dimensional subspace.
- It doesn't work too well if you mix face pictures with non-face pictures.
- It doesn't work well with misalignment, e.g., when the eyes are not in the same position across images.
- It doesn't work well when the data isn't a single ellipsoid-like cluster (the PCA caveat above).