Facial Recognition Using Machine Learning Methods - ECE-180D-WS-2023/Knowledge-Base-Wiki GitHub Wiki

Introduction

Machine learning involves the development of algorithms and statistical models that enable machines to learn from past data. Its goal is to identify patterns and relationships in data. It is a rapidly growing field that has been applied to tasks including image recognition, speech recognition, natural language processing, fraud detection, and medical diagnosis. In particular, machine learning can train computer vision models to recognize and classify objects in images, such as faces. Facial recognition is a biometric identification technology that uses the unique features of an individual's face to identify them; most facial recognition systems work by comparing a facial impression against a database of known faces. This article introduces two powerful tools, K-means clustering and Principal Component Analysis (PCA), that aid in implementing facial recognition.

Method 1: K-Means Clustering

Unsupervised learning is a type of machine learning in which the model is trained on a dataset without any supervision or labeled data. In other words, the goal of unsupervised learning is to identify patterns, structures, or relationships within the data without any prior knowledge about it. The K-means clustering algorithm is an unsupervised machine learning method that identifies clusters of objects in a dataset. There are many different clustering methods, but K-means is among the most easily understood. The algorithm divides the data into K groups: it randomly selects K objects as the initial cluster centers, then calculates the distance between each object and each center, assigning every object to the center closest to it. The centers are then recomputed from their assigned objects and the process repeats; after these iterations, each center and the objects assigned to it form a cluster. Typically, the algorithm continues iterating until the cluster centroids no longer move or a maximum number of iterations is reached.
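The assign-and-recompute loop described above can be sketched with scikit-learn's `KMeans` on a small synthetic dataset (the two-blob data here is invented purely for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs of 2-D points (synthetic data for illustration)
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
points = np.vstack([group_a, group_b])

# K = 2: the algorithm alternates between assigning points to the
# nearest centroid and recomputing centroids until they stop moving.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print(kmeans.cluster_centers_)   # two centroids, near (0, 0) and (5, 5)
print(kmeans.labels_[:5])        # cluster index assigned to each point
```

After fitting, `cluster_centers_` holds the final centroids and `labels_` records which cluster each point was assigned to.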

K-means clustering is widely used across diverse fields to detect patterns and categorize related data points. Typical real-world applications of the K-means algorithm encompass image segmentation, customer segmentation, anomaly detection, and document clustering. To dive deeper, consider one of these examples: customer segmentation. Companies use it to divide customer data into subsets based on common attributes such as demographics, purchasing habits, or preferences, allowing them to create customized marketing strategies and enhance customer satisfaction (Punj, 1983). For example, a company can collect data on customers' locations; K-means might then find one cluster consisting predominantly of people living in cold areas, to whom the company can recommend cold-weather clothing such as puffer jackets.

The algorithm provides multiple benefits, including its straightforward nature, effortless implementation, and adaptability to extensive datasets (Jain, 2010). Nonetheless, one complication in using K-means clustering is identifying the proper number of clusters k, as the best value of k depends on the dataset. Pre-specifying the value of k involves research and analysis of the dataset.
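One common heuristic for choosing k, not covered in the text above but worth noting, is the "elbow method": plot the within-cluster sum of squared distances (inertia) for several values of k and look for the point where further increases stop helping much. A minimal sketch on synthetic data with three true clusters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic data with 3 true clusters; we pretend k is unknown.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=c, scale=0.4, size=(40, 2))
                  for c in ([0, 0], [4, 0], [2, 4])])

# Inertia (within-cluster sum of squared distances) for k = 1..6.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=1).fit(data).inertia_
            for k in range(1, 7)]

# Inertia always decreases as k grows; the "elbow" where the decrease
# flattens out (here at k = 3) suggests the right number of clusters.
for k, inertia in zip(range(1, 7), inertias):
    print(k, round(inertia, 1))
```

The elbow is a judgment call rather than a formal criterion, which is why the text notes that choosing k still requires analysis of the dataset.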

K-means Clustering Application in Facial Recognition

In facial recognition, the data of interest is contained in each individual's face. K-means clustering attempts to group similar facial data together into k clusters: features are extracted from each person's face and clustered, so that each cluster ideally represents a specific person.

Yasser Chihab’s “Face Recognition using K-means clustering” is a good example of using K-means clustering in facial recognition. The dataset comprises 75 pictures of 5 football players (Messi, M. Salah, C. Ronaldo, Suarez, and Neymar), 15 pictures per player taken in different poses. A feature extractor converts these pictures into a format suitable for training the K-means algorithm, typically a multidimensional array; in this application, each player’s face is stored as a 128-dimensional vector.

Applying K-means to this dataset using Python and specifying 5 clusters, one per player, yields the following scatter plot with 5 centroids representing the clustering of the faces. In this scatter plot, the smaller circles are the data points and the 5 large circles are the centroids of each cluster (player), labeled 0 through 4.
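This clustering step can be sketched as follows, assuming the 128-dimensional embeddings have already been extracted; since Chihab's extractor and images are not reproduced here, the embeddings are simulated with synthetic vectors (five hypothetical "players", fifteen vectors each):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the 128-dimensional face embeddings described above:
# 5 hypothetical players x 15 images each. In the real pipeline these
# vectors would come from a face-embedding extractor, not random noise.
rng = np.random.default_rng(42)
player_centers = rng.normal(size=(5, 128)) * 3
embeddings = np.vstack([center + rng.normal(scale=0.3, size=(15, 128))
                        for center in player_centers])

# One cluster per player, five in total, labeled 0 through 4.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(embeddings)

# Each block of 15 images should land in a single cluster.
for player in range(5):
    block = kmeans.labels_[player * 15:(player + 1) * 15]
    print(f"player {player}: cluster {block[0]}, consistent={len(set(block)) == 1}")
```

With well-separated embeddings, every image of a given player falls into the same cluster, which is exactly the property the recognition step relies on.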

With the labeled data above, the model can now be used to recognize the faces of Messi, M. Salah, C. Ronaldo, Suarez, and Neymar.

Method 2: PCA (Principal component analysis)

Principal Component Analysis (PCA) is a mathematical technique for reducing the dimensionality of a dataset. It transforms a set of correlated variables into a new set of uncorrelated variables, called principal components, in order to capture the most important features of the data in fewer dimensions. Principal components are linear combinations of the original variables, and there can be no more of them than the original data has dimensions. The result of this transformation is a new set of observations that retains the key features of the original data but is expressed in a lower dimension for easier analysis. Because these new variables are uncorrelated, they are ideal for presenting the characteristics of the data in a simplified form.

Real-world applications of PCA include data visualization, feature extraction, noise reduction, and image compression. A specific example is spectroscopy data analysis. Spectroscopy is a scientific method that studies the interaction of matter with electromagnetic radiation such as light (Helmenstine, 2019). It measures the light absorbed, emitted, or scattered by matter as a function of wavelength and frequency, so spectroscopic data contain a large number of variables. This is where PCA comes in: it can take the complex data produced by spectroscopy and perform dimensionality reduction and pattern recognition, reducing the number of variables while preserving the majority of the variance. For this reason, complicated n-dimensional data can be viewed in two- or three-dimensional representations. Through PCA, scientists can thus detect trends and relationships in such data and identify the composition or quality of the matter.

PCA Application in Facial Recognition

Principal Component Analysis (PCA), a technique often utilized in facial recognition, finds substantial use with the AT&T Database of Faces. The database, featuring a wide variety of facial expressions and details under diverse lighting conditions, is a rich resource for these systems. By applying PCA to the AT&T Database of Faces, the system can extract the principal components that represent the most variance within the images - these might be certain features like the distance between the eyes, the size of the nose, the shape of the cheeks, etc.

In an example using the AT&T Database of Faces, images, each constituting a 64x64 pixel grid, are extracted from the dataset. The visual provided above contains sixteen plots that illustrate the appearance and structure of sample faces from the dataset.

PCA is then applied to the entire dataset, and the 30 components most relevant to the original data are printed. The component explaining the most variance, 26.88%, resembles a blurred human face. In face recognition systems, PCA converts a face image into a low-dimensional representation in terms of basis images called eigenfaces. Eigenfaces capture the most important features of the face, such as the shape of the eyes, nose, mouth, and other facial features, and discard less important information, such as texture and illumination. When a new face image is presented for recognition, the system first converts it into the eigenface space and compares it with the eigenface representations of known faces. The recognition process involves finding the closest match between the new face image and the known faces; the known face closest to the new image is considered the recognized face. PCA is an effective face recognition technique because it reduces the dimensionality of the data while preserving the important features of the face, making the recognition process more efficient and accurate.
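The project-and-match procedure above can be sketched as follows. To stay self-contained, the AT&T images are replaced with synthetic 64x64 "faces" (random patterns standing in for identities), so the numbers here are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for a face dataset: 40 "identities", each a
# distinct 64x64 pattern, flattened to 4096-dimensional vectors.
rng = np.random.default_rng(0)
identities = rng.normal(size=(40, 64 * 64))
# Two noisy "photos" of each identity form the gallery of known faces.
gallery = np.repeat(identities, 2, axis=0) + rng.normal(scale=0.1, size=(80, 4096))

# Project the 4096-D images onto 30 principal components (eigenfaces).
pca = PCA(n_components=30).fit(gallery)
gallery_proj = pca.transform(gallery)

# Recognize a new image: project it into eigenface space, then find
# the closest known face by Euclidean distance.
new_face = identities[7] + rng.normal(scale=0.1, size=4096)
new_proj = pca.transform(new_face.reshape(1, -1))
distances = np.linalg.norm(gallery_proj - new_proj, axis=1)
match = int(np.argmin(distances)) // 2   # gallery index -> identity index
print("recognized identity:", match)     # should be identity 7
```

The key point mirrors the text: the comparison happens in the 30-dimensional eigenface space rather than the original 4096-dimensional pixel space, which makes matching far cheaper while preserving the features that distinguish identities.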

The Combination of PCA and K-means

By integrating PCA and K-Means, it's possible to segment all the images into ten distinct clusters. The similarities among images within the same cluster become evident upon comparing the sum of absolute pixel values for each image. Conversely, significant differences emerge when comparing images across different clusters.
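The combined pipeline can be sketched as follows, again with synthetic flattened images standing in for the real dataset (ten hypothetical groups of ten images each, since the actual images are not reproduced here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical stand-in for the image data: 10 groups of flattened
# 64x64 images (4096-D), 10 images per group, offset by group patterns.
rng = np.random.default_rng(3)
group_patterns = rng.normal(size=(10, 4096)) * 2
images = np.vstack([p + rng.normal(scale=0.2, size=(10, 4096))
                    for p in group_patterns])

# Step 1: PCA compresses each 4096-D image down to 30 numbers.
pca = PCA(n_components=30)
reduced = pca.fit_transform(images)

# Step 2: K-means forms ten clusters in that low-dimensional space.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=3).fit(reduced)

# Images from the same group should share a cluster label.
for g in range(10):
    block = kmeans.labels_[g * 10:(g + 1) * 10]
    print(f"group {g}: cluster {block[0]}, consistent={len(set(block)) == 1}")
```

Running K-means on the 30-dimensional PCA output rather than on raw 4096-dimensional pixels is the division of labor described above: PCA handles compression, K-means handles grouping.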

Conclusion

In summary, Principal Component Analysis (PCA) and K-means are two techniques commonly used in facial recognition systems. PCA reduces the dimensionality of a face image while retaining its important features, converting a high-dimensional image into a low-dimensional eigenface representation that captures the most important features of a face. K-means is used to cluster similar faces and perform the actual recognition. When a new face image is presented for recognition, the system compares it with the eigenface representations of known faces and assigns it to the closest-matching category.

The combination of PCA and K-Means provides an effective and efficient solution for face recognition. The dimensionality reduction provided by PCA helps to reduce the complexity of the recognition process, while the clustering provided by K-Means helps to group similar faces and perform the actual recognition.

References

  1. Arvai, Kervin. “K-Means Clustering in Python: A Practical Guide.” Real Python, Real Python, 30 Jan. 2023, https://realpython.com/k-means-clustering-python/.
  2. Chihab, Yasser. “Face Recognition Using K-Means Clustering.” Medium, Analytics Vidhya, 13 Feb. 2020, https://medium.com/analytics-vidhya/face-recognition-using-k-means-clustering-127c462e02f2.
  3. Education Ecosystem. “Understanding K-Means Clustering in Machine Learning.” Medium, 12 Sept. 2018, towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1.
  4. Helmenstine, Anne Marie. “What Spectroscopy Is and How It’s Different from Spectrometry.” ThoughtCo, 13 Sept. 2019, www.thoughtco.com/definition-of-spectroscopy-605676.
  5. Damkliang, Kasikrit. “AT&T Database of Faces.” Kaggle, 17 Dec. 2019, www.kaggle.com/datasets/kasikrit/att-database-of-faces.
  6. Jain, Anil K. “Data Clustering: 50 Years beyond K-Means.” Pattern Recognition Letters, vol. 31, no. 8, 2010, pp. 651–666., https://doi.org/10.1016/j.patrec.2009.09.011.
  7. Punj, Girish, and David W. Stewart. “Cluster Analysis in Marketing Research: Review and Suggestions for Application.” Journal of Marketing Research, vol. 20, no. 2, 1983, p. 134, https://doi.org/10.2307/3151680.
  8. UCLA M148 course materials by Professor Lara Dolecek.

Related code: https://github.com/WHarden/wiki_code/blob/main/wiki_code.ipynb