SVM for Face Recognition Using Python

In this article, we'll take a look at face recognition with Python using one of the most effective classical algorithms for the task: the Support Vector Machine (SVM).

Introduction

Face recognition has become increasingly popular and is one of the most loved projects among programmers and machine learning enthusiasts. Research on automatic face recognition dates back to the 1960s and 1970s, when the first algorithms for the problem were developed. In recent years it has become far more widespread, with applications ranging from unlocking mobile phones with your face, to detecting human faces in digital images and videos, to security services. That's why in this article we'll take a look at this technology using one of the most effective classical algorithms for facial recognition, the SVM, and one of the most popular languages for AI and ML, Python.

Why SVM For Facial Recognition? 

SVM (Support Vector Machine) is an effective machine learning technique that can handle complex datasets, including face recognition datasets. Thanks to its ability to capture the intrinsic structure of high-dimensional data, it is widely used in applications such as computer vision, biometrics, and speech recognition.

In the field of facial recognition, SVM has proven accurate and robust compared with other classical techniques. It works well with small datasets and can handle complex patterns and noisy data, which makes it a good fit for facial recognition.

Furthermore, SVM is efficient at prediction time, so it can process large amounts of data quickly. This makes it suitable for real-time applications such as facial recognition in security systems or image databases. Its parameters are also easy to tune, making it versatile across a range of applications.

Today, however, most face recognition systems are built with deep learning algorithms such as Convolutional Neural Networks, which have proven more accurate than SVM. One major downside of these networks is their high computational cost, which can make them unsuitable for real-time systems requiring high throughput and low latency; this is why SVM is still used for facial recognition in some cases.

Researchers are working on deep learning algorithms that perform as well as or better than traditional methods while still meeting low-latency, high-throughput requirements. They are exploring different architectural designs, nonlinear activation functions, and weight regularization strategies, among other things.

The Face Dataset

Alright, now let's get into the coding. First, we need to load the face data. Scikit-Learn provides built-in loaders for face recognition datasets, including the popular LFW (Labeled Faces in the Wild) dataset. This dataset consists of more than 13,000 labeled images of human faces, each annotated with the name of the person pictured.

Importing the Face Dataset

Now let's import the face dataset using sklearn:

from sklearn.datasets import fetch_lfw_people

# Keep only the people who have at least 60 pictures in the dataset
faces = fetch_lfw_people(min_faces_per_person=60)

If you want to know more about the dataset, just type faces.DESCR:

faces.DESCR
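To get a quick sense of what was loaded, you can inspect a few attributes of the returned object. The numbers in the comments are what you would typically see with min_faces_per_person=60, but treat them as indicative since they depend on the scikit-learn version and the downloaded data.

print(faces.images.shape)   # e.g. (1348, 62, 47): number of images, height and width in pixels
print(faces.data.shape)     # e.g. (1348, 2914): each image flattened into one feature vector
print(faces.target_names)   # names of the people kept by the min_faces_per_person filter
print(faces.target.shape)   # one integer label per image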

Plotting the Faces

Let's plot some of the faces to get a feel for the dataset. This can be done with Matplotlib's imshow method.

import matplotlib.pyplot as plt

# Show the first 8 faces in a 2x4 grid, labelled with the person's name
fig, axes = plt.subplots(2, 4)
for i, ax in enumerate(axes.flat):
    ax.imshow(faces.images[i], cmap='magma')
    ax.set(xticks=[], yticks=[],
           xlabel=faces.target_names[faces.target[i]])
plt.show()

Sample Faces

Splitting the Dataset

from sklearn.model_selection import train_test_split

X = faces.data    # flattened images, one row per face
y = faces.target  # integer label identifying the person in each image

# Use 60% of the data for training and 40% for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)
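As a quick sanity check, you can print the shapes of the resulting splits; the exact counts in the comments are indicative and depend on the loaded data.

print(X_train.shape, X_test.shape)   # e.g. (808, 2914) (540, 2914)
print(y_train.shape, y_test.shape)   # e.g. (808,) (540,)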

Training and Testing the Model

Now comes the important part: we use the SVC (Support Vector Classifier) to classify these faces into their corresponding categories. Here, faces.data holds the matrix representation of all the faces in the dataset, one flattened image per row, which is the form the SVC algorithm expects.

But before training the model, we have to preprocess the dataset. The raw pixel data may contain noise and redundant information. To deal with this we can use a dimensionality reduction technique, which reduces the complexity of the dataset and keeps only the most informative features. This can be done using PCA (Principal Component Analysis).

To do this, let's import SVC and PCA from sklearn (the old RandomizedPCA class has been folded into PCA, so we simply alias it here):

from sklearn.svm import SVC
from sklearn.decomposition import PCA as RandomizedPCA
from sklearn.pipeline import make_pipeline

# For dimensionality reduction: keep 150 principal components and whiten them
pca = RandomizedPCA(n_components=150, whiten=True, random_state=42)
# RBF-kernel SVM; class_weight='balanced' compensates for the unequal number of images per person
svc = SVC(kernel='rbf', class_weight='balanced')
model = make_pipeline(pca, svc)

Here we are using a pipeline. A pipeline in machine learning chains multiple steps into a single estimator, automating the workflow: in our case it applies PCA first and then feeds the reduced features to the SVC, so the two behave as a single model.

model.fit(X_train, y_train)

Testing the Model

from sklearn.metrics import accuracy_score

predictions = model.predict(X_test)

accuracy_score(y_test, predictions)

---

0.8203703703703704
The accuracy score is around 82%, which is not bad. However, there is still room to improve the model: you can run a grid search to find the best parameters for the SVC algorithm, as sketched below.
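As a rough sketch (the parameter values below are just illustrative starting points, not tuned recommendations), you can tune the SVC's C and gamma through the pipeline with GridSearchCV. Pipeline parameters are addressed as <step name>__<parameter>, and make_pipeline names the SVC step 'svc'.

from sklearn.model_selection import GridSearchCV

# Illustrative search grid -- adjust the ranges to your needs
param_grid = {
    'svc__C': [1, 5, 10, 50],
    'svc__gamma': [0.0001, 0.0005, 0.001, 0.005],
}

grid = GridSearchCV(model, param_grid, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)

best_model = grid.best_estimator_
print(accuracy_score(y_test, best_model.predict(X_test)))
The search refits the pipeline for every parameter combination, so it takes a while; the best estimator can then be evaluated on the test set exactly like the original model.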

Getting The Misclassified Names

The model certainly misclassified some of the faces. To see the names of the misclassified people, you can compare the model's predictions with the actual values:

from colorama import Fore, Style

incorrect = 0

length = len(predictions)

print("Actual\t\t\t\tPredicted\n")

for i in range(len(predictions)):
    if predictions[i] != y_test[i]:  # prediction and actual value do not match
        prediction_name = faces.target_names[predictions[i]]  # Getting the predicted name
        actual_name = faces.target_names[y_test[i]]           # Getting the actual name
        incorrect += 1
        print("{}\t\t\t{}".format(Fore.GREEN + actual_name, Fore.RED + prediction_name))

print(Style.RESET_ALL)
print("{} are classified as correct and {} are classified as incorrect!".format(length - incorrect, incorrect))
Here we compare the predictions with the actual values and print the misclassified ones. We also count how many classifications were correct and how many were not.


Classification Report

By looking at the classification report we can see the precision, recall, F1-score, accuracy, and other metrics of the classification. If you want a better understanding of these metrics, check this article:
from sklearn.metrics import classification_report
print(classification_report(predictions, y_test, digits=2))

----


                precision    recall  f1-score   support

           0       0.70      0.67      0.68        24
           1       0.87      0.78      0.82       121
           2       0.82      0.79      0.80        52
           3       0.90      0.86      0.88       213
           4       0.69      0.84      0.76        32
           5       0.68      0.96      0.79        24
           6       0.76      0.84      0.80        19
           7       0.68      0.75      0.71        55

    accuracy                           0.82       540
   macro avg       0.76      0.81      0.78       540
weighted avg       0.83      0.82      0.82       540
If you want a more detailed breakdown of the correct classifications and misclassifications, you can look at the confusion matrix.

from sklearn.metrics import confusion_matrix

matrix = confusion_matrix(predictions, y_test)

matrix

----

array([[ 16,   4,   2,   1,   0,   1,   0,   0],
       [  3,  94,   2,   8,   3,   3,   3,   5],
       [  3,   1,  41,   6,   1,   0,   0,   0],
       [  1,   5,   3, 184,   3,   3,   2,  12],
       [  0,   0,   1,   0,  27,   3,   0,   1],
       [  0,   0,   0,   1,   0,  23,   0,   0],
       [  0,   1,   0,   1,   0,   0,  16,   1],
       [  0,   3,   1,   4,   5,   1,   0,  41]], dtype=int64)
Each value on the diagonal of the confusion matrix is the number of samples of a given label that were classified correctly; a perfect classifier would produce a confusion matrix in which every value off the diagonal is zero. To make the matrix easier to read, you can visualize it as a heat map, as sketched below.
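The sketch below assumes seaborn is installed (it is not used elsewhere in this article); note that, because of the argument order used in confusion_matrix(predictions, y_test) above, the rows correspond to predicted labels and the columns to actual labels.

import seaborn as sns
import matplotlib.pyplot as plt

# Heat map of the confusion matrix, labelled with the people's names
sns.heatmap(matrix, square=True, annot=True, fmt='d', cbar=False,
            xticklabels=faces.target_names,
            yticklabels=faces.target_names)
plt.xlabel('actual label')
plt.ylabel('predicted label')
plt.show()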
