I’d like to talk about a model I am working on. To do this, I will break down and define the meaning of each word in its title.
Facial Expression: A result of muscles within the face contracting and releasing in particular ways to convey conscious or subconscious messages about a person’s emotional state.
Recognition: The act of being aware of and perceptive toward a person, place, event, or anything else that can be perceived through the senses.
Model: In Machine Learning, a model is a set of algorithms, code, and statistics implemented with the purpose of adapting to a set of data. The model is given massive amounts of data and must “learn” from it without becoming overly reliant on the specific examples used to train it. A good model can accurately handle new information presented to it after training.
Facial Expression Recognition models are typically trained with two folders, each containing a massive number of pictures (which may be labeled or not, depending on the approach). One folder is used to train the model, and the other is used to test the model’s accuracy, recall, and precision when it is faced with new data outside of its training set.
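To make the split concrete, here is a minimal sketch of dividing one pool of image filenames into training and test sets. The filenames, the 80/20 split ratio, and the `split_dataset` helper are all illustrative assumptions, not part of any particular library.

```python
import random

def split_dataset(filenames, train_fraction=0.8, seed=42):
    """Shuffle a list of image filenames and split it into training and test sets."""
    rng = random.Random(seed)           # fixed seed so the split is reproducible
    shuffled = filenames[:]             # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical filenames standing in for the two folders of images.
files = [f"face_{i}.png" for i in range(100)]
train, test = split_dataset(files)
print(len(train), len(test))  # 80 20
```

Keeping the two sets disjoint is the whole point: the test folder must contain images the model never saw during training.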
Before creating a Facial Expression Recognition model, there are a few steps to prepare for implementation. First, you import the necessary libraries, which provide prebuilt methods to assist with data processing; TensorFlow and Keras are very popular in this field. Once you have your essential tools, you determine the sizes of your training and test sets. You then scale the data to be as uniform as possible, for example by converting all images to grayscale and resizing them to the same dimensions.
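The scaling step above can be sketched with NumPy. The `preprocess` function, the 48×48 target size, and the nearest-neighbour resize are illustrative assumptions; in practice a library such as OpenCV or PIL would handle the resizing.

```python
import numpy as np

def preprocess(image, size=(48, 48)):
    """Convert an RGB image array to grayscale, resize it, and scale pixels to [0, 1]."""
    # Weighted sum of the RGB channels (standard luminance coefficients).
    gray = image[..., 0] * 0.299 + image[..., 1] * 0.587 + image[..., 2] * 0.114
    # Crude nearest-neighbour resize via index sampling, shown only to make
    # the step concrete (a real pipeline would use a library resize).
    rows = (np.arange(size[0]) * gray.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * gray.shape[1] / size[1]).astype(int)
    resized = gray[rows][:, cols]
    # Scale pixel values from [0, 255] down to [0, 1].
    return resized / 255.0

# A fake 96x128 RGB image standing in for a real photograph.
img = np.random.randint(0, 256, (96, 128, 3)).astype(float)
out = preprocess(img)
print(out.shape)  # (48, 48)
```

After this step every image has the same shape and value range, which is what the model’s input layer expects.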
You then create layers within an architecture called a Convolutional Neural Network (CNN). In a nutshell, a CNN slides small filters across an image, analyzes the local regions they cover, and draws out any potential patterns; later layers flatten these feature maps into one-dimensional vectors for classification. After stacking several layers with customized hyperparameters, the model goes through several epochs of training. Once complete, the real test begins.
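To show what a single convolutional layer actually computes, here is a bare-bones 2D convolution in NumPy. The `convolve2d` function and the hand-picked edge filter are illustrative; frameworks like Keras learn the filter values during training rather than having them written by hand.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image and record its response at each
    position -- the core operation inside a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the filter with the patch it covers.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 5x5 image whose right half is bright, plus a vertical-edge filter.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
edge_filter = np.array([[1.0, -1.0],
                        [1.0, -1.0]])
response = convolve2d(image, edge_filter)
# The response is strongest (in magnitude) exactly where the dark-to-bright
# edge sits, and zero over the flat regions.
print(response.shape)  # (4, 4)
```

A real CNN stacks many such filters per layer and follows them with pooling and dense layers, but each filter is doing exactly this sliding dot product.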
The trained model is then exposed to data it has never seen before: the test set. Based on these results, the model’s efficiency becomes apparent. It is worth noting that the more depth and training time the model is given to study the data, the higher the chance of what is called overfitting. Overfitting is an issue in Machine Learning where the trained model fits its training data too closely, making it unreliable on any new data outside of the training set.
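A simple way to spot overfitting is to compare training and test accuracy after evaluation. The `overfitting_gap` helper and the 0.10 threshold below are illustrative assumptions, not a standard diagnostic.

```python
def overfitting_gap(train_accuracy, test_accuracy, threshold=0.10):
    """Flag a likely overfit model when training accuracy far exceeds test
    accuracy. The threshold is an arbitrary illustrative choice."""
    gap = train_accuracy - test_accuracy
    return gap, gap > threshold

# Hypothetical scores: near-perfect on training data, much weaker on test data.
gap, overfit = overfitting_gap(0.99, 0.71)
print(overfit)  # True
```

A large gap like this suggests the model memorized its training images rather than learning patterns that generalize.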
The two most confusing but crucial steps of this process are scaling the data and tweaking the model’s parameters. I would advise spending a good amount of time with the library documentation regarding these two very important steps in creating a successful model using a Convolutional Neural Network.