In a disturbing new identity theft trend, there have been several cases in which an intruder unlocked a phone using a photo of its rightful owner taken from social networks. Similarly, from social media posts featuring a “thumbs up” gesture, criminals could copy victims’ fingerprints and print them in silicone to unlock devices, given the right technology. Meanwhile, subcutaneous chips implanted in or near the body are increasingly being used worldwide to exchange information. This is carried out by means of two technologies: NFC (Near Field Communication), which is wireless and short-range; and RFID (radio-frequency identification), which works by radio frequency, like the technology used in electronic tolls.
Among other possible applications, chips are often implanted in pets to store medical history in the event an animal runs off and someone finds it. With this information, animal shelters and veterinary clinics can track down the pet’s rightful owner. In human beings, RFID chips are implanted for functions similar to those performed by a key fob or a credit card, such as making a payment, “checking in” at work, or opening doors with digital locks. The chip works by contact or proximity and uses the same technology as “contactless” cards. RFID and NFC could help offset some of the gaps in facial recognition technology, but widespread adoption has yet to be seen.
The Problems With Facial Recognition Technology
More than a year ago, an experiment evaluated the effectiveness of recognition systems against face shields, masks, and paper or digital photos. The results showed considerable error rates, which is concerning given the vast range of potential applications for facial recognition. Unlocking a cell phone has far different implications than using facial ID as a resource in an investigation, for example. False positives occur often; when we evaluate algorithms, for instance, we see that some may assign an incorrect gender. Every AI algorithm derives its intelligence from the data used to train it, and facial recognition systems have been trained on images of complete faces. Amid the COVID-19 pandemic, however, many people wore facial coverings, resulting in facial recognition failures.
Facial recognition systems work with deep learning, a form of AI that processes information through multiple stacked layers of “artificial” or “simulated” neurons. These neural networks are trained on thousands or even millions of examples of the types of problems the system is likely to encounter, allowing the model to “learn” how to correctly identify patterns in the data. Facial recognition systems use this method to isolate certain characteristics of a face detected in an image, such as the distance between the eyes, the eyebrows, the texture of the skin, and the depth and color of the features, combined with so-called thresholds, or confidence scores. A high degree of precision is possible only under ideal conditions, where lighting and positioning are consistent and the subject’s facial features are clear and unobstructed.
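The matching step described above can be sketched in a few lines: represent the extracted facial characteristics as an embedding vector, compare it against a stored one, and accept the match only if the similarity clears a confidence threshold. The embeddings and the threshold value below are illustrative assumptions, not values from any real system.

```python
import math

def cosine_similarity(a, b):
    # Measure how closely two face embeddings point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(enrolled, candidate, threshold=0.9):
    # Accept the candidate only if similarity clears the confidence threshold.
    return cosine_similarity(enrolled, candidate) >= threshold

# Hypothetical 4-dimensional embeddings; real systems use hundreds of
# dimensions derived from features like landmark distances and skin texture.
enrolled    = [0.42, 0.91, 0.13, 0.77]
same_person = [0.40, 0.89, 0.15, 0.80]  # slight lighting/pose variation
different   = [0.90, 0.10, 0.85, 0.05]

print(is_match(enrolled, same_person))  # True
print(is_match(enrolled, different))    # False
```

This also makes the “ideal conditions” caveat concrete: poor lighting or occlusion perturbs the candidate embedding, pushing the similarity below the threshold and producing a false rejection.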
In the real world, accuracy rates tend to be much lower: results vary between high-quality photographs and images of individuals “in the wild,” where the subject may not be looking directly at the camera or may be obscured by objects or shadows. Aging is another factor that raises error rates, as changes in people’s faces over time can make it difficult to match images taken many years apart.
External factors become most apparent when matching faces recorded in video surveillance footage. With masks, for instance, matching becomes extremely unstable. When we add other barriers, such as a hat or dark glasses, the results are even more unreliable.
Going Beyond Facial Recognition
Nowadays, new technology seeks to add precision to facial recognition, and it has a much broader range of potential applications. For instance, it could be used to grant access to restricted areas in an office or building, for online banking, or for use of ATMs. C2FIV (Concurrent Two-Factor Identity Verification) is a new form of technology that requires both facial identity and a specific movement of the face to gain access. The user looks at a camera and records a short video of one to two seconds containing a single facial movement, or a movement of their mouth and lips as they read a phrase that only they know. The video is then processed on the device, which extracts the facial features and facial-movement features and stores them for identity verification.
C2FIV relies on an integrated neural network framework that learns facial features and actions at the same time. This framework models dynamic sequential data such as facial movements, where all the frames of a recording must be considered. The user’s facial characteristics and movements are embedded and stored on a server or on the device, and the user’s identity is verified if their facial appearance and movements match the stored recording. Possible facial movements could include blinking, smiling, raising the eyebrows, or mouthing a phrase.
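To illustrate why sequential data changes the matching problem, here is a minimal sketch, with made-up two-dimensional frame embeddings, of verifying a recorded facial movement frame by frame. The point of the sketch is that the same frames in a different order fail verification, so facial appearance alone is not enough; the embeddings and threshold are assumptions for illustration only.

```python
import math

def cosine_similarity(a, b):
    # Directional similarity between two frame embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def verify_sequence(stored, live, threshold=0.85):
    # Compare the sequences frame by frame: every frame must match its
    # counterpart, so the order of the movement matters, not just appearance.
    if len(stored) != len(live):
        return False
    return min(cosine_similarity(s, l) for s, l in zip(stored, live)) >= threshold

# Hypothetical embeddings for a blink: neutral -> eyes closed -> neutral.
stored_blink = [[0.90, 0.10], [0.10, 0.90], [0.90, 0.10]]
live_blink   = [[0.88, 0.12], [0.12, 0.88], [0.88, 0.12]]  # same motion, noisy
wrong_order  = [[0.12, 0.88], [0.88, 0.12], [0.12, 0.88]]  # frames reordered

print(verify_sequence(stored_blink, live_blink))   # True: motion matches
print(verify_sequence(stored_blink, wrong_order))  # False: order differs
```

A real system would align sequences of different lengths and learn the embeddings with a neural network rather than hand-code them, but the frame-by-frame comparison captures the core idea of pairing identity with a specific motion.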
Since the data set is larger than that provided by facial ID alone, the expectation is that the efficiency of matching will improve. As such, it’s possible that a simple wink could be the key to safeguarding personal data and other valuable assets in the future.