When we think about facial recognition technology, many of us envision something out of the movies: a government official pinpointing a criminal's location within seconds. While law enforcement does use such technologies, to understand their security implications we first need to define a use case.
Anyone who has traveled internationally in recent years has likely encountered some form of facial recognition technology at a passport control kiosk. When a traveler steps up, the kiosk prompts them to place their passport on a scanner and compares the photo on the passport to the image captured by the kiosk’s camera. It then compares both to images stored in a backend database. If the images match, the traveler can proceed.
While humans can quickly determine if two images are likely the same person, computers have a much harder time with that task. In effect, the underlying technology performs an analysis to determine whether an image from one source has a high probability of matching an image from another source. When used in controlled environments, such as at a passport kiosk, the impact of variables like poor lighting or the angle of the camera can be reduced, but the result remains that of a probable match.
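In practice, systems like this typically reduce each face to a numeric vector (an embedding) and compare vectors rather than raw pixels. The sketch below is illustrative only: the embeddings, the cosine-similarity measure, and the 0.8 threshold are assumptions, not details from any particular kiosk system, but they show why the result is a probable match rather than a certainty.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_probable_match(passport_embedding, camera_embedding, threshold=0.8):
    """Declare a match only when similarity clears a tunable threshold.

    The threshold is illustrative; real systems calibrate it against
    measured error rates for their environment.
    """
    return cosine_similarity(passport_embedding, camera_embedding) >= threshold
```

Because the comparison yields a score rather than a yes/no answer, two photos of the same person in different lighting may fall just below the threshold, which is exactly the kind of uncertainty controlled environments try to minimize.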
Any lack of precision can potentially be compensated for with human-level review, depending on the goal of the system. For example, if a government wants to deny entry to those on watch lists, a false negative occurs when a banned individual gains entry, and a false positive occurs when an innocent person is rejected. To limit false negatives, officials could tune such a system to allow an increased number of false positives, knowing that the secondary inspection a passport control officer performs will resolve them.
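That tuning decision can be made concrete by counting both error types at different thresholds. The snippet below is a minimal sketch with made-up similarity scores; the numbers exist only to show the trade-off: lowering the threshold removes the false negative at the cost of an extra false positive for the human officer to resolve.

```python
def error_rates(scores, threshold):
    """Count errors for a list of (similarity, is_same_person) pairs.

    A false positive is a non-match scored above the threshold;
    a false negative is a true match scored below it.
    """
    false_pos = sum(1 for s, same in scores if s >= threshold and not same)
    false_neg = sum(1 for s, same in scores if s < threshold and same)
    return false_pos, false_neg

# Hypothetical scored comparisons: (similarity, ground truth).
scores = [(0.95, True), (0.70, True), (0.60, False), (0.85, False)]

# At a strict threshold of 0.9 the genuine 0.70 match is rejected
# (a false negative); relaxing the threshold to 0.65 accepts it,
# but now the 0.85 impostor also passes (a false positive).
```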
Anyone attempting to exploit such a system succeeds if someone not allowed into the country is accepted by the facial recognition system, i.e., if they force a false negative. Since the facial recognition software at its core compares images, it needs a known and trusted source of images for the individual. In the case of passports, there’s only one image to compare, but the underlying system still needs to know how to recognize faces. This happens by training the facial recognition system.
This training requires source images taken in various settings, with various expressions, and with the person’s face at a variety of angles. Given enough training images, the system eventually identifies the individuals in those images with a high degree of confidence. From there, the concept of a “face” becomes part of the system, and it’s possible to match two images against that concept and conclude whether they show the same person. In effect, we want to replicate the human concept of familiarity. Conceptually, think of the system as looking at the relative spacing between eyes, ears, nose and mouth, or using the unique shape of an ear, for instance, as a priority attribute.
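The "relative spacing" idea can be sketched in a few lines. The landmark names and coordinates below are hypothetical, and real systems learn far richer features, but the sketch shows the principle: distances between facial landmarks, normalized so the features don't change with image scale.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def spacing_features(landmarks):
    """Turn a few facial landmarks into scale-invariant spacing ratios.

    `landmarks` is a dict of (x, y) points; dividing by the distance
    between the eyes means the same face photographed at different
    sizes yields the same feature values.
    """
    eye_span = distance(landmarks["left_eye"], landmarks["right_eye"])
    return (
        distance(landmarks["nose"], landmarks["mouth"]) / eye_span,
        distance(landmarks["left_eye"], landmarks["nose"]) / eye_span,
    )
```

Note that the second and third ratios depend on the nose and mouth, which is exactly why coverings that hide those landmarks degrade matching, as discussed below.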
So, what happens when a shift in fashion has everyone wearing sunglasses, hats or face coverings? In controlled settings like those at passport checkpoints, the authorities ask us to remove glasses and hats, but with the spread of COVID-19 prompting health officials to mandate facial coverings, the real impact of those mandates on facial recognition systems used in controlled environments has yet to fully play out.
Unfortunately, in much of the western world, facial coverings are often associated with anti-social behavior. For immigration authorities who have the luxury of defining acceptable attributes for passport images, facial coverings aren’t as big a concern, but for general purpose facial recognition systems businesses might use, it’s an important question, and one that NIST investigated. The predictable outcome was that the greater the facial coverage, the poorer the match. However, with facial covering design now a fashion choice for many, facial recognition technologies must address coverings directly. This extends beyond potentially reworking the training samples to ignore mouth and nose features and comes back to the use case.
Any business making use of or subscribing to services using facial recognition technologies should review their usage in light of the recent ubiquity of facial coverings. As part of that review, they should understand the training data used and whether that data represents a valid data set for their requirements. They should also monitor match quality for signs that the facial recognition software is struggling. Since some systems retrain on an ongoing basis, businesses should look for signs that the underlying model is attempting to compensate for large deviations in its matching. In effect, companies will want to know when the confidence decreases, and the model starts to “guess”.
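One simple way to operationalize that monitoring is to track a rolling average of match-confidence scores and raise a flag when it drops below an acceptable floor. The class below is a minimal sketch; the window size and 0.75 floor are assumed values a business would calibrate against its own baseline, not figures from any vendor's product.

```python
from collections import deque

class ConfidenceMonitor:
    """Track recent match-confidence scores and flag downward drift."""

    def __init__(self, window=100, floor=0.75):
        # Only the most recent `window` scores influence the average.
        self.scores = deque(maxlen=window)
        self.floor = floor

    def record(self, score):
        self.scores.append(score)

    def drifting(self):
        """True when the rolling average falls below the floor,
        suggesting the model has started to "guess"."""
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) < self.floor
```

A sustained drift flag would then trigger the human review the article recommends, rather than any automated action.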
As with any technology, businesses using facial recognition software need to remember that no software represents an ideal solution to a given problem. Ongoing reviews not only ensure the technology meets its objectives, but that it continues to do so in a secure manner in full compliance with appropriate regulations. These reviews can also serve as a process to course-correct an existing implementation in the face of changes in public perception and behaviors.
Tim Mackey, principal security strategist, Synopsys Cybersecurity Research Center (CyRC)