A new NIST (National Institute of Standards and Technology) study of face recognition algorithms developed after the start of the Covid-19 pandemic reveals significant progress in recognizing masked faces. The report, titled “Ongoing Face Recognition Vendor Test Part 6B: Face Recognition accuracy with Face Masks Using Post-COVID-19 Algorithms,” details the performance of dozens of new facial recognition algorithms.
A previous report, published in July and covering algorithms developed before March 2020, showed that the software had trouble with masked faces. The new study indicates that the algorithms are now doing much better: it evaluates 65 new facial recognition algorithms alongside the previously tested ones, bringing the total to 152 algorithms with improved performance on masked faces.
How were the facial recognition algorithms created?
“Using the same set of 6.2 million images as it had previously, the team again tested the algorithms’ ability to perform ‘one-to-one’ matching, in which a photo is compared with a different photo of the same person — a function commonly used to unlock a smartphone,” the report notes.
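In practice, one-to-one matching is usually implemented by comparing feature vectors (embeddings) extracted from the two photos and accepting the pair if their similarity clears a threshold. The sketch below illustrates that idea with made-up 4-dimensional vectors and a hypothetical threshold; real systems use learned embeddings with hundreds of dimensions, and the specific values here are not from the NIST report.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one verification: accept only if the two embeddings are similar enough."""
    return cosine_similarity(probe, reference) >= threshold

# Toy 4-D "embeddings" for illustration only.
enrolled = np.array([0.9, 0.1, 0.3, 0.2])
probe_same = np.array([0.85, 0.15, 0.28, 0.22])   # same person, new photo
probe_other = np.array([-0.2, 0.9, -0.1, 0.4])    # different person
print(verify(probe_same, enrolled))   # similar vectors -> accepted (True)
print(verify(probe_other, enrolled))  # dissimilar vectors -> rejected (False)
```

A mask that hides part of the face perturbs the probe embedding, pushing genuine pairs below the threshold — which is exactly the kind of error rate the NIST test measures.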
It should be noted that the images used in the analysis had mask shapes applied digitally rather than showing people wearing physical masks.
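Digitally applying a mask amounts to overwriting the lower-face region of each photo with a mask-colored shape. The sketch below shows the simplest possible version on a grayscale array; the coverage fractions and blanking value are hypothetical, since NIST derived its mask shapes from detected facial landmarks and varied their shape and color.

```python
import numpy as np

def apply_synthetic_mask(face: np.ndarray, value: int = 0) -> np.ndarray:
    """Digitally 'mask' a grayscale face image by blanking a lower-face band.
    The fixed coverage fractions below are a crude stand-in for the
    landmark-driven mask shapes NIST actually used."""
    h, w = face.shape
    out = face.copy()
    top, bottom = int(0.55 * h), int(0.95 * h)   # nose-to-chin band
    left, right = int(0.20 * w), int(0.80 * w)   # cheek-to-cheek span
    out[top:bottom, left:right] = value
    return out

# Toy 100x100 uniform-gray "face"; a real input would be a face photo.
face = np.full((100, 100), 128, dtype=np.uint8)
masked = apply_synthetic_mask(face)
print(masked[70, 50])   # inside the masked band -> 0
print(masked[10, 50])   # forehead left untouched -> 128
```

The advantage of the digital approach is that mask shape, coverage, and color can be varied systematically across millions of images — though, as noted below, results with real physical masks may differ.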
So, what did the report find out in terms of facial recognition algorithm reliability?
- When both the new image and the stored image are of masked faces, error rates run higher.
- The more of a face a mask covers, the higher the algorithm’s error rate tends to be.
- Mask colors affect the error rate.
- A few algorithms perform well with any combination of masked or unmasked faces.
Another significant conclusion of the analysis is that “individual algorithms differ.” Users of the algorithms should be well-acquainted with how their software performs in their specific circumstances. NIST also notes that testing with real physical masks would be preferable to digital simulations.
Facial recognition can be bypassed
In August, security researchers published findings showing how modern facial recognition systems can be fooled by attackers exploiting a weakness in the underlying machine learning algorithms.
One of the discovered methods relied on special software designed to generate photorealistic faces; this attack model draws on several frameworks for creating such images.