
The Accuracy of Machines in Facial Recognition

Researchers studying human-versus-machine facial recognition abilities over the long term have produced interesting results.

The accuracy of machines relative to human performance in facial recognition has naturally increased over the past 10 years with the growing computational abilities of machines and the employment of advanced algorithms, according to Alice O'Toole, professor at the School of Behavioral and Brain Sciences at The University of Texas at Dallas (UT Dallas).

A decade ago, researchers at UT Dallas conducted experiments that showed machines “far surpassed” human performance when comparing two photographs of individuals. However, machines only achieved this level of performance with easy or moderately difficult pairs of photographs, O’Toole explained. Humans participating in the biometric study were actually better at comparing the hardest level of paired images.  

O’Toole shared the progress of the university’s research—in how accurate machines are relative to human performance in facial recognition—during the Federal Identity Virtual Collaboration event, known as FedID, on September 8.

In UT Dallas’ laboratory testing, humans—from ordinary students to experts trained in biometrics—considered two different photos. Pairings were rated on a scale of difficulty. Computers with artificial intelligence algorithms would first process one image in the pair, then process the second image and compare the representations of the two images that they produced. The experiments began with a 2010 effort with the National Institute of Standards and Technology, or NIST, and the University of Notre Dame, which tested the algorithmic performance of computers using a data set of various types of images with differences in lighting, clothing, hair style and facial expressions.
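The two-step pipeline described above—process each image into a representation, then compare the representations—can be sketched roughly as follows. This is an illustrative outline, not the researchers' actual code: the `embed` function here is a deterministic stand-in for a trained face-recognition network, and cosine similarity is one common (assumed) choice of comparison measure.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-recognition network that maps an image to a
    fixed-length feature vector. A real system would run a trained deep
    network here; this toy version just derives a deterministic vector
    from the pixel values so the example is self-contained."""
    rng = np.random.default_rng(seed=int(image.sum()) % (2**32))
    return rng.standard_normal(128)

def similarity(image_a: np.ndarray, image_b: np.ndarray) -> float:
    """Process each image independently, then compare the two resulting
    representations with cosine similarity (range -1 to 1; higher means
    the pair is more likely the same person)."""
    a, b = embed(image_a), embed(image_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical images yield identical embeddings, hence similarity 1.0.
img = np.ones((64, 64))
assert abs(similarity(img, img) - 1.0) < 1e-9
```

A threshold on the similarity score would then turn the comparison into a same-person/different-person decision, which is how such pairwise verification is typically evaluated.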

“Even a decade ago with these images, humans and machines were really more or less equivalently matched when you got to the hardest pairs,” the professor stated.

Between 2012 and 2014, the combination of a new class of algorithms and advancements in neural networks changed the pattern-recognition abilities of machines, O’Toole noted. “They became available and were widely used, and pretty much changed the state of the art in many ways, for many problems in computer-based pattern recognition,” the professor observed.

But would these computational improvements automatically mean that computers could always prevail in facial recognition even in the hardest cases? she asked.

Employing such computer advancements in studies over the last five years, the researchers also used several groups of human participants: various levels of forensic examiners who testify in court cases, experts trained in facial recognition or only in fingerprint identification, a group of “super recognizers” who were untrained hobbyists, and a group of students.

The UT Dallas researchers found that the computers were now able to perform at the level of the best humans on the most difficult pairs.

“An algorithm developed in 2015 tested at the level of the students,” she said. “One from 2016 performed at the level of fingerprint examiners, and one from 2017 was really close to the face specialists, the super recognizers. The most recent algorithm we have available was definitely at the level of the best humans.”

However, O’Toole emphasized that the combination of the best forensic scientists and the best machines still beats the performance of machines alone.

“The [results] were obviously interesting, but in my mind, the most interesting aspect of the study was when we actually fused or combined the judgments of our examiners here with the judgments of the best available algorithms.”
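Fusing human and machine judgments can be done in several ways; one minimal approach—shown here purely as an illustration, not necessarily the study's exact method—is to average the examiners' ratings with the algorithm's score once both are expressed on a common scale.

```python
import statistics

def fuse_judgments(examiner_scores, algorithm_score):
    """Combine human examiner ratings with an algorithm's score by
    simple averaging. This assumes all scores already lie on one shared
    scale, e.g. -3 (certainly different persons) to +3 (certainly the
    same person); in practice the raw scores would need rescaling
    first."""
    return statistics.mean(list(examiner_scores) + [algorithm_score])

# A hesitant examiner (+1) fused with a confident algorithm (+3)
# produces an intermediate combined judgment.
fused = fuse_judgments([1.0], 3.0)
assert fused == 2.0
```

The intuition is that human and machine errors are not perfectly correlated, so averaging independent judgments tends to cancel some of each source's mistakes—consistent with O’Toole's point that the fused combination outperforms machines alone.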

The professor also dispelled common myths about facial recognition, including the contentions: that face identification would be fair if machines were eliminated from the process; that facial recognition systems were fair before the advent of advanced algorithms; that race is categorical, and that it is known what those categories are; and that one face is as recognizable as any other.

“There is sometimes an assumption that face identification would be fair if we eliminated machines from the task,” she stated. “And this I can promise you as a psychologist we know for 50 years that this is not true. Humans show race bias in face recognition and this is a finding that has been replicated hundreds of times at this point. It is absolutely clear to psychologists that getting rid of the machines does not fix the race bias at this point.”

O’Toole also noted that every generation of facial recognition algorithm developed since 1991 shows bias. It is not just the recently developed powerful machines that are causing the problem.

“In biological terms, we all know that race is not categorical,” she continued. “There are many, many people out there of mixed-race heritage. One concern has to be that as face recognition algorithms get optimized to perform well on certain ‘race categories,’ that may have unintended consequences of causing the algorithm not to perform well, especially with people who don’t fit categories. That is always something we have to keep in mind.”