Panelists (l-r) Duane Blackburn, S&T policy analyst for The MITRE Corporation; Ralph Rodriguez, Facebook research scientist; Logan O'Shaughnessy, attorney, Privacy and Civil Liberties Oversight Board; Arun Ross, professor, Michigan State University; and moderator Stephanie Schuckers, director of the Center for Identification Technology Research at Clarkson University, discuss misperceptions about facial recognition at FedID 2019. Photo: Shaun Waterman/Signal

Misperceptions About Facial Recognition Taint Public Conversation

September 26, 2019
By Shaun Waterman


Experts: Bad science and click-driven media stoking public fears on facial recognition tech.


One or two inaccurate studies, amplified by a media focused on conflict, have stoked Americans’ concern about facial recognition, tainted the public conversation and led to flawed legislative proposals to ban the technology, experts told AFCEA International’s Federal Identity Forum and Expo Wednesday.

“We had a couple of academic papers come out that unfortunately were pretty wrong, to be blunt,” said Duane Blackburn, a science and technology policy analyst with The MITRE Corporation and one of the conference organizers.

Blackburn said one of the papers had attributed to facial recognition the shortcomings of a different technology, gender identification. Facial recognition essentially compares two images and attempts to judge whether they show the same person; gender identification attempts to infer a person’s gender from a single image.

“Both of those technologies use images of faces as inputs, both of them use artificial intelligence and machine learning ... to analyze those inputs, but they are different technologies,” said Blackburn, comparing it to “someone who had read somewhere that the oil filter on their motorized lawn mower was recalled and was therefore telling everyone else that ... minivans weren’t safe to drive.”

Blackburn declined to identify the paper he was referencing, but it appears to be a widely reported study by a researcher at the Massachusetts Institute of Technology (MIT) Media Lab.

Fellow panelist Arun Ross, a professor in the Department of Computer Science and Engineering at Michigan State University, shaded the matter slightly differently, blaming those who had written about the papers, rather than their authors.

“Those papers were reporting something,” he said—in the case of the MIT paper, the fact that gender recognition software appeared to work poorly on images of darker skinned individuals. But “the communicators of those papers, external to the authors, were using it to communicate something else” and allege that facial recognition was somehow biased against people of color.

“Bias is a very loaded term,” he pointed out, noting that, “In statistics, it has a very precise definition,” but that wasn’t the definition used in many of the articles written about the study. 

“There is a great danger in throwing around the word bias so loosely,” he said. “That is a great disservice, not just to the community but to the authors of those papers.”

He noted that the U.S. National Institute of Standards and Technology had compiled a huge database of results from its vendor testing program for facial recognition. Although the agency recently published its overall statistics—revealing that overall accuracy rates have leapt during the past five or six years—it has yet to publish statistics on “demographic dependencies,” which will show the effect of age, sex and race on accuracy rates. That study is expected in the fall, officials said.

In the absence of such corrective data, inaccurate assertions about facial recognition were then amplified by national institutions poorly equipped to deal with complex technical issues, Blackburn said.

“Unfortunately, we currently have a national culture where the accuracy of something is less important than its potential utility,” he argued, adding that inaccurate studies could be used by “folks who have their own agendas. The press smells conflict and they want clicks,” so they report it.

“The House of Representatives did exactly what it was designed to do by quickly following the national pulse” and holding a pair of hearings, he said. Unfortunately, one of those hearings featured a witness who “also didn’t understand the technology properly and provided inaccurate information.”

Refuting these inaccuracies is all but impossible, he added, because “the folks who really understand the technology, and who ought to be talking about it, are hamstrung”—either barred from public comment by controversy-shy employers or deterred by the minefield such a debate could easily become.

“The end result is we’re having national conversations [about facial recognition] that are in many ways completely disconnected from reality, and that’s leading to bills being put forward in Congress to ban the technology.”
