UMASS PROFESSOR CALLS FOR FEDERAL OVERSIGHT OF FACE RECOGNITION SOFTWARE

It’s been in the news: public concern over the growing use of face recognition software, its flaws and biases, and its deployment by private companies and government agencies in ways that might be unfair, or even detrimental, to certain populations. UMass Amherst College of Information and Computer Sciences (CICS) Professor Erik Learned-Miller, a pioneer in the development of face recognition software, is calling for federal oversight of the technology to promote its fair use.

His recommendations are outlined in a recently released white paper, titled “Facial Recognition Technologies in the Wild: A Call for a Federal Office,” co-authored with Joy Buolamwini, Vicente Ordonez, and Jamie Morgenstern. Learned-Miller’s motivation for regulation comes from a deep understanding of the problems associated with face recognition software and its use in ways that were not originally intended.

In many ways, Learned-Miller is the right person to suss out face recognition regulation. Along with UMass Amherst alumnus Gary Huang ’12PhD and Facebook research scientist Tamara Berg, Learned-Miller was honored with the 2019 Mark Everingham Prize at the International Conference on Computer Vision. The three were recognized for their work on one of the most influential face datasets in the world, Labeled Faces in the Wild (LFW), considered the gold standard by which facial recognition algorithms are measured. It’s been used by companies such as Google and Facebook to test their facial recognition accuracy.

LFW moved the needle on face recognition accuracy for several reasons, says Learned-Miller, including the types of images used in face recognition datasets. “Most people were working with things like passport photos, where faces are very carefully aligned in the middle, looking in a particular direction with no expression. I jokingly call these ‘faces in a vise.’ We wanted to promote research on arbitrary face images and to establish better face recognition rules, so we created LFW,” says Learned-Miller.
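In practice, LFW is used as a face verification benchmark: a system is shown pairs of unconstrained face images and must decide whether both images show the same person, and accuracy is reported over those decisions. The sketch below is a minimal illustration of that idea, assuming scikit-learn’s fetch_lfw_pairs loader and a deliberately naive pixel-correlation similarity with a swept threshold; these choices are illustrative assumptions, not the evaluation protocol used by the researchers or companies mentioned here.

    # Illustrative sketch (assumes scikit-learn is installed; the loader
    # downloads LFW on first use). Evaluates a trivial pixel baseline on
    # the LFW verification pairs.
    import numpy as np
    from sklearn.datasets import fetch_lfw_pairs

    # Each item is a pair of grayscale face images plus a label:
    # 1 = same person, 0 = different people.
    lfw = fetch_lfw_pairs(subset="test", resize=0.5)
    pairs, labels = lfw.pairs, lfw.target

    def similarity(img_a, img_b):
        # Naive baseline: correlation of normalized pixel intensities.
        a = (img_a - img_a.mean()) / (img_a.std() + 1e-8)
        b = (img_b - img_b.mean()) / (img_b.std() + 1e-8)
        return float((a * b).mean())

    scores = np.array([similarity(p[0], p[1]) for p in pairs])

    # Sweep a decision threshold and report the best accuracy
    # (a real evaluation would tune the threshold on held-out folds).
    thresholds = np.linspace(scores.min(), scores.max(), 200)
    accuracies = [((scores > t).astype(int) == labels).mean() for t in thresholds]
    print(f"Best verification accuracy of the pixel baseline: {max(accuracies):.3f}")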

Photo: Erik Learned-Miller

Though LFW was a big improvement over other datasets, it had its limitations. The racial and ethnic diversity of any dataset is limited, as is the number of arbitrary images that can reasonably be collected and labeled.

“There is an illusion out there that if you had a magic database with enough images in it and enough different people, then we could use it to certify all the face recognition systems. That won’t work. No matter how many people you put in a database, there will always be subgroups that will not be well represented; for example, Pacific Islanders, Native Americans over 85, or children with autism or Down syndrome. Will you say your software just doesn’t have to recognize those people? That is not acceptable,” says Learned-Miller.

Stricter demands for data privacy are also making it harder to collect image data, says Learned-Miller. “Data protection laws, like Europe’s General Data Protection Regulation (GDPR), are getting very strict. If your personal information appears in any database, you can demand it be removed for any reason,” he says.

Seeing no way around these problems, Learned-Miller started to think about the issue in a different way—one that was based on his early career as co-founder of a technology company that developed software and computer platforms for use by neurosurgeons in the operating room.

“Our company had to get those products approved by the FDA (Food and Drug Administration),” says Learned-Miller. “The approval process required thorough documentation and the creation of a scientifically valid way in which to test our devices for safety and efficacy, which the FDA either approved for marketing or rejected. That is the model. The more I thought about it, the more I thought, ‘That’s what we need.’ We can’t have a one-size-fits-all test for all possible uses.”

For some applications, such as social media filters that are used just for fun, the consequences of faulty recognition are low. “But if you give police body cams that identify people as they are being arrested, the consequences can be really severe. You are not going to demand the same kinds of tests for both of these uses,” says Learned-Miller.

“The FDA model seemed like a natural fit. They have a whole center just for regulating medical devices and one for regulating pharmaceuticals. These offices operate somewhat differently, but they work on many of the same principles. The FDA has been working on this for 100 years and it is highly sophisticated. As much as people like to criticize the FDA, the model is effective and they do a great job,” says Learned-Miller.

Learned-Miller’s push to regulate face recognition is part of a larger effort by CICS to focus on computer science research for the common good, which envisions a world where computing enhances the well-being of its citizens. Research initiatives take into consideration concepts of equity, accountability, trust, and explainability (EQUATE).

“Now it is time to address many of the larger societal challenges that come with face recognition technology, including fairness, privacy, and intelligent guidelines for its use,” says Learned-Miller. “Many of us at UMass and elsewhere are working hard to address these problems.”

Read the white paper.