
By Stephanie McPherson

Biometric identification is nothing new: the fingerprinting of criminal suspects, for example, has long been standard practice for police. An explosion of technology over the past decade has broadened the reach of biometrics, which now help with such mundane tasks as unlocking our phones and letting us breeze through expedited airport security.

The capabilities of biometrics have expanded with the advent of AI- and algorithm-driven systems, adding a new ethical layer to the contentious issue of government surveillance.

Michelle Spektor PhD ’23, the MIT-IBM Postdoctoral Fellow in Computing and Society in the MIT Stephen A. Schwarzman College of Computing’s Social and Ethical Responsibilities of Computing (SERC), studies lessons from the past that can and should inform the use of biometric technologies in today’s world. “I’m looking at the relationships between biometric identification and state power, politics of national belonging, inclusion, and exclusion in society, and thinking about how those dynamics change or stay the same over time, even as biometric technologies have been changing really significantly,” she says.

A painful history

Biometrics are useful because certain biological traits are unique to each individual, making many physical characteristics secure identifiers. But before biometrics became an aid in police work, colonial powers used them to identify and control the people of their territories, often with racist classification systems. Troublingly, biometrics were also developed by eugenicists, who aimed to “improve” the human race, as they saw it, by favoring certain groups over others.

In more recent decades, biometric data have been digitized and used by governments to help with situations such as disaster relief. “For a state to function, it does need to know who its citizens are,” says Spektor. “Since their beginnings, nation states have used all different kinds of methods to know whom to deliver services to.”

Many countries—examples include Nigeria, Estonia, and India—have national biometric identification systems to help verify eligibility for services such as welfare and health care. Spektor studies the connected history of British and Israeli biometrics from 1904 up to the proposal of national systems in both countries in the early 2000s. The UK program was eventually canceled, due in part to historical associations with criminality and oppressive governments. Israel’s program remains in place despite opposition, and Spektor attributes its implementation to the longtime centrality of security, technology, and national identity in Israeli politics.

Not an infallible system

Biometric identification is by no means perfect. Spektor notes that, in the United States, biometric programs are built on datasets from studies that have traditionally focused on the biomarkers of middle-aged white men.

“We know these systems don’t work as well for people who aren’t white men, for older people, and for people with disabilities,” continues Spektor. “The stakes of misidentification could be, depending on the context, really high, and inequalities that already exist could be exacerbated.”

Including AI in biometric identification can entrench these problems even further. One way past this, she observes, is to ensure that humans remain part of the process. Spektor urges governments considering new systems to consult with the communities that will be most affected, and to explore whether a technological solution is even necessary. “Oftentimes the newest, most exciting technology is not going to be the thing that fixes problems—especially ones that are more structural and societal in nature,” she says. She is working on a book that will share her perspectives with a wider audience, and she contributes to a think tank that consults with US government agencies on technology policy.

An ethical way forward

Within the SERC Scholars Program, Spektor leads a project on surveillance, working with students “to develop a toolkit they can use in their studies and future careers, whether they’re in engineering or tech policy or something else, so that they can promote the creation of more just and equitable technological systems that serve the public interest.”

“Our goal,” she continues, “is to foster an institutional culture in which we give just as much importance to accounting for the human impacts of the technologies we create as we do to mastering the technical nuts and bolts of how to build these technologies.”
