MIT Better World

By John Pavlus

Others, unfortunately, are harder to see. MIT Technology Review recently reported on implicit sexism in some of the vast word sets engineers use to “train” artificial-intelligence applications, which can lead those applications to make objectionable associations, such as linking the word “woman” more closely with “homemaker” than with “programmer.”
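For readers curious about the mechanics, the sketch below shows how such associations can be surfaced from trained word vectors using analogy arithmetic. It is a minimal illustration, not code from the reporting: the embedding file path is a placeholder, and the specific results depend entirely on which pretrained vectors are loaded.

```python
# Minimal sketch: probing pretrained word embeddings for gendered
# associations. The file "word-vectors.bin" is a hypothetical
# word2vec-format vector set, stated here only for illustration.
from gensim.models import KeyedVectors

# Load pretrained vectors (path is an assumption for this example).
vectors = KeyedVectors.load_word2vec_format("word-vectors.bin", binary=True)

# Classic analogy arithmetic: man : programmer :: woman : ?
# If the training text carried gendered patterns, words like
# "homemaker" can rank near the top of this list.
results = vectors.most_similar(
    positive=["woman", "programmer"], negative=["man"], topn=5
)
for word, similarity in results:
    print(f"{word}\t{similarity:.3f}")
```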

D. Fox Harrell, a tenured associate professor of digital media dually appointed in MIT’s Comparative Media Studies Program and Computer Science and Artificial Intelligence Laboratory (CSAIL), detects such invisibly coded-in bias across a range of computational systems, using tools and practices from machine learning, cognitive science, and social criticism. As director of MIT’s Imagination, Computation, and Expression Laboratory (ICE Lab), he also invents new types of digital media to help us do better—not only by avoiding negative bias, but by supporting more powerful ways to represent ourselves in computational systems. “I’m interested in figuring out how we can best capture the kind of nuances that are most empowering to users,” explains Harrell, author of Phantasmal Media: An Approach to Imagination, Computation, and Expression (MIT Press, 2013).

Using frameworks from the Advanced Identity Representation (AIR) Project—an initiative the ICE Lab launched six years ago to develop virtual identity technologies—Harrell has “empirically demonstrated gender and racial discrimination within certain hit mainstream video games,” he says. By identifying and quantifying such biases, he hopes to make it easier for developers—himself included—to design games that avoid them, and to produce games that encourage and support social critique. One AIR creation, Chimeria, is a platform implementing “a model of identity that does not just place people in demographic boxes.” The system treats the elements of a user’s identity (such as visible information in a social-media-style profile, or invisible attributes like a map of what genres of music the user prefers) as dynamically evolving, rather than fixed—which is much closer to how people cognitively categorize in real life.
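To make that idea concrete, here is a minimal sketch of a gradient-membership identity model in Python. The names, numbers, and update rule are illustrative assumptions rather than Chimeria’s actual implementation; the point is only that membership in a category, such as a taste for a music genre, can be a continuously updated degree instead of a fixed label.

```python
# Illustrative sketch (assumptions, not Chimeria's real code):
# identity as gradient, evolving membership in categories rather
# than a fixed demographic box.
from dataclasses import dataclass, field

@dataclass
class IdentityProfile:
    # Degree of membership in each category, from 0.0 to 1.0.
    memberships: dict[str, float] = field(default_factory=dict)

    def observe(self, category: str, signal: float, rate: float = 0.2) -> None:
        """Nudge membership toward a new behavioral observation.

        `signal` is evidence from behavior (e.g., the user just played
        a jazz track -> signal near 1.0 for "jazz listener"). Membership
        drifts toward the signal instead of flipping between boxes, so
        the profile evolves the way preferences actually do.
        """
        current = self.memberships.get(category, 0.5)
        self.memberships[category] = current + rate * (signal - current)

    def dominant(self) -> str:
        """Return the category the user currently resembles most."""
        return max(self.memberships, key=self.memberships.get)

profile = IdentityProfile()
for _ in range(5):
    profile.observe("jazz listener", 1.0)   # repeated jazz plays
profile.observe("metal listener", 1.0)      # a single metal play
print(profile.memberships)  # jazz membership high, metal still low
print(profile.dominant())   # "jazz listener"
```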

Most recently, Harrell has been collaborating with photojournalist Karim Ben Khelifa on “The Enemy,” a virtual- and augmented-reality experience that invites users to empathize with characters on either side of major global conflicts (including the Israeli-Palestinian conflict, warfare in the eastern Democratic Republic of the Congo, and gang conflicts in El Salvador). Using Chimeria’s approach, Harrell and Ben Khelifa are expanding the expressivity of “The Enemy,” enabling the system to sense and respond to users’ body language. Harrell is also conducting NSF-supported research on how avatars affect students’ ability to learn computer science. “We run public school workshops supporting students from groups currently underrepresented in STEM,” he says. “We learned that if people use abstract avatars to represent themselves [while learning], like a dot or geometrical shape, they often do better. We’ve shown that dynamic avatars we call ‘positive likenesses’ are even more effective—digital representations that look like you when you’re doing well, but then appear as an abstract shape when you’re not.”
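The “positive likeness” idea lends itself to a short sketch in the same illustrative Python as above. The threshold, the performance signal, and the two rendering modes here are all assumptions made for clarity, not details of the ICE Lab’s system.

```python
# Illustrative sketch (assumptions, not the ICE Lab's system):
# a "positive likeness" avatar shows the learner's own likeness
# while they are doing well and falls back to an abstract shape
# when they are struggling.

def choose_avatar(recent_scores: list[float], threshold: float = 0.7) -> str:
    """Pick an avatar mode from a rolling performance signal.

    `recent_scores` are normalized task results in [0.0, 1.0];
    the 0.7 threshold is an arbitrary choice for illustration.
    """
    if not recent_scores:
        return "abstract-shape"          # no signal yet: stay neutral
    average = sum(recent_scores) / len(recent_scores)
    return "photo-likeness" if average >= threshold else "abstract-shape"

print(choose_avatar([0.9, 0.8, 0.95]))   # photo-likeness
print(choose_avatar([0.4, 0.5]))         # abstract-shape
```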

If binary 1s and 0s seem fundamentally ill-equipped to contend with the fuzzy borders of identity, Harrell suggests taking a wider view. “In the 17th century, you could just as easily have asked how one could ever hope to capture the nuances of human experience with ‘just oil paint,’” he says. “All technical systems—like computers—are also cultural systems. It’s just that some of these systems are very explicit about the values they embody, while in other systems, the values are more implicit. A lot of my work is about trying to extract and make clear what some of those ‘hidden’ values are.”