“On the one hand, there are people who think it’s going to be the deathblow to humanity,” says David Autor, the Daniel and Gail Rubinfeld Professor in the MIT Department of Economics. “On the other are those who think we’re about to hit the inflection point for the singularity. I don’t think it’s either of those things, but it’s some of both.”
Autor brings his expertise in AI to his role as faculty codirector of MIT’s new James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work. The center will study the decline in labor market opportunities for noncollege workers in recent decades and the interplay between work, technologies, and wealth inequality. Autor’s focus: how to amplify the technology’s benefits while reducing its costs.
His interest in issues of technological disruption and labor markets goes back decades. Before pursuing academia in the early 1990s, Autor worked in San Francisco teaching computer skills to disadvantaged youth as a way to improve their economic prospects. “I took my first economics class when I was 30,” he says. His PhD at Harvard examined the relationship between technological change, inequality, and power, a foundation of his work on AI.
For much of the 20th century, new technologies meant automation, with machines replacing human labor, eroding skills, and depressing wages. AI, Autor argues, has the potential to be different. “We definitely have possibilities open to us that were not prior to this particular technological revolution,” he says. Even so, he maintains, those possibilities are hard to envision now, because transformative technologies create capabilities that were once unimaginable.
“We should be humble about predicting the future,” he says. “It’s very easy to see things that could be automated and replaced, but we’re less good at saying, here are the new things we will create.”
Over the past several decades, computers have excelled at tasks that could be easily codified and proceduralized, making manufacturing and similar skilled jobs based on repetitive tasks vulnerable to replacement. But AI, particularly generative AI, works differently. “It’s not good at rules and procedures,” he says. “It’s much better at absorbing tacit knowledge, recognizing patterns and making inferences from unstructured information—the kinds of things that historically people could do and machines could not.”
In a recent paper coauthored with Neil Thompson, an innovation scholar at MIT’s Computer Science and Artificial Intelligence Laboratory, Autor explored these dynamics across four decades of US labor, using data from the US Census and Department of Labor. Their conclusion is that AI both devalues and amplifies expertise.
Take Uber, for example. Decades ago, being a taxi driver required specialized knowledge of maps, routes, and traffic. Navigation apps erased that barrier, opening up the occupation to more people but depressing wages. “When automation eliminates expertise, it often reduces the scarcity of certain skills, making work accessible to more people but slowing wage growth,” Autor says.
By contrast, fields such as finance and laboratory science have seen AI take over supporting tasks while making the specialized knowledge of experts even more valuable. In those cases, Autor says, “wages tend to rise, but employment will not.” In both cases, AI is changing the landscape of the workforce in decidedly mixed ways, both benefiting and harming workers by reshaping who does the work and how.
From automation to collaboration
While much of the conversation around AI centers on replacement, with machines overtaking human skills, Autor suggests a shift in perspective, designing AI as a collaborative tool. “There are some tools that require you to bring specialized knowledge to make them effective,” he says. “You know, a stethoscope is good for a medical doctor, but not for most people.” AI could be used the same way, to enhance rather than replace human capabilities.
In a recent article for the Atlantic cowritten with James Manyika, a senior vice president at Google, Autor contrasts two aviation technologies. An autopilot can fly a plane independently, but it creates a dangerous situation when it fails and suddenly hands back control. By contrast, a supplemental navigation tool called a heads-up display enhances pilots’ capabilities with superhuman visibility into aircraft trajectory, windspeed, and other factors. “It just makes you a better pilot,” Autor says.
That model for AI transcends aviation. Autor cites an MIT study, for example, that found that when radiologists used AI to read X-rays, the quality of diagnoses declined because the system spit out conclusions without explanation, leaving doctors unclear on whether to trust it. A more effective design would integrate AI analysis into the diagnostic process, giving doctors new insights while preserving their judgment. “It requires a different type of design that allows for interrogation and interaction,” Autor says. “That’s totally attainable.”
The problem isn’t one of technology but philosophy. “Right now, AI is guided by this model of automation, where the goal is replicate, accelerate, overtake, and replace,” Autor says. “If that’s what you’re shooting for, you are going to design differently than if your goal is to make doctors better. It’s not really an engineering problem—it’s a design problem.”
Through the Stone Center, Autor is working with colleagues Daron Acemoglu, Institute Professor, and Simon Johnson PhD ’89, the Ronald A. Kurtz Professor of Entrepreneurship at MIT Sloan, to foster research and debate among scholars on how AI can be built and deployed. “There’s this view in the world that technology does what it does and then people adjust to it, but it’s really so much more,” Autor says. “Economists shouldn’t just be saying, why did this happen or not happen. We can shape how things happen, engineering incentives and decision-making to get the results we want.”
Too often, people assume AI’s trajectory is outside of our control, Autor argues, while in fact artificial intelligence has a lot to do with human intention. “A lot of people are much more fatalistic about this than they realize. But AI is not deciding our future, we are,” he says. “Recognizing that agency is the first step to tackling this problem in a way that is more effective for everyone.”