Kate Kellogg PhD ’05

By Kara Baskin

At the MIT Sloan Health Systems Initiative (HSI), MIT Sloan School of Management professor Kate Kellogg PhD ’05 studies what happens once new technologies arrive in health care and collide with real patients, busy physicians, and high stakes.

These days, that work centers on AI.

“My research examines how people adapt to new technologies, especially AI, and how organizations can redesign work in ways that protect human expertise, autonomy, and well-being,” says Kellogg, the David J. McGrath jr (1959) Professor of Management and Innovation. “I’m an ethnographer. I go into the world people are living in and spend enough time there that I really understand where they’re coming from.”

HSI is a research and education initiative aimed at improving health care delivery and lowering costs by deploying MIT’s expertise in analytics, operations, and incentives. In this capacity, Kellogg embeds herself in large hospital systems such as Duke Health, Mass General Brigham, and NYU Langone Health, observing how interdisciplinary teams build and implement new AI systems.

Today, her focus is the widening chasm between AI’s promise and its uneven reality inside the US health care maze. A core theme of Kellogg’s work is how the challenges and benefits of implementing predictive AI differ from those of generative AI, a distinction she says is essential for understanding both opportunity and risk.

Where AI can help

Both types of AI systems can meaningfully improve diagnostics and personalize care. At Duke Health, developers worked with hospital physicians to build a machine learning model that continuously analyzes real-time electronic health record data (such as vital signs, labs, and medical history) and identifies patterns that predict which patients are at high risk of developing sepsis hours before it becomes clinically apparent. Without rapid detection, the infection can trigger full-body inflammation and organ failure.

“The ER is very busy. Sepsis can easily get missed. If a patient has sepsis, they need to be treated immediately,” Kellogg says. A rapid diagnosis saves lives.
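The article doesn’t describe the Duke model’s internals, but the general pattern of such an early-warning system can be sketched in a few lines: a supervised classifier scores each patient from recent EHR features and flags those above a threshold for review. The features, synthetic labels, model choice, and threshold below are illustrative assumptions, not details of Duke’s system.

```python
# Illustrative sketch of a predictive early-warning model (NOT Duke's actual
# system): a supervised classifier scores patients from recent EHR features
# and flags high-risk cases for nurse chart review.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Assumed features per patient snapshot: heart rate, temperature (C),
# white-blood-cell count, systolic blood pressure.
X_train = rng.normal(loc=[85, 37.0, 9.0, 120], scale=[15, 0.8, 3.0, 18],
                     size=(500, 4))
# Synthetic stand-in labels for "developed sepsis within N hours."
y_train = ((X_train[:, 0] > 95) & (X_train[:, 1] > 37.5)).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

ALERT_THRESHOLD = 0.6  # assumed operating point; real systems tune this

def flag_for_review(snapshot: list[float]) -> bool:
    """Flag the patient for chart review if predicted risk exceeds threshold."""
    risk = model.predict_proba([snapshot])[0, 1]
    return risk >= ALERT_THRESHOLD

# A tachycardic, febrile patient would likely be flagged.
print(flag_for_review([118, 39.1, 14.5, 95]))
```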

Generative AI, meanwhile, has the potential to extract structured insights from messy, mixed-format data. Mass General Brigham, an integrated health system based in Boston that encompasses several area hospitals, has collaborated with researchers across 10 institutions to build a generative AI model to help cancer patients receiving immunotherapy. These treatments can be lifesaving, but they also cause severe side effects that are difficult for a busy doctor to detect because the information that identifies them is buried in lengthy, unstructured clinical notes. AI can synthesize these notes to detect adverse events as soon as they appear, without waiting for episodic human review.
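In rough form, that surveillance pattern looks like the sketch below: prompt a model to return structured findings with verbatim evidence quotes, then keep only findings whose quotes actually appear in the note. The llm_complete function is a hypothetical stand-in for whatever model endpoint a health system uses; nothing here reflects Mass General Brigham’s actual pipeline.

```python
# Illustrative GenAI adverse-event screen (not Mass General Brigham's actual
# pipeline). `llm_complete` is a hypothetical stand-in for the LLM endpoint.
import json

PROMPT_TEMPLATE = """\
You are reviewing a clinical note for a patient receiving immunotherapy.
Return JSON: {{"adverse_events": [{{"event": "...",
"evidence": "<verbatim quote from the note>"}}]}}

Note:
{note}
"""

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in the model endpoint actually in use."""
    raise NotImplementedError

def screen_note(note: str) -> list[dict]:
    """Ask the model for structured findings with verbatim evidence quotes,
    then keep only findings whose quotes really appear in the note, a
    simple guard against fluent-but-wrong output."""
    raw = llm_complete(PROMPT_TEMPLATE.format(note=note))
    findings = json.loads(raw)["adverse_events"]
    return [f for f in findings if f["evidence"] in note]
```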

Where AI can falter

Yet despite these advantages, Kellogg’s research shows that AI implementation often falters.

“Developers and organizational leaders often focus on improved quality, reduced costs, and increased revenue,” she explains. “What they sometimes forget are the frontline health care workers who have to implement these solutions.”

Kellogg focuses on strategies that make AI implementation more likely to succeed. One is protecting clinician autonomy. In the Duke sepsis-detection project, nurses used a predictive-AI tool to flag potential cases, but physicians resisted it, feeling overridden. The solution was procedural, not technical: nurses protected doctors’ autonomy in several ways. For example, when the AI tool flags sepsis, a nurse first conducts a chart review on the patient. This minimizes interruptive alerts to the ER doctor while ensuring that the doctor always makes the diagnosis and places the orders, Kellogg explains.
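A minimal sketch of that routing logic, with illustrative roles and names rather than Duke’s implementation, might look like this:

```python
# Minimal sketch of the alert-routing workflow (roles and names illustrative,
# not Duke's implementation): the AI flag goes to a nurse for chart review
# first, and the physician is notified only if the nurse concurs.
from dataclasses import dataclass

@dataclass
class SepsisFlag:
    patient_id: str
    risk_score: float

def nurse_chart_review(flag: SepsisFlag) -> bool:
    """Placeholder for the nurse's manual review of the flagged chart."""
    raise NotImplementedError

def notify_physician(flag: SepsisFlag) -> None:
    """Notify the ER doctor, who always makes the diagnosis and places orders."""
    raise NotImplementedError

def route_flag(flag: SepsisFlag) -> None:
    if nurse_chart_review(flag):   # nurse screens out false alarms first
        notify_physician(flag)     # doctor retains diagnostic authority
    # If the nurse does not concur, the physician is never interrupted.
```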

Generative AI introduces a different set of challenges. One is its fluency.

“AI sounds so good, even when it’s wrong,” Kellogg says. “That makes it especially difficult for novices to detect when its output is incorrect.”

One strategy is shifting responsibility away from individual clinicians and onto health care systems. At one system at the forefront of GenAI implementation, developers address these hazards by screening potential projects with a risk score. The score weighs technological risk tied to GenAI’s capability gaps and the unpredictability of its strengths and weaknesses (its so-called jagged frontier) alongside operational, ROI, and regulatory risks. For example, clinicians in one department proposed using generative AI to translate patient consent forms into several languages. Developers determined that the technical risk was too high: while the generative AI available at the time could translate between languages, it could also miss important linguistic nuances, creating legal liability.
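Kellogg’s account doesn’t specify the rubric’s mechanics, but a screening score of this kind could be as simple as the sketch below. The four dimensions come from her description; the scales, cutoff, and approval logic are assumptions.

```python
# Hypothetical project-screening rubric of the kind described above: each
# proposed GenAI use case is scored on risk dimensions before development.
# The dimensions follow the article; scales, cutoff, and logic are assumed.
RISK_DIMENSIONS = ("technological", "operational", "roi", "regulatory")
MAX_TOTAL = 10  # assumed cutoff

def screen_project(scores: dict[str, int]) -> bool:
    """Score each dimension 1 (low risk) to 5 (high); approve only if the
    total stays under the cutoff and no dimension is maximally risky."""
    total = sum(scores[d] for d in RISK_DIMENSIONS)
    return total <= MAX_TOTAL and max(scores.values()) < 5

# The consent-form translation proposal: feasible, but high technological
# and regulatory risk (missed nuance creates legal liability) -> rejected.
print(screen_project({"technological": 5, "operational": 2,
                      "roi": 2, "regulatory": 5}))  # False
```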

Because clinicians often resist AI systems when they can’t tell where the output comes from, Kellogg is especially interested in explainability. With Mass General Brigham’s generative AI system for cancer patients, clinicians don’t just receive an adverse-event alert; they see exactly which parts of the medical record triggered it.

“The developer’s AI system provides clinicians not only with the answer—‘the patient is having an adverse event’—but also highlights all the parts of the notes that led the AI to come to that conclusion,” she explains.
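One simple way to implement that kind of highlighting, assuming the model returns verbatim evidence quotes as in the earlier sketch, is to map each quote back to character offsets in the note. This is illustrative code, not the developer’s actual interface.

```python
# Sketch of the explainability pattern: return the character spans in the
# note that support each finding, so a front end can highlight them.
def locate_evidence(note: str, quotes: list[str]) -> list[tuple[int, int]]:
    """Map each verbatim evidence quote to its (start, end) offsets in the note."""
    spans = []
    for quote in quotes:
        start = note.find(quote)
        if start != -1:
            spans.append((start, start + len(quote)))
    return spans

note = "Day 14 of nivolumab. Patient reports watery diarrhea x3 days and new rash on trunk."
print(locate_evidence(note, ["watery diarrhea x3 days", "new rash on trunk"]))
# -> [(37, 60), (65, 82)]: the offsets a UI would highlight for the clinician
```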




What if AI gets too good?

But what happens if AI becomes too good at its job? Could providers grow too reliant on these systems, slipping into automation complacency? Not if Kellogg can help it.

“If AI gets so good, the clinicians using it could just begin accepting its outputs without really interrogating and validating them. There’s a lot of work going on now on things we can do to prevent automation complacency, such as asking the doctor to make an assessment before being given the AI recommendation,” she explains.
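A minimal sketch of that “assess first” guardrail, with hypothetical names, might gate the AI recommendation behind a committed clinician assessment:

```python
# Sketch of the "assess first" guardrail (names hypothetical): the AI
# recommendation is withheld until the clinician commits an independent
# assessment, preventing anchoring and making disagreements auditable.
from dataclasses import dataclass

@dataclass
class Case:
    patient_id: str
    ai_recommendation: str
    clinician_assessment: str | None = None

def reveal_recommendation(case: Case) -> str:
    """Release the AI's recommendation only after the clinician has recorded
    their own assessment."""
    if case.clinician_assessment is None:
        raise PermissionError("Record your own assessment before viewing the AI's.")
    if case.clinician_assessment != case.ai_recommendation:
        log_disagreement(case)  # disagreements routed for review
    return case.ai_recommendation

def log_disagreement(case: Case) -> None:
    """Placeholder: send clinician-vs-AI disagreements to a QA queue."""
    ...
```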

One of the many places MIT is tackling these issues is the MIT Sloan course 15.311 Organizational Processes. Kellogg and her team have written new cases and developed teaching materials on implementing predictive and generative AI and on reskilling workers for the future of work. Newly added class sessions focus on increasing the benefits of AI solutions for workers, protecting the autonomy of workers expected to use these solutions, designing the solutions for explainability, and upskilling workers to use them effectively.

“We want our students to be able to go out into the world and have a positive impact on the future of work,” she says.

The life-and-death world of health care makes this mission especially urgent, she says.

“There’s time pressure. There’s performance pressure. There’s a high degree of responsibility and accountability in these settings,” Kellogg says.

That reality is also what inspires her.

“I’m motivated by helping workers and organizations navigate technological change in ways that elevate worker voice, improve care and decision-making, and create innovation processes that actually succeed in messy, human contexts,” she says.