Inside the project to make AI explainable for all, and why it matters 

Most explainable AI is made by engineers, for engineers. But what about doctors, patients, policymakers, or everyday users? Upol Ehsan’s pioneering, human-centered work flips the script to make AI explainable for all.

by Milton Posner

This story is the second of a three-part feature on Upol Ehsan and his research. The others detail his work on algorithmic imprints and his efforts to facilitate the development of a national AI strategy in Bangladesh. 

The potential applications of AI in health care are numerous and undeniable. But what happens when, say, an AI assistant relays a cancer diagnosis and its reasoning lies buried in numbers the doctor can’t untangle? 

Upol Ehsan, who’s worked closely with oncologists, has seen AI’s power outpace its clarity. Understanding is power in an AI-driven world, and when the AI’s explanations are incomprehensible to lay users, the human–AI divide deepens. 

Ehsan’s solution, which he has pioneered since 2020, is human-centered explainable AI (HCXAI), an approach that strives to make AI decisions clear and accessible to expert and nonexpert users alike. Now an assistant professor and faculty lead for responsible AI governance and policy at Khoury College, Ehsan is on a mission to make AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. 

“HCXAI is a sociotechnical way of looking at the field that puts the human at the center,” Ehsan explains. “It’s a radical departure from the AI-first approach in that it aligns with what the human factor of explainability actually is. It democratizes explainable AI beyond experts and extends it to users.” 

This explainability hinges on actionability — having the information to hold an AI system accountable — and contestability, the ability to ask the system “why” and challenge its outputs. 

“You cannot have responsible AI without accountability, and you cannot have accountability without explainability,” Ehsan says. “If it’s a deep, feature-level, back-end explanation, for most lay users, that’s not actionable or contestable.” 

Ehsan’s work in this area began when explainable AI (XAI) was reborn as a field around 2017. He started with a landmark paper that was the first to make an AI agent think aloud in plain English. The paper also introduced rationale generation, a foundational concept that allows large language models like ChatGPT to reason aloud and explain themselves. 

But whether humans would find the AI’s explanations plausible was still an open question. At the time, XAI was primarily algorithm-centered, and there were no methods to evaluate these systems from a human-centered perspective. 

This blind spot became a turning point for Ehsan’s work, and it shaped his award-winning 2020 work “Towards Human-centered Explainable AI: A Reflective Sociotechnical Approach,” which coined “human-centered XAI” and helped define the field. The paper bridged AI, human–computer interaction, and critical computing, shifting the conversation from “how to open the black box” to “who opens it.” 

“It was time we treated explainability for what it really is — a human factor, not just an algorithmic property,” Ehsan explains. “Who the human is matters. Riders of self-driving cars need different explanations than the engineers who built them.” 

Ehsan’s HCXAI research has generated more than 25 peer-reviewed publications. Among them, his award-winning, oft-cited CHI paper “Expanding Explainability” introduced the idea of “social transparency,” which challenged conventional wisdom by showing that AI systems can be made more understandable without changing the underlying model itself. Another pioneering paper, “The Who in XAI,” was the first to examine how users with and without AI backgrounds interpret explanations, revealing that both groups overtrusted them, but for different reasons. Later studies identified “explainability pitfalls” — AI explanations from well-intentioned designers that still produce negative effects — and offered an actionable framework for designers and policymakers to close the gap between AI’s technical design and real-world use. 

While most of the field chases seamless AI, Ehsan is leaning into its imperfections with “seamful explainable AI.” Rather than presenting AI as flawless and explaining only why it does what it does, seamful XAI acknowledges that mistakes are inevitable, asks where and why the AI fails, and seeks to make those flaws useful. Ehsan likens it to Wi-Fi. 

“Think about where the dead zones are in your home,” he says. “Knowing where the signal drops helps you avoid those spots and make better decisions. AI is no different. It performs unevenly, and recognizing those weak spots can empower users.” 

Ehsan and his collaborators developed a three-step decision framework that guides users through identifying breakdowns, diagnosing their causes, and transforming those limitations into strengths. Their approach is already gaining traction; several Fortune 500 companies use it to stress-test their generative AI tools, and the National Institute of Standards and Technology has incorporated it into its globally adopted responsible AI framework. Ehsan’s research as a whole has informed the work of numerous health care, finance, and cybersecurity companies, and has shaped AI policies from leading global institutions including the United Nations. 

But creating knowledge is only half the battle; sustaining a new field requires building and nurturing a community. In 2021, Ehsan spearheaded the creation of the flagship HCXAI workshop at ACM CHI. Since its inception, he has served as lead organizer, helping to grow the workshop into one of the longest-running in the conference’s history. It has welcomed more than 400 researchers, practitioners, and policymakers from almost two dozen countries. 

Ehsan after speaking at CHI 2024 

“My scholarship is important to me, but my pride and joy is the vibrant HCXAI community,” Ehsan reflects. “I’m constantly inspired by researchers who have taken the mission further than I could have imagined. Just recently, I saw that a new PhD position in HCXAI had launched at KU Leuven in Belgium. It is deeply rewarding to see the field come into its own.” 

The CHI workshop’s fifth edition was held this year in Japan, with Khoury College Assistant Professor Saiph Savage joining Ehsan on the organizing team — cementing Northeastern’s role in shaping the future of human-centered AI. 

“I’m thrilled to bring HCXAI to its new home,” Ehsan says. “Northeastern has some of the world’s best minds in human-centered computing. I cannot wait to collaborate and keep pushing to make AI serve humanity, not the opposite.” 
