Khoury News
Khoury professor earns Harvard Berkman Klein fellowship to research harms of dead AI systems
Upol Ehsan’s pioneering research discovered that ending an AI system does not end its harms. His algorithmic imprint work is now evolving—backed by a prestigious fellowship—to reshape how we define AI safety.

This story is the first of a three-part feature on Upol Ehsan and his research. The others detail his work on human-centered explainable AI and his efforts to facilitate the development of a national AI strategy in Bangladesh.
Anyone who’s ever contested an exam grade knows how frustrating it can be when a teacher won’t admit a mistake. But when the grader is an algorithm that can’t speak at all, it’s another matter entirely.
Such was the case in 2020 when the globally administered General Certificate of Education A-level exam, an entry qualification at many universities, was graded by an algorithm. As the algorithm’s deeply flawed scoring and the subsequent regrading drew criticism and protests from students in the UK, Upol Ehsan, an assistant professor at Khoury College and the faculty lead for the college’s initiative on responsible AI governance and policy, was talking to his former tutoring students about their experiences.
He had taken the GCE exams himself. He understood their inner workings. And two questions kept looming in his mind.
Why were students in the UK receiving nearly all of the media attention when the exams were administered all over the world, including across the Global South? And more importantly, why were students still furious even after their scores had been revised, supposedly free of the algorithm's influence?
These questions prompted Ehsan to embark on a year-and-a-half journey of investigative journalism and rigorous research. After hundreds of informal conversations and interviews with students, teachers, and caregivers in Bangladesh, he announced the discovery of the “algorithmic imprint” at ACM FAccT in 2022.
Now Ehsan is taking that work to the next phase as part of a highly selective fellowship at Harvard University’s Berkman Klein Center for Internet & Society (BKC), which strives to understand and tackle challenges at the nexus of computing and the social sciences.
Simply put, the algorithmic imprint is the hangover that ensues when an algorithm is discontinued but its harms remain unaddressed. Ehsan likens the algorithm to a cookie dough recipe: if the oven fails and you can’t bake the dough, you don’t get your flour and sugar back. The recipe has transformed the ingredients, just as an algorithm reshapes its inputs.
“We often think that software being editable and reversible means that its impact doesn’t last beyond its lifetime, that if you use the kill switch, all harms are done and sins are forgiven. The imprint challenges how we assess algorithmic impacts by showing an entire afterlife that we weren’t seeing,” Ehsan explains. “Harms are hard to identify and measure during an algorithm’s lifetime, and after the algorithm is gone, it’s even harder. The imprint helps us to find those harms.”

It also invites researchers to examine algorithmic harms beyond the usual monetary and property consequences.
“When you have an undignified, unjust experience with an algorithm, your lived experience matters when assessing AI harms, and that’s what we use to understand and evaluate harms,” Ehsan says.
And that means understanding harms to everyone, not just, for instance, harms to UK students whose admissions chances at elite universities were compromised by the GCE algorithm. It means understanding the impact of algorithms on countries outside of Europe, including Ehsan’s native Bangladesh.
“I was upset at the one-sided narrative and intent on making sure the Bangladeshi narrative wasn’t erased,” Ehsan recalls. “When these algorithms are made in the Global North without any input from the Global South where they’re deployed, we want to understand how they affect the people they’re deployed on.”

Everyday examples abound too. Ehsan notes the case of Stable Diffusion, a text-to-image AI model with millions of users.
“It was trained on the LAION dataset, and recently we found that the LAION dataset had instances of child pornography in it,” Ehsan says, noting that while the typical response would be to remove the dataset, “Its data still lives in Stable Diffusion, which is still running. And there’s no way to surgically remove those bad data samples.”
Ehsan has centered his BKC fellowship work on a handful of key questions: How can we view algorithmic afterlife cycles — and develop a taxonomy of their harms — like we do for up-and-running algorithms? How can we extend algorithmic impact assessments to account for these afterlives? Can we design imprint-aware, degradable algorithms instead of building them to live forever?
“The work is very conceptual and there’s not a lot of prior work to build on, so you’re building the plane as you fly it,” Ehsan says, noting the BKC’s unusual and ideal funding of open-ended, self-guided research. “These questions can’t be answered only by computer scientists; we need lawyers, policymakers, and civil society to help address these problems. BKC’s ‘cohesive diversity’ is a perfect ecosystem for this kind of work.”
And the work won’t end there. Ehsan is bringing it to Northeastern as well, hoping to leverage connections between Khoury College, the Institute for Experiential AI, and the College of Social Sciences and Humanities to address interdisciplinary challenges.
Ehsan’s work on the algorithmic imprint has already made an impact nationally and internationally. It has established post-decommissioning harms as a new category of algorithmic impact, informed amendments to the National Institute of Standards and Technology’s globally adopted responsible AI framework, and shaped AI policymaking, including algorithmic reparations, at international governing bodies like the United Nations.
“The imprint, fundamentally, is an accountability tracker; it doesn’t let AI deployers off the hook just because they decommissioned the AI system,” he says. “An imprint-aware approach will create AI systems to be more sociotechnically degradable, like biodegradable items.”