CHI 2025: Khoury researchers publish record number of papers for second straight year
The world’s most prestigious human–computer interaction conference is taking place in Japan this month, and a record number of Khoury College researchers — along with their collaborators in the College of Arts, Media and Design; the Bouvé College of Health Sciences; and the College of Science — are ready to go.
Three Khoury College papers received Best Paper honorable mentions, placing their work in the top five percent of accepted research. One explores how respecting marginalized peoples’ autonomy makes for better computer science and design research. Another investigates the unique challenges Black femmes face as online content creators. And the third charts the history – and conspiratorial implications – of the octopus map.
To discover their work, which touches on everything from realistic LLM personas to GenAI accessibility to advancements in machine knitting technology, click on one of the linked summaries below, or simply read on. For a schedule of Khoury researchers’ presentations and other activities, visit our CHI 2025 page.
- Aging into the future
- What’s going on in that AI?
- This chatbot’s sorry to hear about that
- Animals in a mechanical world
- Visualizing models and modeling bread pt. 1
- Visualizing models and modeling bread pt. 2
- AI choices from the heart
- Mother Goose and the LLM
- Playtime for everyone
- Did ChatGPT earn that A?
- AI has a quick question
- Putting trust in medical chatbots
- From escape rooms to XR
- Enchanting interactions
- Tantrums and time-outs in cyberspace
- Tricked into clicks
- Picture playtime for everyone
- Disability-centered garment design
- Your other teacher is on the screen
- But what does this graph say in Arabic?
- Let us study ourselves
- Advocacy, empathy, and an LLM
- Can AI dream of electric sheep?
- For the love of robots
- A quick little kindness
- Preventing oversharing with ChatGPT
- Counselor in e-training
- Getting healthier, together
- AI coding for developers who are visually impaired
- Cephalopods on the map
- AI-powered surrogates for real users
- Black femme creators’ invisible internet
Workshop: Technology Mediated Caregiving for Older Adults Aging in Place
Aging into the future
Elizabeth D. Mynatt, Masatomo Kobayashi, Alisha Pradhan, Niharika Mathur, John Vines, Katie Seaborn, Erin Buehler, Jenny Waycott, John Rudnik, Tamara Zubatiy, Agata Rozga
A growing number of technologies are available to help older people age comfortably, but older adults’ expectations and concerns about technology can differ from those of their younger peers. How can technology support people as they age in a way that works for them and their caregivers?
This workshop will bring together researchers, students, and practitioners working at the intersection of technology and aging to discuss technological interventions for older adults. What challenges do aging individuals and their caregivers face? What cultural norms do older people have about technology? How can systems like AI – in concert with internet of things technologies, robotics, and collaboration tools – support, empower, and scaffold caregiving in old age?
“This workshop is a valuable discussion to assess progress in the field, and to especially understand how the role of technology supports for aging in place varies culturally across the world,” said Khoury Dean Elizabeth Mynatt, the workshop’s organizer.
Workshop: New Frontiers of Human-Centered Explainable AI (HCXAI)
What’s going on in that AI?
Upol Ehsan, Elizabeth A. Watkins, Philipp Wintersberger, Carina Manger, Nina Hubig, Saiph Savage, Justin B. Weisz, Andreas Riener
AI can do some amazing things, but far too often, we have no idea why the AI did what it did. Without “the why,” we can’t hold the AI accountable when things go wrong. And that’s a problem. Enter Human-Centered Explainable AI (HCXAI), a field whose goal is to make AI understandable to everyone, not just the software engineers.
Now in its fifth year, the HCXAI workshop has become both the flagship forum for Human-Centered XAI and one of CHI’s longest-running workshop series. This year, the focus is on participatory civic AI, LLM hallucinations, responsible AI, and Global South challenges in AI explainability. Since 2021, over 400 experts from 19 countries and 9 sectors have joined the workshop to push the state of the art in AI explainability. This year’s cohort will build on that foundation.
Khoury Assistant Professor and lead organizer Upol Ehsan said this year’s goal is “to strengthen this global XAI community trying to build a future where everyone can experience AI with dignity.” Khoury Assistant Professor and co-organizer Saiph Savage added that “civic AI is a timely focus area because it’s about ensuring AI serves humanity, not the other way around.”
AI on My Shoulder: Supporting Emotional Labor in Front-Office Roles with an LLM-based Empathetic Coworker
This chatbot’s sorry to hear about that
Vedant Das Swain, Qiuyue “Joy” Zhong, Jash Rajesh Parekh, Yechan Jeon, Roy Zimmerman, Mary P Czerwinski, Jina Suh, Varun Mishra (+Bouvé), Koustuv Saha, Javier Hernandez
Helpline workers, health care workers, hotel front desk staff, and other client-service representatives (CSRs) help all of us, but when we’re confused, frustrated, or stressed, not all of us are very nice. Resolving hundreds of client complaints can be draining, and even traumatic when those clients are aggressive. Some have suggested AI replace CSRs, but could AI help them instead?
These researchers designed and tested an LLM-powered assistant called Care-Pilot, which generates supportive, actionable messages to help CSRs reframe and regulate their emotions while working with a difficult client. Some CSRs even found Care-Pilot more sincerely empathetic and action-oriented than a human coworker they might vent to.
“We need to expand our view of the role these AI agents can play to ensure human performance is healthy and sustainable,” said Khoury Distinguished Research Fellow Vedant Das Swain.
Animals' Entanglement with Technology: A Scoping Review
Animals in a mechanical world
Rébecca Kleinberger (+CAMD), Lena Ashooh, Keavan Farsad, Ilyena Hirskyj-Douglas
The world is becoming ever more technological, and not just for human beings. From parrots making video calls to wildlife encountering farming systems, animals are also navigating a world increasingly filled with technology. How are animals using technology, and how can we design better systems for these creatures?
These researchers with the INTERACT Animal Lab reviewed nearly 800 works, seeking to track the trends, gaps, and ethical considerations in play regarding animals’ use of tech. After discovering that most systems treat animals as subjects, rather than users, they suggested using feedback, empirical testing, and projected animal benefits to improve animal–technology interactions.
“Studying how different species engage with technology challenges our assumptions about interaction design and can lead to more inclusive technologies for all users—human and non-human alike,” said Khoury and CAMD Assistant Professor Rébecca Kleinberger.
Bridging Modeling and Domain Expertise Through Visualization: A Case Study on Bread-Making with Bayesian Networks
Visualizing models and modeling bread pt. 1
Omi Johnson, Melanie Munch, Kamal Kansou, Cedric Baudrit, Anastasia Bezerianos, Nadia Boukhelifa
How does machine learning work? Most people don’t know. But scientists within the EVAGRAIN project — which uses Bayesian network models of the bread-making process to examine climate resilience in the French wheat industry — need the help of bread-making experts to check their work. So, how do you teach an old domain expert new computer science?
This research team developed a visualization platform to help domain experts explore and critically analyze Bayesian networks. Using an interactive chart of edges, ring-shaped nodes, and dynamic propagation animations, their platform showed potential for helping non-computer scientists understand Bayesian models. It also taught the researchers valuable lessons in how to use visualization to make complex models accessible to experts across industries, improving the models themselves while fostering more trust in, and use of, the tools scientists create.
“In an era where more and more of our decisions are guided by machine learning and AI, we should be trying our utmost to keep people informed on how these models work,” said Khoury undergraduate Omi Johnson.
Explaining Complex ML Models to Domain Experts Using LLM & Visualization: An Exploration in the French Breadmaking Industry
Visualizing models and modeling bread pt. 2
Briggs Twitchell, George Katsirelos, Anastasia Bezerianos, Nadia Boukhelifa
Remember how most expert bakers don’t understand Bayesian networks? In their case study, these researchers tried a different tactic to bring them up to speed.
Using an LLM-powered chatbot, this research team attempted to explain Bayesian networks — which describe how different elements in a system relate to or cause one another — to an expert baker already familiar with how things in the breadmaking industry are interrelated. Domain experts (like bakers) are crucial for checking whether models are correct, but they can only do that when they understand how the models work in the first place. The researchers found their approach worked well in some respects, but current LLMs still fall short in many ways when explaining complex computer science models.
“Our goal is to make machine learning insights accessible to everyone, not just statisticians. Clear explanations empower users to confidently make decisions informed by ML,” said Briggs Twitchell, who earned his master’s in computer science from Khoury College in 2024.
CardioAI: A Multimodal AI-based System to Support Symptom Monitoring and Risk Prediction of Cancer Treatment-Induced Cardiotoxicity
AI choices from the heart
Siyi Wu, Weidan Cao, Shihan Fu, Bingsheng Yao, Ziqi Yang, Changchang Yin, Varun Mishra (+Bouvé), Daniel Addison, Ping Zhang, Dakuo Wang (+CAMD)
When you’re already sick from cancer and chemotherapy, it’s tough to tell life-threatening heart damage from any other weird new pain. Add in the fact that symptoms often occur when already overworked clinicians aren’t around, and it becomes extremely difficult to decide when to evaluate for treatment-induced cardiotoxicity.
This research team proposed CardioAI, a tool that uses the cancer patient’s wearable device data and the clinician’s own words to help evaluate the patient’s risk of cardiotoxicity. This study brought 11 clinicians into the design process for CardioAI to discuss how they evaluate cardiotoxicity risk without AI and how to design a tool that would supplement their decisions, rather than decide for them.
Characterizing LLM-Empowered Personalized Story Reading and Interaction for Children: Insights From Multi-Stakeholders' Perspective
Mother Goose and the LLM
Jiaju Chen, Minglong Tang, Yuxuan Lu, Bingsheng Yao, Elissa Fan, Xiaojuan Ma, Ying Xu, Dakuo Wang (+CAMD), Yuling Sun, Liang He
Anyone who’s ever tried reading to a child knows that the actual reading is only part of what’s going on. Kids want to talk about the stories, guide them forward and backward, slow them down, speed them up, imagine new endings, and interact with them in every other way they can imagine. Can a large language model (LLM) read a child a story in the personal, interactive way they love so much?
These researchers designed and developed StoryMate, which aims to do just that. Then, they gathered a group of children, parents, and education experts to see how it worked. While the participants enjoyed how personalized StoryMate was, they also suggested valuable design tweaks like guiding mechanisms and interactive interfaces to improve similar tools in the future. Readers can also try out StoryMate for themselves.
“Given the increasing role of AI in education and family interactions, understanding how LLM-driven tools can support parent-child reading is crucial for designing effective and ethical digital learning experiences,” said Khoury-affiliated research assistant Jiaju Chen.
Cultivating Computational Thinking and Social Play among Neurodiverse Preschoolers in Inclusive Classrooms
Playtime for everyone
Maitraye Das (+CAMD), Megan Tran, Amanda Chi-han Ong, Julie A. Kientz, Heather Feldner
Children in the 21st century need to grow up prepared to think about computers. How do you make sure that happens when not all children think in the same way?
Based on interviews with teachers about how to support neurodiverse preschoolers, this research team deployed an age-appropriate, programmable robot called KIBO in preschool classrooms with neurotypical and neurodivergent children and analyzed their interactions. The team found that neurodivergent preschoolers (e.g., those with autism or ADHD) enjoyed learning computational thinking using KIBO, and played with it alongside neurotypical peers and adults.
“This research provides evidence-based strategies for inclusive computational learning that can shape classroom practices and technology design, and ultimately improve educational outcomes and social development for all children,” said Khoury Assistant Professor Maitraye Das.
Examining Student and Teacher Perspectives on Undisclosed Use of Generative AI in Academic Work
Did ChatGPT earn that A?
Rudaiba Adnin, Atharva Pandkar, Bingsheng Yao, Dakuo Wang (+CAMD), Maitraye Das (+CAMD)
If ChatGPT wrote an essay with your name on it, did you cheat? College students and teachers often disagree, and academic policies are struggling to keep up.
This research team surveyed and interviewed over 100 students and teachers who had experienced generative AI use in the classroom to find out when and why they use the tools, and how they choose whether to be upfront about it. They found that students using generative AI often don’t disclose their use, even when they believe it should be disclosed. They also looked at ways teachers can start more open conversations and encourage transparency around the use of AI.
“The use and misuse of generative AI isn't just a technological issue; it’s a cultural and ethical one,” said Khoury Distinguished Fellow Rudaiba Adnin. “If institutions fail to understand how and why students use generative AI, they risk falling behind in creating fair, transparent, and effective educational environments.”
Feasibility and Utility of Multimodal Micro Ecological Momentary Assessment on a Smartwatch
AI has a quick question
Ha Le, Veronika Potter, Rithika Lakshminarayanan, Varun Mishra (+Bouvé), Stephen Intille (+Bouvé)
AI models need accurate, relevant, and labeled training data. But is there a way to ensure that the model asks for data in a way that doesn’t irritate its human users?
This research team tested whether participants would answer a single question — were they sitting, standing, or moving around? — every five minutes for a week, responding either verbally or with a tap on a smartwatch screen. They found that providing different ways to answer the question encouraged participants to respond to almost three quarters of the prompts.
“To build computers that can respond naturally to our everyday experiences, we need new ways to gather labeled training data about those experiences as people go about their lives,” said Khoury PhD student Ha Le. “If successful, we think the methods will be useful as we try to design computers that can naturally react to the activities we do every day.”
Framing Health Information: The Impact of Search Methods and Source Types on User Trust and Satisfaction in the Age of LLMs
Putting trust in medical chatbots
Hye Sun Yun, Timothy Bickmore
When we’re injured or sick, our first stop often isn’t the doctor’s office; it’s the internet. But not all online health information is true, and we aren’t always discerning, especially when the information is delivered in a way we trust.
These researchers compared how trusting and satisfied users were when the same health information was delivered by a search engine, a normal chatbot, or a Chatbot+ that could retrieve information from outside sources. A majority of users were most trusting and satisfied with either the normal chatbot or Chatbot+, although a few preferred search engines for their broader range of information. People also tended to be satisfied with information from social media posts and trust it as much as information from reputable health websites, until they were put side-by-side.
“While much of the existing research has focused on the accuracy and content of chatbot responses, we wanted to explore a different angle — how does the design of search interactions themselves shape users' perceptions of information quality?” said Khoury PhD student Hye Sun Yun.
From Locked Rooms to Open Minds: Escape Room Best Practices to Enhance Reflection in Extended Reality Learning Environments
From escape rooms to XR
Erica Kleinman, Rana Jahani, Eileen McGivney, Seth Cooper, Casper Harteveld (+CAMD)
People learn best when they’re prompted to reflect on what they’re learning. But popping up a prompt in the middle of an educational extended reality (XR) session makes it feel a lot less real. Luckily, there’s already an industry that’s mastered the art of giving clues and feedback without breaking immersion: escape rooms.
This research team interviewed 13 escape room game masters about how they infuse reflective hints and prompts into complex, immersive problem-solving environments. Using iterative open coding, they identified best practices — like the importance of adaptable interventions that can change based on what learners need — for giving subtle nudges without breaking immersion.
“The results of this work can provide valuable insights to designers and developers of XR learning to create more effective and enjoyable learning experiences,” said postdoctoral researcher Erica Kleinman. “And as escape room enthusiasts ourselves, the research team also really enjoyed learning about how game masters do their jobs.”
GenieWizard: Multimodal App Feature Discovery with Large Language Models
Enchanting interactions
Jackie (Junrui) Yang, Yingtian Shi, Chris Gu, Zhang Zheng, Anisha Jain, Tianshi Li, Monica Lam, James A. Landay
Multimodal systems — systems that you can interact with using methods like voice, gestures, and typing — are more flexible, efficient, and adaptable than systems that can only be used in a single way. Unfortunately, that flexibility also makes them harder to develop and design. If only some kind of wizard or genie could predict all the different ways that people will try to interact with a new tool!
This research team developed GenieWizard, an LLM-powered tool that imagines potential user interactions and suggests features to support them. The team had twelve developers test GenieWizard and found that those who used the tool identified and implemented about four times as many missing interaction features as those who didn’t.
In Suspense About Suspensions? The Relative Effectiveness of Suspension Durations on a Popular Social Platform
Tantrums and time-outs in cyberspace
Jeffrey Gleason, Alex Leavitt, Bridget Daly
When people break online platforms’ community standards, they often get temporarily suspended — the same logic as putting a toddler in time-out. But after receiving a suspension, do the suspended users change their behavior?
This research team analyzed two groups of several thousand Roblox users who had earned suspensions of varying lengths, examining whether their behavior changed after the suspension. They found that longer suspensions were more effective than shorter ones, particularly for first-time offenders. However, they also found that the effect wanes over time.
“Behavioral consequences are a critical tool that digital platforms use to respond to community standard violations and moderate their online spaces. However, there is limited causal evidence about the effectiveness of digital consequences,” said Khoury PhD student Jeffrey Gleason. “We use large field experiments on Roblox to shed light on these questions.”
Inaccessible and Deceptive: Examining Experiences of Deceptive Design with People Who Use Visual Accessibility Technology
Tricked into clicks
Aaleyah Lewis, Jesse J Martinez, Maitraye Das, James Fogarty
Navigating certain websites without accidentally buying something you don’t want, signing up for something you don’t need, or clicking on a suspicious link can feel like picking your way through a minefield, thanks to a practice called “deceptive design.” For visually impaired people who rely on technologies like screen readers, avoiding deceptive design is even tougher.
This research team followed 16 people who use visual accessibility technology as they interacted with deceptive design. They found that websites designed without accessibility in mind could present a second set of inadvertent deceptions on top of those designs intended to deceive. They also proposed tools and feedback that could help well-intentioned designers address deceptive patterns in their work.
“Our study exposes how deceptive patterns disproportionately affect millions of people who rely on visual accessibility technology, and create additional costs that can exclude them from fully participating in digital spaces,” said Khoury Assistant Professor Maitraye Das.
Incloodle-Classroom: Technology for Inclusive Joint Media Engagement in a Neurodiverse Kindergarten Classroom
Picture playtime for everyone
Kiley Sobel, Maitraye Das, Sara M Behbakht, Julie A. Kientz
It’s important that disabled and nondisabled children can play and learn together, so that both groups can grow up into compassionate and understanding adults. What kinds of technology can help them do that?
These researchers developed the tablet app Incloodle, which prompts children to share stories and take pictures together. Tested in a kindergarten classroom of neurodivergent and neurotypical children, Incloodle facilitated play between the two groups, and gathered valuable information on how children with different neurotypes interact. This helped the researchers to better understand how accessible technology can make engaging with media together inclusive, positive, and fair.
“Our research demonstrates how thoughtfully designed technology can transform classroom dynamics to promote genuine inclusion and equitable participation, and provide valuable social–emotional learning opportunities that benefit both neurodiverse and neurotypical children,” said Khoury Assistant Professor Maitraye Das.
KnitA11y: Fabricating Accessible Designs with Machine Knitting
Disability-centered garment design
Tongyan Wang, Hanwen Zhao, Yusuf Shahpurwala, Megan Hofmann, Jennifer Mankoff
Pretty much everyone knows the frustration of one-size-fits-all fashion, and for people with disabilities, it can make finding garments that fit their unique needs extremely challenging. Mass-manufactured clothing rarely accommodates assistive devices or sensory needs.
This research team introduced KnitA11y, a digital machine knitting pipeline where users can add accessibility features to knit garments and automatically knit them on industrial knitting machines. The interactive design interface lets knitters visualize their patterns to better customize their changes, and supports modifications such as holes, pockets, straps, and handles. The team has successfully used KnitA11y to make a sensory-friendly scarf with a pocket, a hat with a hole for assistive devices, a sock with a pull handle, and a mitten with a pocket for heating pads to alleviate Raynaud’s symptoms.
Live-Streaming-Based Dual-Teacher Classes for Equitable Education: Insights and Challenges From Local Teachers' Perspective in Disadvantaged Areas
Your other teacher is on the screen
Yuling Sun, Jiaju Chen, Xiaomu Zhou, Xiaojuan Ma, Bingsheng Yao, Kai Zhang, Liang He, Dakuo Wang
For a long time, where you lived determined what kind of education you could get. In recent years, though, some schools have experimented with class structures in which a remote subject-matter expert and a local teacher in the classroom work together to deliver lessons. So, does it work?
This research team examined live-streaming-based dual-teacher (LSDC) classes in disadvantaged regions of China and found that implementing the high-quality resources they offered came with practical challenges. When the teacher in the room put the remote teacher’s lectures into a local context, the students tended to understand the lessons best. The researchers encouraged schools considering LSDC classes to recognize and support the crucial contributions of their local teachers, ensuring that the classes are equitable, sustainable, and effective.
Lost in Translation: How Does Bilingualism Shape Reader Preferences for Annotated Charts?
But what does this graph say in Arabic?
Anjana Arunkumar, Lace M. Padilla, Chris Bryan
A good graphic can fly around the world in the blink of an eye, but the assumptions baked into its design don’t move so easily. Do people fluent in multiple languages interpret and trust the same charts in the same way?
These researchers had more than 1,000 bilingual people review charts with different levels of annotation, both in English and in either Arabic or Tamil. They found that more annotation created better understanding, but that participants reading in Arabic or Tamil tended to prefer minimal or more narrative annotations, even though they preferred more thorough annotations when reading in English.
“We often think of translation as a checkbox for accessibility, but language shapes how we process, trust, and even emotionally respond to information. As someone who grew up bilingual, I’ve felt this firsthand,” said Khoury postdoctoral research associate Anjana Arunkumar. “This work is about making sure data visualizations meet people where they are, not just where they’re expected to be.”
Moving Towards Epistemic Autonomy: A Paradigm Shift for Centering Participant Knowledge
Let us study ourselves
Leah Ajmani, Talia Bhatt, Michael Ann DeVito
The idea is simple: living through something makes you an authority on it, and respecting that epistemic autonomy benefits computer science and design.
These researchers describe the importance and benefits of epistemic autonomy, the surprisingly novel principle that researchers should respect the rights of people who’ve experienced marginalization to govern knowledge about themselves. They demonstrate the principle firsthand with two of the authors, both trans women, sharing nuanced insights based in their own epistemic autonomy. They also discuss the harms that occur when researchers try to solve complex problems without listening to the people those problems affect.
“You shouldn’t need a PhD and a tenure-track job to be able to push back on people willfully misunderstanding or ignoring your on-the-ground community knowledge,” said Khoury and CAMD Assistant Professor Michael Ann DeVito. “If we really want to make ‘computing for all’ happen, it’s not just about who gets to have a degree in CS. It’s about how all of us respect the input of the folks our systems actually impact.”
Persona-L has Entered the Chat: Leveraging LLMs and Ability-based Framework for Personas of People with Complex Needs
Advocacy, empathy, and an LLM
Lipeipei Sun, Tianzi Qin, Anran Hu, Jiale Zhang, Shuojia Lin, Jianyan Chen, Mona Ali, Mirjana Prpa
If you ask an LLM to create a persona of someone with complex needs like Down syndrome, the result is often a cringe-inducing mishmash of stereotypes and oversimplifications. Along with being insulting, this shortcoming can cause cascading problems as LLMs take on a larger role in fields like education and training.
These researchers present Persona-L, a novel approach that uses an ability-based framework to create a wider range of LLM personas focused on abilities rather than disabilities. Persona-L lets users create and chat with characters with a variety of accurately represented complex needs. Evaluations with UX designers found Persona-L could help them better understand and empathize with those users.
“While access to people and their insights is preferred when gathering requirements for building new technologies, sometimes that type of contact may not be feasible, or even ethical,” said Khoury Assistant Teaching Professor Mirjana Prpa. “We aim to build an open-source platform for creating interactive, synthetic personas that can be available at any time, and that can provide wider context to the questions that arise in the design process, in UX practice and HCI classrooms.”
Proactive Conversational Agents with Inner Thoughts
Can AI dream of electric sheep?
Xingyu Bruce Liu, Shitao Fang, Weiyan Shi, Chien-Sheng Wu, Takeo Igarashi, Xiang 'Anthony' Chen
If a computer could think, what would it think about? In the library’s worth of science fiction books that explore the question, the computers are often smooth and lifelike conversationalists, and these researchers believe having thoughts might be the reason they can talk like that.
This research team presents the Inner Thoughts framework, in which an AI runs a continuous, covert train of thought in parallel with its overt communication. When the test models had something they were motivated to express, rather than simply reacting to their human conversational partners, they talked more proactively, coherently, and intelligently, in a more appropriate and lifelike way.
“Existing chatbots are more passive, always waiting for the user to give instructions, which can be a bit frustrating during the conversation. With inner thoughts, they can follow the lead in the conversation, pick up topics mentioned a while back, and chime in at the right moment,” said Khoury Assistant Professor Weiyan Shi. “This research paves the way for AI systems that can anticipate user needs and participate meaningfully in conversations.”
Promises, Promises: Understanding Claims Made in Social Robot Consumer Experiences
For the love of robots
Johanna Gunawan, Sarah Elizabeth Gillespie, David Choffnes, Woodrow Hartzog, Christo Wilson
Human beings are social creatures who will bond with everything from pets to houseplants to Siri. As smart devices get better and better, what responsibilities do manufacturers have to help buyers manage the pains and promises of emotional attachment to their products?
These researchers examined the manufacturer claims, on-device experiences, and consumer reviews for four commercially available social robots to find out whether they deliver on their promises. They found that makers’ promises, consumers’ experiences, and actual robot capabilities all varied widely. They also identified characteristics that can make a social robot or its marketing riskier for its users, which the industry could avoid as the tech develops further.
“It’s all about reducing the risk of consumer harm,” said Johanna Gunawan, who earned her PhD from Khoury College in 2024, and current Khoury PhD student Sarah Gillespie. “Studying an emerging market can help mitigate future problems or inform safer design as these technologies gain popularity.”
Promoting Prosociality via Micro-acts of Joy: A Large-Scale Well-Being Intervention Study
A quick little kindness
Hitesh Goel, Yoobin Park, Jin Liou, Darwin A. Guevarra, Peggy Callahan, Jolene Smith, Bingsheng Yao, Dakuo Wang, Xin Liu, Daniel McDuff, Noemie Elhadad, Emiliana Simon-Thomas, Elissa Epel, Xuhai "Orson" Xu
Acting kind and helpful — or in other words, prosocial — makes you happier and healthier. The science is well established, but the question that remains is how to get people to actually do it.
These researchers deployed BIGJOY, a global study in which more than 18,000 participants each spent a week doing daily “micro-acts” — easy, brief actions psychologically designed to promote prosociality and joy. The resulting data provides insight into how the impacts of micro-acts of joy vary across wildly diverse populations, and into the potential of small acts of prosociality to improve human well-being at scale. The research team hopes their work can provide a stepping-stone to more large-scale interventions promoting a more unified and compassionate world.
Rescriber: Smaller-LLM-Powered User-Led Data Minimization for LLM-Based Chatbots
Preventing oversharing with ChatGPT
Jijie Zhou, Eryue Xu, Yaoyao Wu, Tianshi Li
We frequently tell chat-based LLMs more about our health, finances, and personal struggles than we mean to, risking our privacy in ways we may not fully grasp. But what can we do to keep our personal data safe from our own instinct to share?
These researchers developed Rescriber, a browser extension to help users identify and remove personal data from their chatbot prompts. A group of twelve test users found Rescriber helped them to reduce unnecessary disclosure, protect their privacy, and feel more secure.
“Even though current large language models have layers of protection to avoid leaking data or getting attacked, and some even say they don’t use your data for training — let’s be honest, you never really know,” said Khoury visiting researcher Jijie Zhou. “That’s why preventing oversharing at the source is the smartest move.”
Scaffolding Empathy: Training Counselors with Simulated Patients and Utterance-level Performance Visualizations
Counselor in e-training
Ian Steenstra, Farnaz Nouraei, Timothy Bickmore
It takes practice and feedback to learn to be a good counselor, and the cost and availability of people to play mock patients is a big barrier for trainees. Could LLMs help to fill the gap?
These researchers developed an LLM-powered training system that simulates the cognitive model of a person dealing with alcoholism, with whom trainees could practice skills like motivational interviewing and empathy in a safe, repeatable environment. The application can also provide detailed visual feedback, even on specific words the counselor uses.
“Our work demonstrates that LLMs can effectively simulate alcohol misuse patients and serve as a tool to evaluate counselor performance within a training system,” said Khoury PhD student Ian Steenstra. “It suggests a viable approach for social skills training and opens up avenues for future HCI research into automated, data-driven training systems.”
Socio-Cognitive Framework for Personal Informatics: A Preliminary Framework for Socially-Enabled Health Technologies
Getting healthier, together
Herman Saksono, Andrea G. Parker
If you and the people around you think you can do something you want to do, you’re more likely to do it. So, if you want people to exercise, making sure they’re in a community that can help visualize and support their success is a great way to make that happen.
Building on seven years of research, this team developed the Socio-Cognitive Framework for Personal Health Informatics Systems, a theoretical foundation for developing effective health devices that can work for people on limited incomes. Their framework identifies five socio-cognitive concepts — aspirations, data exposure, stories, belongingness, and impediments — that systems should consider when reimagining health as a social project instead of an individual effort.
“Preventive health behavior is often very hard, and personal informatics tools that rely on individual efforts put a lot of burden on individuals,” said Khoury and Bouvé Assistant Professor Herman Saksono. “Many social barriers such as low wages, housing instability, job instability, and workplace demands can truly limit people in living a healthy life. When we develop health technologies, we should think about how these technologies can counter such social barriers.”
The Impact of Generative AI Coding Assistants on Developers Who Are Visually Impaired
AI coding for developers who are visually impaired
Claudia Flores-Saviaga, Benjamin V. Hanrahan, Kashif Imteyaz, Steven Clarke, Saiph Savage
AI coding tools don’t work quite right if you can’t see what you’re doing. But what particular problems do coders with visual impairments experience, and how could they be fixed?
This research team asked developers who are visually impaired to complete programming tasks using an AI coding assistant. While the participants reported many advantages, they also highlighted accessibility challenges, like the way AI suggestions became excessive and overwhelming when they all had to be read aloud by a screen reader. The researchers suggest “AI timeouts,” a novel concept to help coders who are visually impaired avoid fatigue when using AI assistants.
“Our research findings not only inform design recommendations for building coding assistants for developers who are visually impaired, but also provide valuable insights for designing agentic AI systems that preserve user autonomy and can be effectively designed for diverse user groups,” said Khoury PhD student Kashif Imteyaz.
The Many Tendrils of the Octopus Map
Cephalopods on the map
Eduardo Puerta, Shani Claire Spivak, Michael Correll
Which came first, the octopus map or the conspiracy theory? Their histories have been twisted together for well over a hundred years; cartographers have wrapped tentacles around everything from the Ottoman Empire to the boroughs of New York City to evoke a sinister, grasping central force threatening the existing way of life.
In a paper that grew out of a pandemic hobby, the research team explored the history of the octopus map and the ways that the visual metaphor of an octopus can encourage conspiratorial interpretation. They also found that certain features in data or visual rhetoric can encourage “octopus-like” thinking. The team encouraged designers to carefully analyze how their cephalopod visualizations could contribute to conspiratorial thinking.
“Octopus maps are conspiracy catnip: they let the designer portray a bunch of seemingly unconnected stuff as, in their mind, the sinister machinations of a single overarching threat,” said Khoury Associate Research Professor Michael Correll. “Our results show that there’s a lot of potential power (and so, danger and responsibility) that comes with designing any sort of map or chart. It’s not just a problem for propagandists and political cartoonists.”
UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design
AI-powered surrogates for real users
Yuxuan Lu, Bingsheng Yao, Hansu Gu, Jing Huang, Zheshen Jessie Wang, Yang Li, Jiri Gesi, Qi He, Toby Jia-Jun Li, Dakuo Wang
Before launching a new feature or web design, companies need to test how real users will react — but finding people, running the tests, and analyzing results takes time and effort. Worse, if the study itself is poorly designed, it might not give useful answers.
The researchers behind UXAgent created a new way to solve this problem. Instead of relying on real people from the start, their system uses LLM agent-powered “simulated users” to test websites first and gather early feedback. These LLM agents behave like real people and can provide both quantitative data (such as what they clicked) and qualitative data (like why they made certain choices). Readers can try the tool here.
“As a researcher, the worst nightmare is realizing your study is flawed a week before the deadline. But there’s no good way to test a usability study before running it—so I’m exploring how LLM agents can help us simulate and refine study designs early on,” said Khoury PhD student Yuxuan Lu.
Why Can’t Black Women Just Be?: Black Femme Content Creators Navigating Algorithmic Monoliths
Black femme creators’ invisible internet
Gianna Williams, Natalie Chen, Michael Ann DeVito, Alexandra To
Many people earn a living on social media, but Black femme content creators know that the extra scrutiny and suppression their content receives makes that harder for them than for most. Little research has tried to substantiate their experiences — until now.
This research team interviewed 11 Black femme content creators to find out how they experience social media content moderation, what they do to resist it, and what folk theories they have about TikTok’s algorithm. They found that social media algorithms tend to present a monolithic view by amplifying only images of Black suffering, and argued that future HCI researchers should center Black joy to more fully understand the experiences of Black online communities.
“We hope this research can name the steadily growing trends Black content creators have been arguing against and resisting on their feeds, and lead to more solutions to aiding Black online communities,” said Khoury PhD student Gianna Williams.