Nazanin Andalibi, currently a Scholar-in-Residence at Digital Futures, is an Associate Professor at the University of Michigan’s School of Information and an influential scholar in critical social computing and human–computer interaction. Her research examines how sociotechnical systems—especially AI and social media—shape experiences of marginality, privacy, and justice. With major funding from the National Science Foundation and publications in leading venues such as ACM CHI and ACM CSCW, Andalibi’s work has influenced both academic debates and technology policy.
During her time at Digital Futures, hosted by Kia Höök at KTH Royal Institute of Technology, she is focusing on a forthcoming book about the ethical, social, and political implications of emotion AI.
In the interview below, Andalibi discusses her work on AI, marginality, and justice.
What motivated your research focus on marginality and justice in sociotechnical systems, particularly in relation to AI and social media?
– My motivation has always been deeply rooted in my personal values and aspirations, particularly a commitment to equity, care, and amplifying voices that are too often overlooked. When I talk about marginality, I’m referring both to marginalized identities and to marginalized experiences.
Some of my earlier and ongoing work has focused on reproductive health and reproductive justice, including experiences such as pregnancy loss and abortion. These experiences not only carry social stigmatization but are also unevenly distributed across lines of identity such as race, gender, and socioeconomic status. Attending to these differences has been central to how I approach research: not just asking what technologies do, but for whom they do not work, and whose needs and realities are excluded.
This is what ultimately drives my focus on justice in AI and social media. These systems are not neutral; they encode values, priorities, and assumptions. My work is about making these dynamics visible and, importantly, about imagining and advocating for more just alternatives. At its core, I find both purpose and joy in learning from people’s lived experiences and in working toward sociotechnical futures that better reflect and support the full spectrum of those experiences.
Your work critically examines emotion AI. What are the most significant ethical or societal risks you see in systems that attempt to infer human emotions?
– There are many concerns, depending on what the tool actually does, how it is used, and who it is used on. That said, recurring themes include:
- Questionable validity: These systems assume emotions can be reliably inferred from signals like facial expressions or voice, but emotional expression is contextual, culturally shaped, and often strategically managed, especially in stigmatized or sensitive situations.
- Techno-solutionism: Emotion AI is often framed as a technological fix for complex social problems (e.g., worker well-being, fair hiring), but it can obscure root causes and exacerbate inequities—and even the very problems its proponents claim to address.
- Surveillance and power: The use of emotion AI extends monitoring into intimate domains, often without meaningful consent, shifting interpretive authority from individuals to institutions and raising concerns about control, accountability, and misuse.
- Intensified emotional labor: The use of emotion AI can pressure people—especially those already marginalized (e.g., women and disabled people)—to perform “acceptable” affects, amplifying long-standing inequitable expectations around emotional labor in workplaces and on platforms.
Across these concerns, the broader issue is how emotion AI reshapes agency and power.
You’ve received substantial funding and recognition, including a CAREER award from the National Science Foundation. How has this support shaped the direction and impact of your research?
– I feel very fortunate to have received this support, and I’m deeply appreciative of it. Awards like the NSF CAREER have enabled me to train and mentor PhD students and postdoctoral researchers, and to invest in the kind of careful work that this area requires. This support has also made it possible to share our work more broadly through conferences and collaborations, and to compensate participants for their time and expertise.
During your residency at Digital Futures, what kinds of collaborations or conversations are you most excited about—especially with scholars working on feminist and justice-oriented approaches to technology?
– I’ve already had the opportunity to engage in some really meaningful collaborations during my time at Digital Futures. I recently co-led a workshop titled “Designing Care, Refusing Harm: Feminist Red and Green Lines for AI in Gendered Health,” which brought together scholars and practitioners working on related topics. It was a fantastic space, full of thoughtful, critical, and generative conversations. We’re planning to continue that work and share outputs more broadly.

I’m especially excited about connecting with researchers working on gendered and chronic health, pain, and identity. One of my projects on endometriosis and technology aligns closely with much of the work happening across Digital Futures, KTH, and the broader Swedish research community. There’s a real depth of expertise and commitment to feminist and critical research here, and I find it both energizing and inspiring.
If anything, the challenge is that there’s so much relevant and exciting work happening that I won’t have enough time to engage with all of it as deeply as I’d like. But that also speaks to the strength of the community, and I’m really grateful to be part of it.
Your forthcoming book explores AI systems that claim to predict human affect. What key insights or arguments can readers expect, and why is this topic particularly urgent right now?
– I was hoping you wouldn’t ask about it. I am very much still “in the mess” of writing, which, as my editor reminds me, is normal for anyone who has ever written a book. That said, the book examines what’s at stake when technologies claim to infer or predict human affect—qualities that are neither easily measurable nor fully knowable. I explore the politics of these systems and show how they carry social and ethical consequences. At the same time, I consider what alternative approaches we might imagine at the intersection of affect and technology.
This work feels especially urgent now, given the rapid expansion of AI, including generative AI, into everyday life. Affective sensing and interactions are increasingly integrated into platforms and decision-making systems, shaping how people are seen, evaluated, and managed. The book is, in many ways, an invitation to pause and critically reflect on what it means for technologies to engage with something as complex and deeply human as emotion.
Want to learn more? A recorded presentation from Nazanin Andalibi’s seminar, “Emotion AI: Utopian Promises, Dystopian Realities,” can be found here.