When Responsible AI Meets the Human Sensorium in Barcelona
The RAI@CHI meet-up at CHI 2026 in Barcelona (April 13-17) brings together international HCI researchers to establish priorities for responsible AI development. Organized by institutions including IBM Research, University of Nottingham, and Google DeepMind, the event uses an interactive town hall format to address value-sensitive design, explainability, and democratic principles in AI. The gathering represents a deliberate effort by the CHI community to center human values in an era of accelerating AI deployment.
The conversation about responsible AI has moved beyond position papers and into the architecture of how researchers actually convene.
The Scene in Barcelona
Stand in the Centre de Convencions Internacional de Barcelona this week and notice what's different. The usual conference choreography – badge-scanning, coffee queues, the polite jostling for seats near power outlets – carries a different charge. RAI@CHI 2026, a meet-up hosted by RAI UK (Responsible AI UK), has drawn researchers from São Paulo to Austin, from Glasgow to Cairo, into a single room to ask a question that sounds simple but isn't: What does the CHI community actually want to do about responsible AI?
The ACM CHI Conference on Human Factors in Computing Systems (CHI, pronounced "kai") has always occupied a peculiar position – positioned, as the organizers note, "at the intersection of technology, people, society, and values." This year's Barcelona gathering, running April 13-17, 2026, makes that intersection visible in ways that matter for anyone building, regulating, or investing in AI systems.
The Architecture of the Conversation
What makes RAI@CHI worth attention isn't just its topic but its format. The meet-up employs what the organizers call an "interactive town hall" structure: community standups, democratic topic selection, and structured roundtable discussions. This is design as methodology – an acknowledgment that how researchers convene shapes what they can think together.
The organizing committee reads like a map of where responsible AI research actually happens. Neelima Sailaja from the University of Nottingham's Mixed Reality Laboratory. Simone Stumpf from the University of Glasgow's School of Computing Science. Rishub Jain from Google DeepMind. Heloisa Candello from IBM Research in São Paulo. Neha Kumar from Georgia Tech. Min Kyung Lee from the University of Texas at Austin.
The programme committee extends this geography further – Barcelona Supercomputing Center, King's College London, Eindhoven University of Technology, the Artificial Intelligence Research Institute in Barcelona, and researchers from Egypt, Malaysia, and beyond. The "Across Borders" in the title isn't decorative.
What's Actually Being Discussed
The meet-up's stated outcomes include "network building, knowledge sharing, and concrete collaboration opportunities." But the real work lies in the framing: establishing priorities for "the next generation of RAI research that centers human values, promotes fairness, and advances democratic principles."
This language matters. "Democratic principles" isn't a phrase that appears casually in technical research agendas. It signals a particular understanding of what responsible AI requires – not just ethical guidelines or technical fixes, but attention to who gets to participate in shaping these systems.
The broader CHI 2026 programme provides context for this conversation. Sixty-nine accepted workshops address everything from "AI CHAOS! 2nd Workshop on the Challenges for Human Oversight of AI Systems" to "Ethics at the Front-End: Responsible User-Facing Design for AI Systems" to "Participation, Procurement & Proof of Impact in Public Sector AI Innovation."
Notice the pattern: oversight, ethics, participation, procurement. These aren't abstract concerns. They're the operational questions that policymakers, public sector technologists, and governance scholars face daily.
The IBM Research Thread
IBM Research's involvement in RAI@CHI connects to a broader pattern of human-centered AI work. At CHI 2025, IBM's Human-Centered Trustworthy AI team presented research on abstraction alignment, LLM evaluation interfaces, and what they called "Responsible Prompting Recommendation" – a system designed to promote responsible AI practices during the prompting process itself.
Heloisa Candello, who serves on the RAI@CHI organizing committee and as a senior research scientist at IBM Research Brazil, co-authored work on "Emerging Data Practices: Data Work in the Era of Large Language Model." That research examined how uncertainty, data practices, and reliance mechanisms shift across the development cycle of LLMs – the kind of empirical grounding that responsible AI frameworks require.
The Wider Landscape at CHI 2026
Thirty-two meet-ups have been accepted for CHI 2026, and the list reveals what the HCI community considers urgent. "AI and the Self: Exploring Identity, Agency, and Relational Personhood." "Planetary boundaries and data in HCI." "Neurodiversity Meet-Up @ CHI: Building a Neuro-Affirming Community in HCI."
Georgia Tech's presence at CHI 2026 includes nine award papers – two Best Papers and seven Honorable Mentions – with research spanning from "Localized Imaginaries, Global Assets: Sociotechnical Imaginaries and the Assetization of Data Centers in Singapore" to "Promise or Peril? Exploring Black Adults' Perspectives on the Use of Artificial Intelligence in Health Contexts."
The latter, a Best Paper, examined how AI in health can help or hinder care for minority populations. This is responsible AI as lived experience, not abstraction.
What Gets Naturalized
The question that hovers over RAI@CHI – and over CHI 2026 more broadly – is what becomes normal. Every interface encodes assumptions about who users are, what they want, and what they're capable of deciding. Every AI system embeds choices about whose values count.
A collection of CHI 2026 preprints compiled by researcher Daniel Buschek reveals the texture of current concerns: "The Siren Song of LLMs: How Users Perceive and Respond to Dark Patterns in Large Language Models." "When Stereotypes GTG: The Impact of Predictive Text Suggestions on Gender Bias in Human-AI Co-Writing." "Interaction Context Often Increases Sycophancy in LLMs."
These aren't edge cases. They're the default behaviors that emerge when systems are built without sustained attention to human values.
The Diagnostic Moment
RAI@CHI represents something specific: a community attempting to establish shared priorities before those priorities are established for them by market forces, regulatory pressure, or technological momentum. The meet-up's town hall format – with its democratic topic selection and structured roundtables – is itself a statement about how responsible AI governance might work.
For policymakers, the gathering offers a window into what researchers consider tractable and urgent. For investors, it signals where the field's attention is moving. For public sector technologists, it provides frameworks for thinking about procurement and implementation. For governance scholars, it demonstrates how technical communities organize themselves around normative questions.
The artifacts produced – the networks built, the priorities established, the collaborations initiated – will shape what responsible AI means in practice over the coming years. Barcelona this week isn't just hosting a conference. It's hosting a negotiation about what counts as human-centered in an era when that phrase could mean almost anything.
Frequently Asked Questions
Q: What is RAI@CHI 2026?
A: RAI@CHI 2026 is a meet-up hosted by RAI UK (Responsible AI UK) at the ACM CHI Conference on Human Factors in Computing Systems in Barcelona, April 13-17, 2026. It brings together international HCI researchers to establish priorities for responsible AI development through an interactive town hall format.
Q: Who is organizing the RAI@CHI meet-up?
A: The organizing committee includes researchers from the University of Nottingham, University of Glasgow, Google DeepMind, IBM Research Brazil, Georgia Tech, and the University of Texas at Austin, with a programme committee extending to institutions such as King's College London and the Barcelona Supercomputing Center. Key organizers include Neelima Sailaja, Simone Stumpf, Rishub Jain, and Heloisa Candello.
Q: How does the RAI@CHI meet-up format work?
A: The meet-up uses an interactive town hall format featuring community standups, democratic topic selection, and structured roundtable discussions. Participants do not need to submit materials in advance, and the session runs for up to 90 minutes.
Q: What topics does RAI@CHI 2026 address?
A: The meet-up covers value-sensitive design, human-centered AI, ethics, explainability, trustworthy AI, sustainability, and equitability. The goal is to establish priorities for responsible AI research that centers human values and advances democratic principles.
Q: How many workshops are at CHI 2026?
A: CHI 2026 has accepted 69 workshops covering topics from AI oversight challenges to participatory data governance. Workshop sessions run either 90 minutes (short) or two 90-minute sessions with a break (long).
Q: What is IBM Research's involvement in responsible AI at CHI?
A: IBM Research contributes through organizing committee member Heloisa Candello and through research on human-centered trustworthy AI, including work on LLM evaluation, abstraction alignment, and responsible prompting recommendations presented at previous CHI conferences.