Understanding Bias in AI - a wide lens

The symposium is meant for researchers from various disciplines with an interest in Responsible AI.

When
7 May 2026, from 1 p.m. to 5:45 p.m. CEST (GMT+0200)
Where
CWI, Science Park 123, Amsterdam

The event is jointly organized by CWI and Maastricht University to strengthen collaborations regarding responsible AI across the country. The symposium is in the context of an extended research visit of Nava Tintarev to CWI. She is a Full Professor in Explainable AI at Maastricht University in the Department of Advanced Computing Sciences (DACS). Read more about our visiting programme.

About the symposium

In the symposium 'Understanding Bias in AI' we intend to adopt a broad, systemic lens that sees AI as a component within a complex sociotechnical system involving institutions, actors and processes. The lifecycle of an AI system is shaped by institutional and operational contexts that influence decisions at each step: 1) the selection and curation of training data, 2) the methodologies by which models are trained and validated, 3) how the system is deployed (e.g., on which instances and for which tasks), and finally, 4) the manner in which its outputs are communicated to and interpreted by users.

At each step of this lifecycle, bias may take root, propagate, or be inadvertently amplified. With this symposium, we hope to create a space for critical, cross-disciplinary reflection on how bias manifests across the AI lifecycle, and the interplay between human and machine biases.

Organizers

The organizers are Laura Hollink (CWI, Utrecht University) and Nava Tintarev (Maastricht University).

Program 7 May

12.45-13.00 Walk-in and registration
13.00-13.10 Introduction by Laura Hollink
13.10-14.00 Keynote by Nava Tintarev, Whom are you Explaining to, and Why?
14.00-14.30 Break
14.30-16.00 Presentations:
- Hilde Weerts - Decoding algorithmic fairness
- Delfina Martinez Pandiani - Tracing Bias? Following Glitches in Iterative Image Generation
- Mark Keane - How Real is My Bias: Cognitive Biases Need Robust Expression & Verified Mechanisms
- Flor Miriam Plaza del Arco - tbd
16.00-17.00 Plenary discussion (on the basis of statements)
17.00-17.45 Drinks

Registration is free but required; please register here for Understanding Bias in AI - a wide lens.

Speakers' bios and abstracts

Prof. Nava Tintarev is a Full Professor in Explainable AI at Maastricht University in the Department of Advanced Computing Sciences (DACS).

Her interdisciplinary work bridges computer science and human-centered evaluation, focusing on making AI systems more transparent and increasing user control. Prof. Tintarev specializes in developing interactive explanation interfaces for recommender systems and search technologies, emphasizing user empowerment and decision support.

She is lab director of the ICAI TAIM lab, which works on trustworthy AI in media. She was a founding principal investigator of ROBUST, an €87 million Dutch national initiative advancing trustworthy AI. Her research has also been supported by major organizations, including IBM, Twitter, and the European Commission.

Prof. Tintarev was recognized as a Senior Member of the ACM in 2020, and her team has received multiple best paper awards for research contributions at CHI, CHIIR, Hypertext, UMAP, and HCOMP. She is active in Dutch research policy as chair of the round table for informatics and as a board member of the ICT-Research Platform Netherlands (IPN).

Bio
Hilde Weerts is an assistant professor at Eindhoven University of Technology, where she researches the challenges that come with putting responsible AI into practice, particularly in relation to algorithmic fairness and non-discrimination. Her research is highly interdisciplinary, at the intersection of computer science and law. She is also one of the maintainers of Fairlearn, an open-source, community-driven project aimed at helping data scientists assess and improve the fairness of AI systems.

Decoding algorithmic fairness

Abstract: The rapid adoption of algorithmic decision-making systems in high-impact domains as varied as public administration, healthcare, and finance has raised concerns regarding infringements of fundamental rights, such as the right to non-discrimination. Over the past decade, computer scientists have responded with a growing number of fairness-aware machine learning (fair-ml) methods, intended to ensure that individuals who are affected by the predictions of a machine learning model are treated fairly. Traditionally, these methods frame fairness as an optimization task, where the objective is to achieve high predictive performance under some quantitative fairness constraint. Unfortunately, in practice, the vast majority of these approaches fail to actually produce less discriminatory algorithms. In this talk, I will dive deeper into the reasons the current 'fair machine learning' paradigm has failed to produce meaningful paths towards non-discriminatory algorithms. Considering "fairness" through the lens of European Union non-discrimination law, I will argue that a shift in focus is needed. In particular, one deceptively simple question, central to the protection of fundamental rights, remains strikingly underexplored: do algorithms actually work?
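To make the "optimization under a quantitative fairness constraint" framing concrete, below is a minimal Python sketch using the reductions API of Fairlearn, the open-source project mentioned in the bio above. The synthetic data, feature names, and the choice of a demographic-parity constraint are illustrative assumptions, not part of the talk.

# Minimal sketch: fairness-aware learning framed as constrained optimization,
# using Fairlearn's reductions API. Data and constraint choice are
# illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import MetricFrame

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({"feature_1": rng.normal(size=n), "feature_2": rng.normal(size=n)})
sensitive = rng.choice(["group_a", "group_b"], size=n)  # hypothetical protected attribute
y = (X["feature_1"] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Maximize predictive performance subject to a quantitative fairness constraint.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),  # the quantitative constraint
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred = mitigator.predict(X)

# Per-group accuracy: the constraint narrows gaps between groups, but by itself
# says nothing about whether the model "actually works" for its intended task.
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=y_pred,
                    sensitive_features=sensitive)
print(frame.by_group)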

Abstract
This talk starts where technical discussions of bias in AI often end: with the ways model outputs are interpreted by users. Viral glitch trends in generative image models - such as prompting loops that ask for an “exact replica” of a photo of a white woman and gradually produce a drift across genders, races, and bodies - circulate on social media as evidence of racial or “woke” bias in supposedly neutral systems.

These reactions locate bias primarily in visible outputs and reactionary narratives, and rely on assumptions that identity categories are stable and violated when crossed. Using these sequences as a case study, the talk explores what, exactly, we mean when we describe such behaviours as “bias,” and develops a queer, technically grounded reading of glitches in synthetic images as moments where human categorical reasoning encounters machinic statistical generalization in high-dimensional latent spaces.

Working backwards from synthetic outputs through the AI lifecycle, Pandiani treats iterative generations as a (sometimes informal and sometimes highly technical) diagnostic tool to analyze how visible glitches emerge from interactions between deployment choices and interface design (prompting patterns, viral circulation), model training and validation (latent space interpolation, gradient-based optimization, iterative refinement), and the selection and curation of training data (including platform-specific aesthetics such as a “sepia bias” and entrenched representational stereotypes).

In her talk, Pandiani argues that many patterns labelled as ideological intent are better understood as the intersection of dataset regularities, model architectures, and human expectations shaped by sociotechnical contexts. Rather than viewing bias as a single property located at one stage, she proposes bias as a layered phenomenon spanning data, models, deployment, and public discourse, and shows how overextending the term “bias” can obscure both the technical mechanisms and the human–machine interplay through which these generative systems actually produce their outputs.

Bio
Mark Keane has a PhD in Cognitive Psychology from Trinity College Dublin (TCD, 1987) and a BA in Psychology from University College Dublin (UCD, 1982). After postdocs at Queen Mary University of London and the Open University, he was a Lecturer in Psychology at Cardiff University and a Lecturer in Computer Science at TCD (FTCD, 1994) before taking up the Chair of Computer Science at UCD in 1998. He was Director of ICT (2004-2006) and Director General (2006-2007) of Science Foundation Ireland (SFI), where he oversaw a €700M research investment. He was an advisor to the Irish Government on its €3.7B Strategy for Science, Technology & Innovation (SSTI). From 2007-2009, he was VP of Innovation & Partnerships at UCD.

He has published 200+ articles and 20+ books that have attracted 20,000+ citations (including a Cognitive Psychology textbook with M.W. Eysenck, now in its 8th edition with 6 foreign-language editions). Keane’s work spans the cognitive sciences and AI, studying analogy and metaphor, creativity, surprise, understanding and explanation. His recent work on eXplainable AI (XAI) is inherently interdisciplinary, as it develops novel AI explanation strategies that are then tested in psychological experiments (across recent papers published in AAAI, IJCAI, IUI, AIES, AIJ and KBS).

His group has won several best paper awards for work in Case Based Reasoning (IJCAI-95, EWCBR-96, ICCBR-95) and example-based XAI (ICCBR-19, ICCBR-20, ICCBR-21, ICCBR-22). He was founding director of the Smart Media Institute (1999) at UCD, deputy director of the €40M Vistamilk SFI Research Center (2018) and is currently a PI in Insight Centre for Data Analytics.

Abstract
In cognitive psychology, many well-worn biases have been discovered, often from programs of research carried out over decades, as in the classic thinking biases such as availability, anchoring, confirmation and so on. Most of these biases have been extended to situations in which people use AI systems or need to have such systems explained to them (as in XAI).

In many of these cases, the algorithmic/heuristic basis of the bias is well-specified, modelled, and robustly generalises to new contexts. However, in some cases, the identified bias may be quite superficial; the bias behaviour may look the same but actually reflect several distinct cognitive mechanisms. Furthermore, even when the cognitive mechanism is well-specified, the expression of the bias may not be robust; that is, minor modifications to the context/instructions/items can alter the expression of the bias or even remove it altogether.

I consider a few cases where this type of situation arises in (i) people’s predictions of future, unexpected events (which show a strong Negativity Bias) and (ii) people’s choice of recommended links to search-engine queries (which appear to show a Positional Bias). In such situations it is critical to attempt to explicitly disconfirm the proposed mechanism with careful user testing, in order to assess the bias’s robustness or, indeed, determine whether it is “real” at all (otherwise, we ourselves just end up committing a confirmation bias).

tba