Dashboard to expose bias in library AI systems

Researchers from the Human-Centered Data Analytics (HCDA) group are collaborating with software companies to develop a dashboard that reveals bias in library recommender systems. The project has received €123,900 in funding from the SIDN Fund and Topsector ICT.

Publication date
23 Jun 2025

Recommender systems increasingly shape our digital choices – from the books we read to the music we listen to and the news we consume. They provide personalised suggestions based on reading or listening behaviour. However, these AI systems are not as neutral as they may seem. They have been criticised for reinforcing stereotypes or creating filter bubbles.

Libraries place great importance on public values such as inclusivity and are therefore cautious in adopting automated recommendation tools. Concerns about hidden biases in such systems have led to a need for greater transparency. In response, the HCDA group is working with software company Bookarang BV, developer Simon Dirks, and the National Library of the Netherlands (KB) to create a diagnostic dashboard. This tool will analyse borrowing behaviour and recommended book lists, with specific attention to potential biases against certain groups of authors. The ultimate goal is for libraries and ICT companies to use the dashboard to make their digital services more transparent, fair, and human-centred.
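To give a simplified impression of the kind of check such a dashboard could perform, the sketch below compares how often books by different author groups appear in recommendation lists with those groups' share of the catalogue. Everything here is an illustrative assumption – the data, the group labels, and the metric are hypothetical and do not describe the actual DiBiLi dashboard or KB data.

```python
"""Illustrative sketch (not the DiBiLi dashboard): compare the exposure of
author groups in recommendation lists against their share of the catalogue.
All identifiers and data below are hypothetical."""

from collections import Counter

# Hypothetical catalogue: book id -> author group label.
catalogue = {
    "b1": "group_a", "b2": "group_a", "b3": "group_b",
    "b4": "group_b", "b5": "group_b", "b6": "group_a",
}

# Hypothetical recommendation lists produced by some recommender, per user.
recommendations = {
    "user1": ["b1", "b2", "b6"],
    "user2": ["b1", "b6", "b2"],
    "user3": ["b2", "b1", "b3"],
}

def group_share(book_ids, mapping):
    """Fraction of books per author group in a collection of book ids."""
    counts = Counter(mapping[b] for b in book_ids)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Share of each group in the catalogue: the baseline exposure.
catalogue_share = group_share(list(catalogue), catalogue)

# Share of each group across all recommended slots: the observed exposure.
recommended_ids = [b for recs in recommendations.values() for b in recs]
recommended_share = group_share(recommended_ids, catalogue)

# Groups recommended far more or less often than their catalogue share
# would suggest are flagged for closer inspection.
for group, expected in catalogue_share.items():
    observed = recommended_share.get(group, 0.0)
    print(f"{group}: catalogue {expected:.2f}, recommended {observed:.2f}, "
          f"ratio {observed / expected:.2f}")
```

Comparing recommended exposure against catalogue share is only one possible signal; a diagnostic dashboard would typically combine several such measures and relate them to borrowing behaviour as well.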

About the project

The project, DiBiLi (Diagnosing Bias in Library Recommender Systems), builds on previous work within the Cultural AI Lab, including Savvina Daniil’s PhD research on responsible recommendation systems. It also aligns with the KB’s broader digital transformation, which includes the development of a personalised online library platform.

From CWI, HCDA group leader Laura Hollink and Savvina Daniil are involved in the dashboard project.
"In DiBiLi, we will have the chance to apply my research in a practical setting and contribute to KB's digital transformation”, Danii says. “I am grateful to be able to continue working at HCDA and collaborate with interesting partners who have an expertise in library systems and diagnostic tools."

About the funding

The funding comes from the “Responsible AI in Practice” call, a joint initiative of the SIDN Fund and Topsector ICT. This call focuses on developing practical frameworks, conditions, and design principles for responsible AI and AI-based solutions. These solutions may target a specific sector or societal challenge. Project proposals had to be submitted by a research institute in collaboration with at least one company. Out of 39 submissions, 10 projects were selected, including DiBiLi.

About the researchers

Laura Hollink

Laura Hollink specialises in responsible AI for the culture and media sectors. Examples of recent research include: measuring fairness and diversity in recommender systems; examining biased (colonial) terminology in knowledge graphs; and discovering bias in the output of generative AI.

Hollink leads the Human-Centered Data Analytics group and is a member of the management team of CWI. She is also co-director of the Cultural AI Lab and a participant in the AI, Media and Democracy Lab.

portrait of Laura Hollink

Savvina Daniil

Savvina Daniil is a PhD student in the Human-Centered Data Analytics group. She works on bias in recommender systems: information filtering systems that provide suggestions for the items most pertinent to a particular user.

Daniil attempts to trace which parts of the optimisation process and which data characteristics bring about this bias. In 2023, she won first prize at ICT.OPEN for her poster on the source of bias in recommender systems.

Savvina Daniil with her poster prize

Header photo: Unsplash