Human-Centered Data Analytics
We investigate human-centered, responsible AI in the culture and media sectors.
How can we ensure that digital systems are inclusive, promote diversity, and can be used to combat misinformation? The HCDA group addresses these questions. Our work draws on a wide range of techniques, including statistical AI (machine learning), symbolic AI (knowledge graphs, reasoning), and human computation (crowdsourcing). By analyzing empirical evidence of how people interact with data and systems, we derive insights into how design and implementation choices affect users.
We maintain close collaborations with professionals from the culture and media sectors, as well as social scientists and humanities scholars, through the Cultural AI Lab and the AI, Media and Democracy Lab. These interdisciplinary labs provide us with opportunities to work with real data and real-world use cases.
Examples of recent research topics: measuring bias and diversity in recommender systems; examining biased (colonial) terminology in knowledge graphs; developing transparent techniques for misinformation detection.
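To make the first of these topics a little more concrete, the short sketch below (plain Python, not the group's actual method) computes a common intra-list diversity score for a recommendation slate as the average pairwise cosine distance between item embeddings. The function name, item vectors, and example slate are all hypothetical illustrations.

# Illustrative sketch (not HCDA's method): intra-list diversity of a
# recommendation slate, measured as the average pairwise cosine distance
# between item embeddings. The item vectors below are made up.
import numpy as np

def intra_list_diversity(item_vectors: np.ndarray) -> float:
    """Average pairwise cosine distance over a slate of recommended items."""
    # Normalize each item vector so dot products equal cosine similarities.
    norms = np.linalg.norm(item_vectors, axis=1, keepdims=True)
    unit = item_vectors / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T                   # pairwise cosine similarities
    n = len(item_vectors)
    iu = np.triu_indices(n, k=1)           # indices of unique item pairs
    return float(np.mean(1.0 - sims[iu]))  # distance = 1 - similarity

# Hypothetical example: four recommended items in a 3-dimensional embedding space.
slate = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.2],
    [0.0, 0.2, 0.9],
])
print(f"intra-list diversity: {intra_list_diversity(slate):.3f}")

A score near 0 means the slate consists of near-duplicate items; higher values indicate a more varied set of recommendations.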
Events
- 5th Multidisciplinary International Symposium on Disinformation in Open Online Media
- The 14th IFIP International Conference on Trust Management (IFIPTM 2023), Amsterdam, The Netherlands, 18 to 20 October 2023
Members
Associated members
Publications
Current projects with external funding
- Culturally aware AI (AI:CULT)
- Responsible Recommenders in the Public Library (PPS Koninklijke Bibliotheek)
- The eye of the beholder: Transparent pipelines for assessing online information quality