We investigate human-centered, responsible AI in the culture and media sectors.
How can we ensure that digital systems are inclusive, promote diversity, and help combat misinformation? The HCDA group addresses these questions. Our work draws on a wide range of techniques, including statistical AI (machine learning), symbolic AI (knowledge graphs, reasoning), and human computation (crowdsourcing). By analyzing empirical evidence of how people interact with data and systems, we derive insights into how design and implementation choices affect users.
We collaborate closely with professionals from the culture and media sectors, as well as with social scientists and humanities scholars, through the Cultural AI Lab and the AI, Media and Democracy Lab. These interdisciplinary labs give us the opportunity to work with real data and real-world use cases.
Recent research topics include measuring bias and diversity in recommender systems, examining biased (colonial) terminology in knowledge graphs, and developing transparent techniques for misinformation detection.
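As a concrete illustration of the first topic, the sketch below computes intra-list diversity (ILD), one common way to quantify the diversity of a recommendation list: the average pairwise cosine distance between recommended items. The feature vectors here are invented placeholders, not data or code from our projects; a real system would use learned item embeddings or catalogue metadata.

```python
# Minimal sketch of intra-list diversity (ILD) for a recommendation list.
# All data below is hypothetical and for illustration only.
from itertools import combinations
import numpy as np

def intra_list_diversity(item_vectors: np.ndarray) -> float:
    """Average pairwise cosine distance over all item pairs in one list.

    0 means every recommended item is identical in feature space;
    values near 1 indicate a highly diverse list.
    """
    pairs = list(combinations(range(len(item_vectors)), 2))
    if not pairs:
        return 0.0  # a single-item list has no pairs to compare
    distances = []
    for i, j in pairs:
        a, b = item_vectors[i], item_vectors[j]
        cosine_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        distances.append(1.0 - cosine_sim)
    return float(np.mean(distances))

# Hypothetical feature vectors for five recommended media items
# (e.g., genre or topic embeddings).
recommended = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
])
print(f"ILD: {intra_list_diversity(recommended):.3f}")
```

Measures of this kind make diversity auditable: comparing ILD across user groups or system versions reveals whether a recommender systematically narrows what some users see.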