The rapid advancements in artificial intelligence raise important questions for the cultural and media sectors, explains Hollink. “Enormous strides have been made, particularly in language models, but also in personalization, recommendation systems, and decision support. Many organizations are now asking themselves: does the way we apply these technologies align with the values and norms we hold dear?”
There are legitimate concerns that AI systems may reinforce biases, lack privacy safeguards, and reduce human oversight. “Diversity and inclusion are core pillars of the cultural sector; it is crucial that everyone feels seen. While the sector has a wealth of knowledge in this area, the technology has not yet caught up.”
Libraries
Hollink cites recommendation systems in libraries as an example. In such systems, authors who already receive many clicks become even more visible to the public, a well-known phenomenon called popularity bias. “We found that this reduces the cultural diversity of recommendations. Libraries aim to serve their readers and are not helped by systems that steer them toward a less diverse selection.”
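The feedback loop behind popularity bias can be illustrated with a minimal simulation. This is a hypothetical sketch, not the system Hollink's team studied: a toy recommender ranks items purely by accumulated clicks, users click among the items they are shown, and the early leaders attract an ever-larger share of attention.

```python
import random
from collections import Counter

random.seed(0)

NUM_ITEMS = 100  # hypothetical catalogue size
TOP_K = 10       # items shown to each user
ROUNDS = 1000    # simulated user visits

# Start with a uniform click history: every item has one click.
clicks = Counter({item: 1 for item in range(NUM_ITEMS)})

def recommend(k):
    """Rank items by click count alone (the source of the bias)."""
    return [item for item, _ in clicks.most_common(k)]

def share_of_top_k():
    """Fraction of all clicks held by the currently top-ranked items."""
    top = recommend(TOP_K)
    total = sum(clicks.values())
    return sum(clicks[i] for i in top) / total

before = share_of_top_k()
for _ in range(ROUNDS):
    shown = recommend(TOP_K)
    # The user clicks one of the shown items, weighted by popularity:
    # popular items get clicked more, which makes them more popular.
    weights = [clicks[i] for i in shown]
    chosen = random.choices(shown, weights=weights)[0]
    clicks[chosen] += 1
after = share_of_top_k()

print(f"click share of the top {TOP_K} items: {before:.0%} -> {after:.0%}")
```

Because the recommender only ever surfaces the current leaders, the top ten items' share of all clicks rises from 10% toward roughly 90% over the simulated visits, while the other ninety items never get a chance. Mitigations in the literature typically re-rank to trade some popularity for diversity.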
News media also use AI, for instance to generate headlines, short descriptions, and summaries. “Even in these ‘small’ tasks, there is often a systematic bias in the models,” says Hollink. “Consider, for example, a more negative, stereotypical portrayal of people with queer identities. Organizations need to be aware of this when deploying AI.”