December 2021. In the Netherlands, the Rutte IV cabinet is being formed. Preacher Willem-Jan van Gent compiles a list of potential cabinet members through some guesswork and shares his prediction on Twitter. Soon, the list takes on a life of its own: various media outlets mention the names on it, often omitting the fact that the list is a prediction from someone outside politics in The Hague. One of the people on the list is the mayor of Haarlem, who has to justify his potential departure to the cabinet in the city council.
This is an example of misinformation: false or misleading information that is spread without harmful intent. In addition, there are numerous examples of disinformation: the deliberate spread of misleading information, often with malicious intent. Recent examples range from fake news about COVID-19 to attempts to manipulate perceptions of the war in Ukraine or the outcome of the US presidential elections. Social media play a significant role in the dissemination of such fake news.
The Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM 2023) addresses these kinds of issues. It brings together industry professionals and practitioners from various disciplines, such as communication science, computer science, computational social science, political science, psychology, journalism, and media studies.
Topics discussed range from cross-platform campaigns and their impact (e.g. diffusion of disinformation and manipulation, hate speech), to approaches to studying misinformation, countermeasures against mis- and disinformation (censorship policies, education, legal action), and automated fact-checking and misinformation detection. Another topic addresses generative AI tools – like ChatGPT, Google Bard, and the text-to-image generator DALL-E – and disinformation.
The program of the symposium consists of six sessions, organized around general themes such as fact-checking & debunking, generative AI & fake news, and polarization & bias. Among the questions raised: how can generative AI like ChatGPT contribute to detecting fake news? And can large language models (LLMs) generate human-like opinions?
Keynote speakers are Judith Möller (professor at the University of Hamburg & Leibniz-Institute for Media Research) and postdoc Pepa Atanasova (University of Copenhagen). Möller is professor of Empirical Communication Research, with a focus on Media Use and Social Media Effects; her research centers on the effects of political communication, especially on social media. Atanasova will speak about the accountability and explainability of machine learning models for mis- and disinformation.