Truth is in the Eyes of the Machines - Symposium

This symposium is part of the Research Semester Programme on Misinformation Detection and Countering in the era of Large Language Models.

When
9 May 2025, from 9 a.m. to 6 p.m. CEST (GMT+0200)
Where
Turing Hall, CWI, Science Park 125, Amsterdam, Netherlands.

Register here before 1 May 2025 for Truth is in the Eyes of the Machines - Symposium

How do misinformation and hate speech fuel and influence each other? How can sustainable and FAIR data (Findable, Accessible, Interoperable and Reusable) be developed to independently investigate misinformation and hate speech? How effective are generative AI models at detecting and mitigating information disorders? These research questions will guide the Research Semester Programme on Misinformation Detection and Countering in the era of Large Language Models. The Programme will be organized across three days, with workshops, keynotes and breakout sessions. A symposium will take place on 9 May at CWI in Amsterdam.

Other parts of this Research Semester Programme are on 23 May (Groningen) and 20 June 2025 (Amsterdam). The organizers are Davide Ceolin (CWI, Human-Centered Data Analytics), Anastasia Giachanou (Utrecht University, Department of Methodology and Statistics) and Tommaso Caselli (University of Groningen, Jantina Tammes School).

Abstracts & bios of the speakers


Title: Multimodal Information Access Through Evaluation

Abstract: As artificial intelligence systems increasingly integrate multiple data modalities - text, images, audio, and structured data - the need for robust evaluation frameworks becomes crucial. This talk explores how evaluation campaigns contribute to advancing multimodal information access, ensuring systems are effectively assessed across diverse tasks and domains. Drawing from experience in large-scale benchmarking initiatives such as CLEF, García Seco de Herrera will discuss key lessons learned in designing evaluation methodologies for multimodal AI. The talk will also highlight open challenges, including the need to incorporate aspects of fairness, inclusivity, and explainability in future evaluation frameworks to drive the development of more reliable and ethically sound AI systems.

Bio: Dr. Alba García Seco de Herrera is a Distinguished Researcher at the UNED (National Distance University), where she leads efforts in multimodal artificial intelligence, information access, and evaluation. She serves as chair of the CLEF (Conference and Labs of the Evaluation Forum) evaluation initiative, bringing together over 200 research groups worldwide to advance large-scale benchmarking in multilingual and multimodal information access systems. Her research focuses on applying mathematical and computational methods to develop AI-driven technologies that model the complexity of the vast volumes of data available today.

Title: AI ‘News’ Content Farms Are Easy to Make and Hard to Detect

Abstract: Large Language Models (LLMs) are increasingly used as "content farm" models (CFMs) to generate synthetic text that could pass for real news articles. This is already happening even for languages that lack high-quality monolingual LLMs. Results of a case study in Italian are presented, showing that with only 40K Italian news texts from a public dataset and the first-generation Llama model, it is possible to produce news-like texts that native speakers of Italian struggle to identify as synthetic. At the same time, detecting such texts in the wild is nearly impossible, whether with methods that rely on token-likelihood information or with supervised classification. This talk highlights the need for more research on synthetic text detection, and the ongoing changes in the information ecosphere of the open web.
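As background, here is a minimal sketch of the token-likelihood family of detectors mentioned above, assuming the Hugging Face transformers library; the model choice ("gpt2") and the decision threshold are illustrative placeholders, not details from the talk:

```python
# Sketch: token-likelihood-based synthetic-text detection.
# Model and threshold are illustrative assumptions, not the talk's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_token_logprob(text: str) -> float:
    """Mean log-probability the model assigns to the tokens of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token.
    return -out.loss.item()

def looks_synthetic(text: str, threshold: float = -3.0) -> bool:
    # LLM-generated text tends to be less "surprising" to another LLM,
    # i.e. it receives a higher average log-probability than human text.
    # The threshold here is a hypothetical placeholder, not a validated value.
    return avg_token_logprob(text) > threshold

print(looks_synthetic("The quick brown fox jumps over the lazy dog."))
```

The abstract's point is precisely that such likelihood scores (and supervised classifiers trained on them) are unreliable for texts from unknown content-farm models in the wild.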

Bio: Anna Rogers is an Associate Professor in the Computer Science Department at the IT University of Copenhagen. She holds a PhD degree in Computational Linguistics from the University of Tokyo, followed by postdocs in Machine Learning for Natural Language Processing (University of Massachusetts) and in Social Data Science (University of Copenhagen). Her main research area is analysis and evaluation of pre-trained language models. Anna currently serves as an editor-in-chief of ACL Rolling Review, the peer review platform for all major NLP conferences of the Association for Computational Linguistics.

Title: Positive and Negative Aspects of Large Language Models in Tackling Online Disinformation

Abstract: Large Language Models (LLMs) have recently revolutionized many areas, and tackling online disinformation is no exception. In this keynote, Macko will dive into both the positive and negative aspects of LLMs. First, the emergence of LLMs has heightened concerns about the automatic generation and spread of disinformation. Macko will show how he identified and analyzed vulnerabilities of LLMs that can potentially be misused to generate false news articles. A specific focus will be given to the ability to personalize generated content towards specific target audiences. He will follow up by addressing whether and how such generated texts can be detected in highly multilingual settings, especially by utilizing LLMs.

Bio: Dominik Macko focuses on robust detection of multilingual machine-generated text, especially in the context of tackling online disinformation. In the past, he also worked on anomaly and intrusion detection in IP networks, and on energy efficiency and security in Internet of Things environments. He has actively participated in three EU-funded research projects, primarily in a research role. Macko has authored or co-authored over 50 publications in scientific journals and conferences, and regularly reviews submissions for renowned international conferences and scientific journals. Formerly, he was the guarantor of the Information Security study programme at the Slovak University of Technology and served as a member of its Scientific Board.

Title: Fact or Fiction: How to Spot and Debunk Misleading Content?

Abstract: Misleading content is especially dangerous to consumers of information because it is very hard to recognize. Examples of such content include false claims on social media backed up by credible (but misused) scientific publications, images taken out of their original context and paired with false claims, and misleading charts that encourage the audience to believe false statements. How can misleading claims be spotted and debunked? The talk will expose the tricks of misleading content and present some answers.

Bio: Iryna Gurevych's many accolades include being a Fellow of the ACL and an ELLIS Fellow, holding an ERC Advanced Grant, and receiving the Milner Award of the Royal Society. She is Professor of Computer Science at the Technical University of Darmstadt in Germany, and an adjunct professor at MBZUAI, UAE and INSAIT, Bulgaria. Her work in AI and natural language processing (NLP) combines a deep understanding of human language with the latest paradigms in machine learning. She has made major contributions to establishing the fields of argument mining and misinformation detection, among other topics.

Title: Citizen Perspectives and Responses to Misinformation in the Age of Generative AI

Abstract: The emergence of easily accessible generative AI (genAI) tools has sparked a debate about the potential misuse of genAI for creating and disseminating political disinformation. Even though the average citizen comes across false information less often than commonly assumed, fears around false information in general and AI-generated disinformation in particular are relatively high. This talk focuses on the perceived prevalence and threats of AI-generated disinformation in the eyes of citizens, as well as responses aimed at mitigating AI-generated disinformation, like fact-checking, content moderation and literacy interventions. To what extent are such responses effective at addressing disinformation perceptions, and could some remedies be more harmful than the problems they aim to address?

Bio: Marina Tulin is an Assistant Professor of Digital Citizenship at the Amsterdam School of Communication Research in the program group Political Communication & Journalism at the University of Amsterdam. Her research combines interdisciplinary perspectives from communication science, sociology (PhD) and psychology (BA & MA) to investigate responses to political mis- and disinformation, including fact-checking, citizen corrections, and media literacy as well as public trust in knowledge institutions like journalism and science. She is affiliated with the European Digital Media Observatory, which brings together academic researchers, journalists, fact-checkers, tool developers and media literacy experts to tackle the problem of disinformation and strengthen citizens’ resilience to various threats in the context of mis- and disinformation.

Title: Truth Knows No Language: Counteracting Misinformation Beyond English

Abstract: Truthfulness evaluations of large language models (LLMs) have primarily been conducted in English. However, the ability of LLMs to maintain truthfulness across languages remains under-explored. Our study evaluates 12 state-of-the-art open LLMs on a new truthfulness dataset for languages beyond English, comparing base and instruction-tuned models using human evaluation, reference-based metrics, and LLM-as-a-Judge evaluation. We also explore the use of counternarratives, and their evaluation, to counteract hate speech, and the generation of Critical Questions to evaluate LLMs' capacity to identify weaknesses in arguments. Collectively, this research suggests LLM truthfulness is multifaceted, requiring factual accuracy, logical reasoning, and critical evaluation. Significant challenges remain in evaluation methodology, cultural sensitivity, and balancing truthfulness with safety measures, underscoring the complexity of defining "truth" in AI systems and the need for continued research to improve LLMs' truthfulness across diverse contexts.
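To illustrate the LLM-as-a-Judge method mentioned above, here is a minimal sketch assuming the openai Python client; the prompt wording, model name, and 1-5 scale are hypothetical and not the study's actual setup:

```python
# Sketch: LLM-as-a-Judge truthfulness scoring. Prompt, model name, and
# the 1-5 scale are illustrative assumptions. Assumes the `openai`
# package (v1.x) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = (
    "Rate the truthfulness of the following answer on a scale from 1 "
    "(entirely false) to 5 (entirely true). Reply with a single digit.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def judge_truthfulness(question: str, answer: str,
                       model: str = "gpt-4o-mini") -> int:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question,
                                                  answer=answer)}],
        temperature=0,  # deterministic judging
    )
    return int(response.choices[0].message.content.strip())
```

As the abstract notes, such judge models inherit their own biases and language gaps, which is one reason the study compares them against human and reference-based evaluation.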

Bio: Rodrigo Agerri, PhD in Computer Science (City, University of London), is the Head of the Text Analysis research area at the HiTZ Center. Involved in more than 40 internationally funded research projects, his research currently focuses on LLMs for cross-lingual information extraction and text generation. He regularly publishes in, and serves as Area Chair for, the main NLP conferences (ACL, EMNLP, etc.). He is currently the PI of several research projects funded by the Spanish Ministry of Science, including "DISARGUE", on argumentation-based approaches to counteracting misinformation, and "DeepMinor", on the development and benchmarking of LLMs for low-resource languages in multilingual contexts.

Title: The Magnitude of Truth: On Using Magnitude Estimation for Truthfulness Assessment in Fact-Checking

Abstract: Assessing the truthfulness of information is a critical task in fact-checking. It is typically performed using binary or coarse-grained ordinal scales with a limited number of levels, usually from 2 to 6, although fine-grained scales, such as those with 100 levels, have been explored. Magnitude Estimation (ME) takes this approach further by allowing assessors to assign any value ranging from 0 (excluded) to infinity. However, ME introduces challenges, including the aggregation of assessments from individuals with varying interpretations of the scale. Despite these challenges, its successful applications in other domains suggest its potential suitability for truthfulness assessment.

Mizzaro will describe a crowdsourcing study that he conducted to assess the effectiveness of ME in the truthfulness assessment context. He and his colleagues collected assessments from non-experts on claims sourced from the PolitiFact fact-checking organization. Results show that while aggregation methods significantly impact assessment quality, optimal aggregation strategies yield accuracy and reliability comparable to traditional scales. Moreover, ME captures subtle differences in truthfulness, offering richer insights than conventional coarse-grained scales.
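To make the aggregation challenge concrete, below is a minimal sketch of one standard ME normalization scheme (dividing each assessor's scores by their geometric mean, then taking the per-claim median in log space); the aggregation strategies actually compared in the study may differ, and the example data are invented:

```python
# Sketch: geometric-mean normalization for Magnitude Estimation (ME)
# scores -- not necessarily the aggregation used in the study. Each
# assessor may use a different personal scale (one rates 1-10, another
# 10-1000), so raw scores are only meaningful as ratios.
import math
from collections import defaultdict
from statistics import median

def aggregate_me(ratings: dict[str, dict[str, float]]) -> dict[str, float]:
    """ratings maps assessor id -> {claim id: positive ME score}."""
    per_claim = defaultdict(list)
    for assessor, scores in ratings.items():
        # Remove the assessor's personal scale: subtracting their mean log
        # score is equivalent to dividing by their geometric mean.
        logs = {claim: math.log(s) for claim, s in scores.items()}
        mean_log = sum(logs.values()) / len(logs)
        for claim, log_s in logs.items():
            per_claim[claim].append(log_s - mean_log)
    # Median in log space, mapped back to the original ratio scale.
    return {claim: math.exp(median(vals)) for claim, vals in per_claim.items()}

# Two assessors with very different internal scales but the same ordering:
ratings = {
    "assessor_1": {"claim_a": 10.0, "claim_b": 100.0},
    "assessor_2": {"claim_a": 1.0, "claim_b": 9.0},
}
print(aggregate_me(ratings))  # claim_b comes out roughly 10x claim_a
```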

Bio: Stefano Mizzaro is a professor at the Department of Mathematics, Informatics, and Physics of the University of Udine, Italy. He has worked for more than 30 years on information retrieval, mainly focusing on effectiveness evaluation. More recently he has also worked on crowdsourcing, artificial intelligence, and misinformation assessment. On these topics he has published more than 150 scientific papers in national and international venues and received several grants and awards, and he currently coordinates the national project "MoT - The Measure of Truth: An Evaluation-Centered Machine-Human Hybrid Framework for Assessing Information Truthfulness".

Title: The Microtargeting Manipulation Machine

Abstract: Psychological microtargeting leverages personal data and algorithms to craft highly tailored online messages. For example, voters can be targeted based on personality traits inferred from their digital footprint, with messages designed to exploit specific vulnerabilities. Research shows that microtargeted messages are more persuasive than generic ones. Lewandowsky conducted three experiments using generative AI (ChatGPT) to tailor political messages based on openness to experience, finding that personality-matched texts were judged as more persuasive. He then tested whether warnings—flagging messages as “too close to be true” to one’s personality—could mitigate this effect. Across three further experiments, the warnings proved ineffective, suggesting that transparency alone does not neutralize the manipulative power of microtargeting.

Bio: Professor Stephan Lewandowsky is a cognitive scientist at the University of Bristol whose main interest is in the pressure points between the architecture of online information technologies and human cognition, and the consequences for democracy that arise from those pressure points.

He is the recipient of numerous awards and honours, including a Discovery Outstanding Researcher Award from the Australian Research Council, a Wolfson Research Merit Fellowship from the Royal Society, and a Humboldt Research Award from the Humboldt Foundation in Germany. He is a Fellow of the Academy of Social Sciences (UK) and a Fellow of the Association for Psychological Science. He was appointed a Fellow of the Committee for Skeptical Inquiry for his commitment to science, rational inquiry and public education. He was elected to the Leopoldina (the German national academy of sciences) in 2022. Professor Lewandowsky also holds a Guest Professorship at the University of Potsdam in Germany. He was identified as a highly cited researcher in 2022, 2023, and 2024 by Clarivate, a distinction awarded to fewer than 0.1% of researchers worldwide.

His research examines the consequences of the clash between social media architectures and human cognition, for example by researching countermeasures to the persistence of misinformation and spread of “fake news” in society, including conspiracy theories, and how platform algorithms may contribute to the prevalence of misinformation. He is also interested in the variables that determine whether or not people accept scientific evidence, for example surrounding vaccinations or climate science.

He has published hundreds of scholarly articles, chapters, and books, with more than 200 peer-reviewed articles alone since 2000. His research regularly appears in journals such as Nature Human Behaviour, Nature Communications, and Psychological Review. (See www.lewan.uk for a complete list of scientific publications.)

His research is currently funded by the European Research Council, the EU’s Horizon 2020 programme, the UK research agency (UKRI, through EU replacement funding), the Volkswagen Foundation, Google’s Jigsaw, and by the Social Science Research Council (SSRC) Mercury Project.

Professor Lewandowsky also frequently appears in print and broadcast media and has contributed around 100 opinion pieces to the global media. He has been working with policy makers at the European level for many years, and he was first author of a report on Technology and Democracy in 2020 that has helped shape EU digital legislation.

Programme 9 May

Time         Subject
09:00-09:30  Registration
09:30-11:00  Session 1 - Iryna Gurevych, Marina Tulin, Anna Rogers
11:00-11:30  Coffee break
11:30-13:00  Session 2 - Rodrigo Agerri, Stephan Lewandowsky, Dominik Macko
13:00-14:30  Lunch
14:30-16:00  Panel
16:00-16:30  Coffee break
16:30-18:00  Session 3 - Alba García Seco de Herrera, Stefano Mizzaro
18:00        Drinks

Logistics

The symposium will be held in the Turing Hall at the Congress Centre of Amsterdam Science Park, next to Centrum Wiskunde & Informatica (CWI).
Address: Science Park 125, 1098 XG Amsterdam.

Google Maps Congress Centre, Science Park 125

Please be aware that hotel prices in Amsterdam can be quite steep. We strongly recommend that all participants secure their hotel reservations as early as possible!

Hotel Recommendations:

From these hotels, the venue can be reached in 15-30 minutes by public transport. On all public transport, you can check in and out with a contactless Mastercard or Visa credit card, or with Apple Pay and Google Wallet.

Registration information

There will be a small fee of €30 to cover part of the cost of lunches and refreshments.

Register here before 1 May 2025 for Truth is in the Eyes of the Machines - Symposium

A limited number of grants are available for students and early-career researchers. Priority will be given to participants experiencing financial hardship or lacking institutional support. If a grant recipient fails to attend without giving prior notice, the organization will charge the registration fee. To apply, please contact: Davide.Ceolin@cwi.nl.

Made possible by

CWI
