Title: The Microtargeting Manipulation Machine
Abstract: Psychological microtargeting leverages personal data and algorithms to craft highly tailored online messages. For example, voters can be targeted based on personality traits inferred from their digital footprint, with messages designed to exploit specific vulnerabilities. Research shows that microtargeted messages are more persuasive than generic ones. Lewandowsky conducted three experiments using generative AI (ChatGPT) to tailor political messages based on openness to experience, finding that personality-matched texts were judged as more persuasive. He then tested whether warnings—flagging messages as “too close to be true” to one’s personality—could mitigate this effect. Across three further experiments, the warnings proved ineffective, suggesting that transparency alone does not neutralize the manipulative power of microtargeting.
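To make the pipeline described in the abstract concrete, the sketch below shows one way a personality-tailored message could be generated with a large language model. It is an illustration only, assuming the OpenAI Python client (openai >= 1.0); the prompt wording, model name, and trait framing are assumptions for illustration and are not the materials used in the experiments.

```python
# Minimal sketch: generating a message tailored to a reader's openness to experience.
# Assumptions: OpenAI Python client (openai>=1.0), OPENAI_API_KEY set in the environment,
# and an illustrative model name; none of this reproduces the study's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tailor_message(topic: str, openness: str) -> str:
    """Generate a short persuasive paragraph on `topic` framed for a reader
    who is "high" or "low" in openness to experience."""
    style = (
        "curious, imaginative readers who enjoy novelty and abstract ideas"
        if openness == "high"
        else "readers who value the familiar, the concrete, and tradition"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model for illustration; the study used ChatGPT
        messages=[
            {"role": "system",
             "content": f"Write a short persuasive paragraph aimed at {style}."},
            {"role": "user", "content": f"Argue in favour of: {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tailor_message("expanding public transport", openness="high"))
```

In an experimental design of the kind summarised above, texts generated for a matched trait level would then be compared against mismatched or generic texts on rated persuasiveness.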
Bio: Professor Stephan Lewandowsky is a cognitive scientist at the University of Bristol whose main interest is in the pressure points between the architecture of online information technologies and human cognition, and the consequences for democracy that arise from them.
He is the recipient of numerous awards and honours, including a Discovery Outstanding Researcher Award from the Australian Research Council, a Wolfson Research Merit Fellowship from the Royal Society, and a Humboldt Research Award from the Humboldt Foundation in Germany. He is a Fellow of the Academy of Social Sciences (UK) and a Fellow of the Association for Psychological Science. He was appointed a Fellow of the Committee for Skeptical Inquiry for his commitment to science, rational inquiry, and public education. He was elected to the Leopoldina (the German national academy of sciences) in 2022. Professor Lewandowsky also holds a Guest Professorship at the University of Potsdam in Germany. He was named a Highly Cited Researcher by Clarivate in 2022, 2023, and 2024, a distinction awarded to fewer than 0.1% of researchers worldwide.
His research examines the consequences of the clash between social media architectures and human cognition, for example by developing countermeasures to the persistence of misinformation and the spread of “fake news” in society, including conspiracy theories, and by investigating how platform algorithms may contribute to the prevalence of misinformation. He is also interested in the variables that determine whether people accept scientific evidence, for example on vaccinations or climate science.
He has published hundreds of scholarly articles, chapters, and books, with more than 200 peer-reviewed articles alone since 2000. His research regularly appears in journals such as Nature Human Behaviour, Nature Communications, and Psychological Review. (See www.lewan.uk for a complete list of scientific publications.)
His research is currently funded by the European Research Council, the EU’s Horizon 2020 programme, UK Research and Innovation (UKRI, through EU replacement funding), the Volkswagen Foundation, Google’s Jigsaw, and the Social Science Research Council (SSRC) Mercury Project.
Professor Lewandowsky also appears frequently in print and broadcast media and has contributed around 100 opinion pieces to outlets around the world. He has worked with policymakers at the European level for many years, and he was first author of a 2020 report on Technology and Democracy that has helped shape EU digital legislation.