Bio
Mark Keane has a PhD in Cognitive Psychology from Trinity College Dublin (TCD, 1987) and a BA in Psychology from University College Dublin (UCD, 1982). After postdoctoral positions at Queen Mary University of London and the Open University, he was a Lecturer in Psychology at Cardiff University and a Lecturer in Computer Science at TCD (FTCD, 1994) before taking up the Chair of Computer Science at UCD in 1998. He was Director of ICT (2004-2006) and Director General (2006-2007) of Science Foundation Ireland (SFI), where he oversaw a €700M research investment, and was an advisor to the Irish Government on its €3.7B Strategy for Science, Technology & Innovation (SSTI). From 2007 to 2009, he was VP of Innovation & Partnerships at UCD.
He has published 200+ articles and 20+ books that have attracted 20,000+ citations (including a Cognitive Psychology textbook with M.W. Eysenck, now in its 8th edition, with 6 foreign-language editions). Keane’s work spans the cognitive sciences and AI, studying analogy and metaphor, creativity, surprise, understanding and explanation. His recent work on eXplainable AI (XAI) is inherently interdisciplinary, as it develops novel AI explanation strategies that are then tested in psychological experiments (in recent papers published in AAAI, IJCAI, IUI, AIES, AIJ and KBS).
His group has won several best-paper awards for work in Case-Based Reasoning (IJCAI-95, EWCBR-96, ICCBR-95) and example-based XAI (ICCBR-19, ICCBR-20, ICCBR-21, ICCBR-22). He was founding director of the Smart Media Institute (1999) at UCD, deputy director of the €40M VistaMilk SFI Research Centre (2018), and is currently a PI in the Insight Centre for Data Analytics.
Abstract
In cognitive psychology, many well-worn biases have been discovered, often through programmes of research carried out over decades, as with the classic thinking biases of availability, anchoring, confirmation and so on. Most of these biases have since been extended to situations in which people use AI systems or need to have such systems explained to them (as in XAI).
In many of these cases, the algorithmic/heuristic basis of the bias is well specified, modelled and generalises robustly to new contexts. However, in some cases, the identified bias may be quite superficial: the biased behaviour may look the same across contexts but actually reflect several distinct cognitive mechanisms. Furthermore, even when the cognitive mechanism is well specified, the expression of the bias may not be robust; that is, minor modifications to the context, instructions or items can alter the expression of the bias or even remove it altogether.
I consider a few cases where this type of situation arises in (i) people’s predictions of future, unexpected events (which show a strong Negativity Bias) and (ii) people’s choices of recommended links for search-engine queries (which appear to show a Positional Bias). In such situations, it is critical to attempt to explicitly disconfirm the proposed mechanism with careful user testing, in order to assess the bias’s robustness or, indeed, to determine whether it is “real” at all (otherwise, we ourselves just end up committing a confirmation bias).