CWI research leads to better understanding of AI classification errors

The future of artificial intelligence (AI) looks bright, but AI is not infallible. In her PhD thesis, CWI researcher Emma Beauxis-Aussalet provides methods to make classification errors more transparent and understandable for end-users with limited expertise in AI.

Publication date: 28 January 2019

Artificial intelligence (AI) continues to bring new opportunities to our professional and personal lives. At the same time, the related risks are growing too. Would a medical AI declare you healthy while you are sick? Would a financial AI deny you a loan while you are creditworthy? Would a text-mining AI misrepresent the content of your documents? In her PhD thesis, CWI researcher Emma Beauxis-Aussalet provides methods to make classification errors more transparent and understandable for end-users with limited expertise in AI. On 28 January she will publicly defend her thesis, titled Statistics and Visualizations for Assessing Class Size Uncertainty.


Classification errors in AI can affect any of us. What is more, we may be using such classification systems ourselves while having only limited knowledge of their errors. So how can non-expert end-users assess classification errors when choosing or using these systems?

Transparent and understandable for non-experts
Beauxis-Aussalet's thesis addresses this question with methods that make classification errors more transparent and understandable for end-users with limited expertise in AI. She developed simplified visualizations that support non-experts who need to compare or tune classifiers.

She also provides statistical methods for estimating the number of errors to expect when using classifiers. End-users may be given error measurements performed on test data before they apply a classifier, but until now they were usually unaware of the errors that can still be expected to occur in practice. For this purpose, Beauxis-Aussalet developed new variance estimation methods.
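To give a concrete sense of the kind of estimate involved, here is a minimal sketch in Python (not the thesis's own method): assuming a binary classifier whose false positive and false negative rates were measured on a labelled test set, it projects the errors to expect on a new, unlabelled batch and corrects the estimated class sizes accordingly. All numbers and variable names are illustrative.

# Error rates measured on a labelled test set (illustrative values only)
false_positive_rate = 0.08  # share of true negatives labelled positive
false_negative_rate = 0.12  # share of true positives labelled negative

# Classifier output on a new, unlabelled batch
n_total = 10_000
n_predicted_positive = 3_200

# Corrected class size: the predicted positives mix true positives that were
# kept (rate 1 - fnr) with true negatives that slipped in (rate fpr):
#   n_predicted_positive = true_pos * (1 - fnr) + (n_total - true_pos) * fpr
true_pos_est = (n_predicted_positive - n_total * false_positive_rate) / (
    1 - false_negative_rate - false_positive_rate
)
true_neg_est = n_total - true_pos_est

# Errors to expect on this batch, given the test-set error rates
expected_false_negatives = true_pos_est * false_negative_rate
expected_false_positives = true_neg_est * false_positive_rate

print(f"Estimated true positives: {true_pos_est:.0f} (raw count was {n_predicted_positive})")
print(f"Expected false positives: {expected_false_positives:.0f}")
print(f"Expected false negatives: {expected_false_negatives:.0f}")

Such corrected counts are themselves uncertain, since the test-set error rates are only estimates; quantifying that remaining uncertainty is what variance estimation methods, such as those in the thesis, address.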

Future work
Her studies also highlight issues that remain unaddressed regarding the visualization and the mathematics of classification error assessment. She calls for future work that focuses on end-users' practical needs. Beauxis-Aussalet carried out her research in the Information Access group at Centrum Wiskunde & Informatica (CWI) in Amsterdam.

More information
Read Emma Beauxis-Aussalet’s PhD thesis: Statistics and Visualizations for Assessing Class Size Uncertainty

This research was part of CWI's Classee project.

The two most relevant papers of this research are:
Extended methods to handle classification biases. E. Beauxis-Aussalet and L. Hardman. Presented at the International Conference on Data Science and Advanced Analytics (DSAA), 2017.
Supporting End-User Understanding of Classification Errors. E. Beauxis-Aussalet, J. van Doorn and L. Hardman. Presented at the European Conference on Cognitive Ergonomics (ECCE), 2018.

This research was also conducted within the Fish4Knowledge project and resulted in the publication of two book chapters in Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data.