Recommender systems are information filtering systems that provide suggestions for the items most pertinent to a particular user. They inform various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read. They are quite handy when you need to choose an item from a potentially overwhelming number of options.
But recommender systems can be biased, and bias in recommender systems may negatively affect users as well as producers. In recent years, many examples have been reported of recommender systems that discriminate against, for example, people of certain genders or races. More knowledge is needed about how to measure and mitigate bias in recommender systems.
Making bias visible
Recently, there has been substantial research on the role of fairness in recommender systems. Sudnik's thesis draws attention to variables that affect the fairness evaluation of a recommender system, such as the chosen definition of fairness, the learning algorithm in use, the ranking positions considered, and the number of feedback loops already computed. With a focus on book recommendations and female writers as the minority group, Sudnik has designed a clever experimental set-up with several models and fairness metrics.
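To make concrete how such an evaluation might work, here is a minimal sketch of one common family of fairness metrics: comparing the exposure a minority group receives in the top-k ranking positions against its share of the catalog. All function names and data below are illustrative assumptions, not the thesis's actual method or dataset.

```python
# Hypothetical sketch of an exposure-based fairness metric.
# All names and data are illustrative, not from Sudnik's thesis.

def exposure_share(recommendations, author_group, group="female", k=10):
    """Fraction of top-k recommendation slots (summed over all users)
    occupied by books whose author belongs to the given group."""
    slots = 0
    group_slots = 0
    for user, ranked_books in recommendations.items():
        for book in ranked_books[:k]:
            slots += 1
            if author_group.get(book) == group:
                group_slots += 1
    return group_slots / slots if slots else 0.0

# Toy catalog: half the books are by female authors.
author_group = {"b1": "female", "b2": "male", "b3": "female", "b4": "male"}
recs = {"u1": ["b2", "b4", "b1"], "u2": ["b2", "b1", "b3"]}

catalog_share = sum(g == "female" for g in author_group.values()) / len(author_group)
rec_share = exposure_share(recs, author_group, k=3)
print(catalog_share, rec_share)  # equal shares here suggest parity at k=3
```

Varying k in such a sketch illustrates one of the factors the thesis highlights: a system can look fair over the full ranking yet under-expose the minority group in the few positions users actually see.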
The reviewers from Amsterdam Data Science were impressed by the work and considered the choice of methods excellent: “The thesis takes a clear approach to make such a bias visible, and provides an insightful discussion of the implications of this work by the use of different definitions of fairness, and providing detailed insights on the tradeoffs and factors to consider to address such complex issues.”