Leader of the group Life Sciences and Health: Leen Stougie.

The CWI Life Sciences and Health (LSH) group is a group of computer scientists and mathematicians whose research focus is on the analysis and design of models and algorithms as well as their direct application to important challenges in the LSH domain.

On the application side, our present team of researchers has expertise in, e.g., computational genomics, medical informatics, computational phylogenetics, and biological network analysis. On the methodological side, we come from different backgrounds, e.g., computational intelligence, computational data science, and operations research, and we develop new theories, models, algorithms, and decision-support tools for problems that arise mostly in collaboration with experimental biologists and medical experts. We actively collaborate in projects with academic hospitals, biological and biochemical research institutes, and industry.

The LSH group participates in the INRIA International team ERABLE.

Seminars: The LSH group organizes a biweekly seminar.

Watch our group video to get a glimpse of our activities.




There are currently no vacancies.


Current events

Life Sciences and Health Seminar: Riccardo Guidotti, University of Pisa

Tuesday, 19 October 2021, 16:00–17:00 (CEST)


Zoom Meeting
Meeting ID: 884 2492 5173
Passcode: 033944

Title:      Explaining Explanation Methods
Speaker:    Riccardo Guidotti

Abstract:   The most effective Artificial Intelligence (AI) systems owe their high performance to complex machine learning models. Unfortunately, the most effective machine learning models base their decisions on a logic that is not understandable to humans, which makes them de facto black-box models. This lack of transparency in how AI systems make decisions is a clear obstacle to their adoption in safety-critical and socially sensitive contexts. Since AI is employed in a wide variety of applications, research in eXplainable AI (XAI) has recently attracted much attention, with distinct requirements for different types of explanations for different users. In this webinar, we briefly present the existing explanation problems, the main strategies adopted to solve them, and the most common types of explanations, illustrated with references to state-of-the-art explanation methods able to retrieve them.
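To make the idea of a post-hoc explanation method concrete, here is a minimal, hypothetical sketch (not taken from the talk) of a LIME-style local surrogate: the black-box prediction around one instance is approximated by a proximity-weighted linear model, whose coefficients indicate each feature's local influence. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def local_surrogate_explanation(black_box, instance, n_samples=500, width=0.5, seed=0):
    """Approximate a black-box model around one instance with a
    proximity-weighted linear surrogate (the idea behind LIME-style
    local explanations). Returns one coefficient per feature."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest with Gaussian noise.
    X = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = black_box(X)
    # Weight each sample by its proximity to the instance (RBF kernel).
    w = np.exp(-np.sum((X - instance) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares: scale rows by sqrt(w), add an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[1:]  # drop the intercept; the rest "explain" the features

# Toy black box: only the first feature matters.
f = lambda X: 3.0 * X[:, 0] + 0.0 * X[:, 1]
coefs = local_surrogate_explanation(f, np.array([1.0, 2.0]))
```

For this linear toy model the surrogate recovers the true influences (about 3.0 for the first feature, about 0.0 for the second); for a genuinely nonlinear black box the coefficients are only a local approximation, which is exactly the trade-off such explanation methods make.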

Short bio:   Riccardo Guidotti was born in 1988 in Pitigliano (GR), Italy. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received his PhD in Computer Science from the same institution with a thesis on personal data analytics. He is currently an Assistant Professor at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He won an IBM fellowship and was an intern at IBM Research Dublin, Ireland, in 2015. He also won the DSAA New Generation Data Scientist Award 2018. His research interests include explainable artificial intelligence, interpretable machine learning, quantum computing, fairness and bias detection, data generation and causal models, personal data mining, clustering, and the analysis of transactional data.


Associated Members



Current projects with external funding

  • Statistical Models for Structural Genetic Variants in the Genome of the Netherlands
  • Algorithms for PAngenome Computational Analysis (ALPACA)
  • Fast, accurate, and insightful brachytherapy treatment planning for cervical cancer through artificial intelligence (Brachytherapy treatment)
  • Distributed and Automated Evolutionary Deep Architecture Learning with Unprecedented Scalability (DAEDALUS)
  • Evolutionary eXplainable Artificial Medical INtelligence Engine (EXAMINE)
  • Fusible Evolutionary Deep Neural Network Mixture Learning from Distributed Data for Robust Medical Image Analysis (FEDMix)
  • Multi-Objective Deformable Image Registration – An Innovative Synergy of Multi-Objective Optimization, Machine Learning, and Biomechanical Modeling for the Registration of Medical Images (MODIR)
  • Networks
  • Explainable artificial intelligence
  • Optimization for and with Machine Learning (OPTIMAL)
  • Pan-genome Graph Algorithms and Data Integration (PANGAIA)
  • Transparent, Reliable and Unbiased Smart Tool for AI (TRUST-AI)

Related partners

  • AMC Medical Research
  • CNRS
  • Elekta Limited
  • European Molecular Biology Laboratory
  • Università di Pisa
  • Xomnia
  • Academisch Medisch Centrum
  • Biomedical Imaging Group Rotterdam
  • Erasmus Universiteit Rotterdam
  • Geneton S.R.O.
  • Heinrich-Heine-Universität Düsseldorf
  • Illumina Cambridge
  • Institut Pasteur
  • Leids Universitair Medisch Centrum
  • Univerzita Komenského v Bratislave
  • Universiteit Leiden
  • Universität Bielefeld
  • Università degli Studi di Milano-Bicocca
  • Universiteit van Tilburg