Life Sciences and Health Seminar Monika Grewal; Dazhuang Liu

Uncertainty Estimation to Learn Effectively from Clinically Available Data for Organs at Risk Segmentation; Probability distribution based constants optimization in symbolic regression

When
17 May 2022 from 4 p.m. to 5 p.m. CEST (GMT+0200)
Where
L016

Join Zoom Meeting
https://cwi-nl.zoom.us/j/81535249101?pwd=ZDh3Y1kyZmxBUml1Tm00Rkx0NXBwdz09

Meeting ID: 815 3524 9101
Passcode: 967201

Title: Uncertainty Estimation to Learn Effectively from Clinically Available Data for Organs at Risk Segmentation
Speaker: Monika Grewal

Abstract: Epistemic uncertainty refers to a model's lack of knowledge about the underlying data. Epistemic uncertainty is higher for out-of-domain samples or, in other words, samples for which making a prediction is difficult (hard examples). In this presentation, I will discuss how this concept can be used to effectively train a deep learning model for automatic segmentation of organs at risk on a large dataset of Computed Tomography (CT) scans containing class imbalance and missing annotations. I will share the basic idea, preliminary results, and challenges.
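The abstract does not specify how epistemic uncertainty is estimated; a common proxy is the disagreement across multiple stochastic forward passes (e.g., Monte Carlo dropout or an ensemble). The sketch below is a minimal, hypothetical illustration of that idea: per-pixel variance across passes is higher for a simulated "hard" region than an "easy" one, and could then be used to down-weight or flag unreliable labels.

```python
import numpy as np

def epistemic_uncertainty(prob_samples):
    """Per-pixel uncertainty proxy from stochastic forward passes.

    prob_samples: array of shape (T, H, W) holding foreground
    probabilities from T stochastic passes (e.g., MC dropout).
    Returns (variance across passes, mean prediction).
    """
    mean_p = prob_samples.mean(axis=0)
    var_p = prob_samples.var(axis=0)  # high where passes disagree
    return var_p, mean_p

rng = np.random.default_rng(0)
T, H, W = 20, 4, 4
# Simulated passes: an "easy" region (confident, consistent) vs. a
# "hard" region (near 0.5, high disagreement between passes).
easy = np.clip(0.95 + 0.01 * rng.standard_normal((T, H, W)), 0.0, 1.0)
hard = np.clip(0.50 + 0.20 * rng.standard_normal((T, H, W)), 0.0, 1.0)

u_easy, _ = epistemic_uncertainty(easy)
u_hard, _ = epistemic_uncertainty(hard)
print(u_easy.mean(), u_hard.mean())
```

Under this proxy, the hard region yields a much larger mean variance, which matches the abstract's premise that hard examples carry higher epistemic uncertainty.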

Title: Probability distribution based constants optimization in symbolic regression
Speaker: Dazhuang Liu

Abstract: Symbolic regression (SR) is a method to discover machine learning models, in the form of mathematical equations, from data. These models are made by composing operators (+, -, *, and /), variables (features of the dataset), and constants (0.1, 10, e, Pi). Since SR models are composed of commonly used mathematical operations, they are well suited for human interpretation (important for explainable AI). However, SR is hard because both the structure of the model (i.e., how to compose the operations) and the constant values must be optimized at the same time. Several works concerning this topic have been proposed in the literature, such as combining evolutionary algorithms with gradient descent. However, existing approaches have several limitations. To solve this problem in an efficient and effective way, we are working on a new evolutionary algorithm (EA) to jointly optimize the structure and the constants. Namely, our approach works by clustering together multiple "structures" from the population of the EA that may benefit from similar constants. Then, for each cluster, a different probability distribution (PD) is modelled and used to sample new promising constant values. Early experiments suggest that the approach is very promising.
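The abstract does not specify which probability distribution is used per cluster; as a purely illustrative sketch, one could fit an independent Gaussian to the constant vectors of each cluster and sample new candidates from it. All names and the Gaussian choice below are assumptions, not the speaker's actual method.

```python
import numpy as np

def fit_and_sample(constants_by_cluster, n_samples, rng):
    """Fit a per-cluster Gaussian over constant vectors, then sample.

    constants_by_cluster: dict mapping cluster id -> list of constant
    vectors taken from structures grouped into that cluster.
    Returns dict mapping cluster id -> (n_samples, dim) new constants.
    """
    new_constants = {}
    for cid, consts in constants_by_cluster.items():
        arr = np.asarray(consts, dtype=float)
        mu = arr.mean(axis=0)
        sigma = arr.std(axis=0) + 1e-8  # avoid degenerate zero spread
        new_constants[cid] = rng.normal(mu, sigma, size=(n_samples, arr.shape[1]))
    return new_constants

rng = np.random.default_rng(1)
# Two hypothetical clusters of structures whose constants concentrate
# around (1, 2) and (10, -5) respectively.
clusters = {
    0: [[0.9, 2.1], [1.1, 1.9], [1.0, 2.0]],
    1: [[10.2, -5.1], [9.8, -4.9]],
}
samples = fit_and_sample(clusters, n_samples=5, rng=rng)
```

Sampling from a distribution fitted per cluster, rather than one global distribution, lets each group of similar structures explore constant values near the region that already works for that group.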