Please find below more information about the presentations held on 23 November:
Personalized Health: Challenges in Data Science
The promise of personalized health is driven by the wide availability of data, but the question is less where we want to be than how we should get there. Which technological challenges must be overcome to unlock the potential of the much greater availability of data we now have? In this talk we will consider three challenges of data science in the context of personalized health, each of which must be addressed to bring the era of true precision, or personalized, medicine within the reach of an affordable health care service.
Machine learning and counterfactual reasoning for personalized clinical decision making
Making a good decision involves considering the likely outcomes under each possible action. For example, would drug A or drug B lead to a better outcome for this patient? Ideally, we answer these questions using an experiment, but this is not always possible (e.g., it may be unethical). As an alternative, we can use non-experimental data to learn models that make counterfactual predictions of what we would have observed had we run an experiment. To learn such models for decision-making problems, we propose the use of counterfactual objectives in lieu of classical supervised learning objectives. We implement this idea in a challenging and frequently occurring context, and propose the counterfactual GP (CGP), a counterfactual model of continuous-time trajectories (time series) under sequences of actions taken in continuous time. The CGP is developed within the potential outcomes framework of Neyman (1923) and Rubin (1978). It is trained using a joint maximum likelihood objective that adjusts for dependencies between observed actions and outcomes in the training data. We report two sets of experimental results. First, we show that, unlike classical supervised machine learning methods, the CGP’s predictions are stable to changes in irrelevant characteristics of the training data. Second, we use data from a real intensive care unit (ICU) and qualitatively demonstrate how the CGP’s ability to answer “What if?” questions offers medical decision-makers a powerful new tool for planning treatment.
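To make the idea of counterfactual prediction from observational data concrete, here is a minimal toy sketch (not the CGP itself): on synthetic data where a confounder influences both the chosen action and the outcome, we fit one outcome model per action and compare the two predicted potential outcomes for the same patient. All names and numbers below are illustrative assumptions, and the model is a simple linear fit rather than a Gaussian process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data (illustrative, not the talk's ICU data):
# a confounder x drives both the chosen action and the outcome, so a
# naive comparison of treated vs. untreated outcomes would be biased.
n = 500
x = rng.normal(size=n)
action = (x + 0.5 * rng.normal(size=n) > 0).astype(float)  # "drug A" vs "drug B"
outcome = 2.0 * x + 1.5 * action + rng.normal(scale=0.1, size=n)

def fit_linear(x, y):
    """Least-squares fit of y ~ a*x + b; returns (a, b)."""
    X = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Fit one outcome model per action, adjusting for the confounder x,
# then predict both potential outcomes for the same new patient.
c0 = fit_linear(x[action == 0], outcome[action == 0])
c1 = fit_linear(x[action == 1], outcome[action == 1])

x_new = 0.3
y_if_a0 = c0 @ [x_new, 1.0]   # counterfactual: outcome without treatment
y_if_a1 = c1 @ [x_new, 1.0]   # counterfactual: outcome with treatment
effect = y_if_a1 - y_if_a0    # should recover the true effect of ~1.5
```

Because the model conditions on the confounder, the estimated effect approximates the true treatment effect despite the non-random treatment assignment, which is the basic adjustment idea the CGP generalizes to continuous-time trajectories.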
Data-Efficient Deep Learning for Medical Image Processing
Deep learning has been remarkably effective in the large-data domain, but fitting models reliably is much more challenging when only few or weakly labeled examples are available. This is unfortunately the scenario we face when dealing with medical images, for which detailed labeling is very expensive. In this talk I will provide some insights into how this can be improved, through e.g. semi-supervised learning, equivariant convolutions or data augmentation.
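Of the techniques mentioned, data augmentation is the simplest to illustrate: label-preserving transforms multiply the effective size of a small labeled set. The sketch below applies random flips and 90-degree rotations to a 2-D array standing in for an image patch; real medical-imaging pipelines typically use richer transforms (e.g. elastic deformations), so treat this as a minimal example.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped and rotated copy of a 2-D image array.

    Flips and 90-degree rotations preserve the image content (and hence
    the label), so each call yields a valid new training example.
    """
    if rng.random() < 0.5:
        image = np.fliplr(image)          # random horizontal flip
    k = rng.integers(0, 4)                # rotate by a random multiple of 90°
    return np.rot90(image, k)

rng = np.random.default_rng(0)
scan = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a scan patch
augmented = augment(scan, rng)
```

Each augmented copy contains exactly the same pixel values rearranged, so the label attached to the original patch still applies.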
Bandit Algorithms: What, Why and How?
Decision making in the face of uncertainty is a significant challenge in machine learning. Which drugs should a patient receive? How should I
allocate my study time between courses? Which version of a website will generate the most revenue? What move should be considered next when
playing chess/go? All of these questions can be expressed in the multi-armed bandit framework where a learning agent sequentially takes
actions, observes rewards and aims to maximize the total reward over a period of time.
The key challenge in all these problems is to carefully balance exploration and exploitation. The framework is now very popular, used in practice by big companies, and growing fast. In this talk I will give a concise, illustrated overview of the challenges involved, explain the most important algorithmic and statistical ideas and discuss practical issues such as scaling bandits to large environments, or keeping exploration under control while avoiding suboptimality.
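The exploration–exploitation balance described above can be made concrete with the classic UCB1 algorithm: each arm's score is its empirical mean reward plus a bonus that shrinks as the arm is pulled more often, so under-explored arms keep getting tried. The simulation below uses made-up Bernoulli reward probabilities purely for illustration.

```python
import math
import random

def ucb1(true_means, horizon, seed=0):
    """Run UCB1 on a simulated Bernoulli bandit.

    true_means are hypothetical arm reward probabilities (illustrative,
    not from the talk). Returns per-arm pull counts and total reward.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k      # times each arm was pulled
    sums = [0.0] * k      # cumulative reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # play each arm once to initialize
        else:
            # empirical mean (exploitation) + confidence bonus (exploration)
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return counts, total_reward

counts, reward = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

Over time the pull counts concentrate on the best arm while every arm is still sampled occasionally, which is exactly the controlled exploration the talk refers to.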