Multiple simulations best for Covid-19 predictions

Publication date
13 Nov 2020

Computer modelling used to forecast Covid-19 mortality contains significant uncertainty in its predictions, according to a new study led by researchers at UCL and CWI in the Netherlands. This was described in a news item in Nature on 13 November 2020: 'Simulating the pandemic: What COVID forecasters can learn from climate models'.

The authors of the study, performed for the Royal Society’s RAMP initiative for Rapid Assistance in Modelling the Pandemic, call for a better public understanding of the inherent uncertainty of models predicting Covid-19 mortality rates, saying model outputs should be regarded as “probabilistic” rather than being relied upon to produce a single, specific outcome.

They maintain that future forecasts used to inform government policy should present the range of possible outcomes, with probabilities attached, giving a more realistic picture of the pandemic and its uncertainties.

In the study, currently available as a preprint (https://www.researchsquare.com/article/rs-82122/v3), the researchers sought to determine the level of uncertainty in predictions made by CovidSim, a code developed by Professor Neil Ferguson’s team at Imperial College London. 

Using a powerful supercomputer located in Poland, they adopted a technique, now standard in weather forecasting and climate science, of performing a large number of simulations (known as an ensemble), each with a slightly different set of input parameters. Inputs include the assumed effectiveness of proposed medical interventions, factors relating to how Covid-19 spreads, and factors relating to population distribution in the UK.
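
To make the ensemble idea concrete, the short Python sketch below runs a deliberately simple stand-in model many times with randomly varied inputs. The model, parameter names and ranges are invented for illustration only; they are not taken from CovidSim or from the study.

    import numpy as np

    rng = np.random.default_rng(0)
    n_runs = 500  # size of the ensemble

    def toy_epidemic_model(latency_days, isolation_delay_days, distancing_effectiveness):
        """Hypothetical stand-in for one simulation run; returns predicted deaths."""
        r_eff = 2.5 * (1.0 - 0.8 * distancing_effectiveness) * (1.0 + 0.1 * isolation_delay_days)
        return 1e4 * max(r_eff - 1.0, 0.0) ** 2 / latency_days

    # Instead of fixing each input, draw it from a plausible range for every run.
    latency = rng.uniform(3.0, 6.0, n_runs)          # days
    isolation_delay = rng.uniform(0.5, 3.0, n_runs)  # days
    distancing = rng.uniform(0.4, 0.8, n_runs)       # fraction of contacts removed

    deaths = np.array([toy_epidemic_model(l, d, e)
                       for l, d, e in zip(latency, isolation_delay, distancing)])

    # The ensemble yields a distribution of outcomes rather than a single number.
    print("median prediction:", np.median(deaths))
    print("5th-95th percentile range:", np.percentile(deaths, [5, 95]))

Summarising the spread of such an ensemble, rather than quoting one run, is the essence of the probabilistic approach the authors advocate.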

The research team found that adjustments to the input parameters were amplified by up to 300% in the outputs (i.e. the predictions) – meaning that slight differences in, say, the assumed effectiveness of social distancing could lead to larger changes in the simulation’s predictions. This is important as the inputs – the knowledge of the state of the pandemic and behaviour of the population – have a significant degree of uncertainty themselves.
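
As a rough illustration of what this amplification means, the sketch below nudges one input of a toy model by 10% and compares the relative change in the output. The quadratic toy model and its numbers are invented and have nothing to do with CovidSim itself.

    def toy_deaths(distancing_effectiveness):
        # Toy model: deaths fall off quadratically as distancing improves.
        return 1e4 * (1.0 - distancing_effectiveness) ** 2

    baseline = 0.60
    y0 = toy_deaths(baseline)
    y1 = toy_deaths(baseline * 0.90)   # suppose distancing is 10% less effective than assumed

    relative_input_change = 0.10
    relative_output_change = abs(y1 - y0) / y0
    print("amplification factor:", relative_output_change / relative_input_change)  # roughly 3x here

A ratio of about three in this sense illustrates the scale of amplification the study describes.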

They also found that, although the code contained 940 parameters, 60 were important and, of those, only 19 dominated the variance in the output predictions. Half of the overall variation in their results was down to just three of the 940 input parameters (the disease’s latency period, the delay in an infected person self-isolating, and the effectiveness of social distancing).
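
Statements like these rest on variance-based sensitivity analysis, which attributes shares of the output variance to individual inputs. The sketch below estimates first-order Sobol indices for an invented three-parameter toy model using the standard Saltelli sampling scheme; it is a minimal illustration of the technique, not the study’s actual workflow, which varied far more parameters.

    import numpy as np

    rng = np.random.default_rng(1)
    names = ["latency", "isolation_delay", "distancing"]
    lows  = np.array([3.0, 0.5, 0.4])
    highs = np.array([6.0, 3.0, 0.8])
    d, n = len(names), 4096

    def toy_epidemic_model(x):
        latency, delay, dist = x
        r_eff = 2.5 * (1.0 - 0.8 * dist) * (1.0 + 0.1 * delay)
        return 1e4 * max(r_eff - 1.0, 0.0) ** 2 / latency

    # Saltelli scheme: two independent sample matrices, then swap one column at a time.
    A = rng.uniform(lows, highs, size=(n, d))
    B = rng.uniform(lows, highs, size=(n, d))
    fA = np.apply_along_axis(toy_epidemic_model, 1, A)
    fB = np.apply_along_axis(toy_epidemic_model, 1, B)
    total_variance = np.var(np.concatenate([fA, fB]))

    for i, name in enumerate(names):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fABi = np.apply_along_axis(toy_epidemic_model, 1, ABi)
        s1 = np.mean(fB * (fABi - fA)) / total_variance  # first-order Sobol index
        print(f"{name}: share of output variance ~ {s1:.2f}")

Parameters with negligible indices can then be frozen at their default values, which is how a code with hundreds of parameters can be reduced to a handful of dominant inputs.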

The researchers also noted that predictions of Covid-19 deaths in the influential Report 9, issued in March, were about half the number of deaths that actually occurred in the UK, when looking at the scenarios that most closely matched the Government interventions. Report 9 is often seen as having influenced the UK Government’s decision to impose a lockdown in March.

Professor Peter Coveney (UCL Centre for Computational Science), who leads the EU VECMA programme on uncertainty quantification (www.vecma.eu) that undertook the study, said: “There is a large degree of uncertainty in the modelling used to guide governments’ responses to the pandemic and this is necessary for decision makers to understand.

“This is not a reason to disregard modelling. It is important that these simulations are understood in terms of providing a range of probabilities for different outcomes, rather than a single fixed prediction of Covid-19 mortality.”

“Because of this uncertainty, future forecasts of the death rates of Covid-19 should be based not on an individual simulation, but on lots of different simulations of a code, each with slightly adjusted assumptions. Predictions based on this method, though still highly uncertain, will provide a more realistic picture of the pandemic.”

Professor Coveney added: “Our findings are important for government and healthcare policy decision-making, given that CovidSim and other such epidemiological models are – quite rightly – still used in forecasting the spread of COVID-19. Like predicting the weather, forecasting a pandemic carries a high degree of uncertainty and this needs to be recognised.”

“Finally, our modelling has only been possible because Professor Neil Ferguson’s team open sourced their code. Not all models being used in Government briefings are open in that way. We urge other research groups to follow Imperial’s lead and adopt an open science approach.”

To test the robustness of CovidSim, the research team, after discussions with the Imperial College team, selected 60 of the most critical input parameters and adjusted them by increments of up to 20%, analysing how these adjustments affected the predictions.
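
In code terms, that screening step amounts to drawing each selected parameter from an interval around its default value. The sketch below does this for a few hypothetical parameters with a spread of up to 20%; the names and baseline values are made up, whereas the study applied this to 60 CovidSim inputs.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical baseline values, chosen for illustration only.
    baseline = {
        "latency_period_days": 4.5,
        "delay_to_self_isolation_days": 1.0,
        "social_distancing_effectiveness": 0.6,
    }

    n_samples = 200
    ensemble_inputs = [
        {name: value * rng.uniform(0.8, 1.2) for name, value in baseline.items()}  # up to +/-20%
        for _ in range(n_samples)
    ]

    print(ensemble_inputs[0])  # one perturbed parameter set, ready to feed to a model run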

CWI research on uncertainty quantification

Within the VECMA project, CWI researcher Wouter Edeling from the Scientific Computing group is a specialist in quantifying modelling uncertainties. As part of this work, he developed software for the VECMA toolkit that couples a specific uncertainty quantification (UQ) technique to CovidSim, the well-known epidemiological model from Neil Ferguson of Imperial College (UK).

Edeling says: “For models with a high number of parameters like CovidSim, it is very difficult to study what effect uncertainties in the input parameters have on the uncertainty in the output. Unfortunately, the ‘curse of dimensionality’ reigns: having many parameters means that the computational costs will be inordinately high. We investigate how to do the computations as efficiently as possible, by finding out which parameters matter most for the output uncertainties. By focusing on these parameters it becomes possible to make good probabilistic predictions, which can be used by governments for their decisions.”
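
A back-of-the-envelope calculation shows why this matters: a naive sampling grid with even a few points per uncertain parameter quickly becomes unaffordable as the number of parameters grows. The run counts below are generic arithmetic, not figures from the study.

    points_per_parameter = 3  # a very coarse grid

    for n_params in (3, 19, 60):
        runs_needed = points_per_parameter ** n_params
        print(f"{n_params:>2} uncertain parameters -> {runs_needed:.2e} model runs for a full grid")

Concentrating the sampling effort on the few parameters that dominate the output variance is what keeps the cost of a probabilistic forecast manageable.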

Partners

The study was carried out by researchers at UCL, the Scientific Computing group of CWI (led by Daan Crommelin), the University of Amsterdam, Brunel University London, and the Poznan Supercomputing and Networking Centre in Poland.

The work was funded as part of the European Union Horizon 2020 research and innovation programme.


More information


Picture: the VECMA Consortium in Amsterdam, 2019. Credit: VECMA.


Source: UCL press release
Source of the 'CWI research on uncertainty quantification' section: CWI