
Numerical models, the butterfly effect and COVID-19

Emma Willett dives into the use of computer models and asks whether they produce sufficiently accurate results


COVID-19 and the associated lockdown measures have dominated our news for much of the last year. Among many new things brought to light by these strange and uncertain times, the use of computational modelling in modern science has hit the headlines. No longer solely of interest to researchers spending long hours illuminated by the glow of computer screens filled with endless lines of code, these models have been influential in the UK government’s policy decisions throughout the pandemic.


Doing science on a computer is now common practice across a wide range of disciplines, including medical research. In silico modelling complements the more traditional in vitro (experimenting in glass) and in vivo (experimenting within a living organism) approaches and informs clinical decisions about everything from drug design to surgery. It is also used to describe and predict the spread of disease in a population and the consequences of any intervention.
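To give a flavour of what these epidemic models look like, here is a minimal sketch of the classic SIR (Susceptible-Infectious-Recovered) model in Python. The structure is standard textbook material, but every number in it (the contact rate, the recovery rate, the starting conditions) is purely illustrative and not fitted to COVID-19 or any real outbreak.

```python
# A minimal SIR (Susceptible-Infectious-Recovered) model: the classic
# starting point for simulating epidemic spread in a population.
# All parameter values here are illustrative only, not fitted to COVID-19.

def sir_step(s, i, r, beta, gamma, dt):
    """Advance the SIR equations by one Euler time step."""
    new_infections = beta * s * i * dt  # S -> I, driven by contact rate beta
    new_recoveries = gamma * i * dt     # I -> R, at recovery rate gamma
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Fractions of the population: one infection per 10,000 people to start.
s, i, r = 0.9999, 0.0001, 0.0
beta, gamma, dt = 0.3, 0.1, 1.0  # per-day rates, one-day time step

for day in range(1, 161):
    s, i, r = sir_step(s, i, r, beta, gamma, dt)
    if day % 40 == 0:
        print(f"day {day:3d}: infectious = {i:.4f}, recovered = {r:.4f}")
```

Even this toy version shows the characteristic rise and fall of an outbreak; the models used to advise governments add many more compartments, age groups and behavioural assumptions on top of this skeleton.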


It is important to remember, however, that the information derived from a model is only as good as the model itself, and presenting results without context can be extremely misleading. In the case of COVID-19, where model predictions are a major driver behind the rules and restrictions that affect us all, the consequences of a misleading presentation can be severe.


Some experts have raised concerns about the precision with which the results of these models are reported, suggesting that they are misrepresented as definite forecasts and should be treated with more caution. The need for care is twofold. First, making reliable predictions from any model requires a long, iterative process of comparing its output with observational data, identifying weak areas of the model and implementing improvements. Models are therefore constantly updated as more data become available and, while this is a natural part of the scientific process, the changing predictions can confuse the public or be deliberately distorted for political gain.
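As a toy illustration of why predictions shift, the sketch below re-estimates an exponential growth rate each time new case counts arrive. The "observed" counts are invented for this example; real surveillance data are noisier and arrive with delays, but the principle is the same: the fitted rate, and any projection built on it, changes as the dataset grows.

```python
# Sketch: re-estimating an epidemic growth rate as new case data arrive.
# The daily counts below are invented for illustration; real data would
# come from surveillance systems, with all their noise and delays.
import math

cases = [10, 14, 18, 27, 35, 40, 47, 52, 55, 60]  # hypothetical daily counts

def fit_growth_rate(counts):
    """Least-squares slope of log(cases) vs day: the exponential growth rate."""
    n = len(counts)
    xs = range(n)
    ys = [math.log(c) for c in counts]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# The estimate, and any projection built on it, shifts as data accumulate.
for days in (5, 7, 10):
    rate = fit_growth_rate(cases[:days])
    print(f"after {days} days: growth rate = {rate:.3f}/day, "
          f"doubling time = {math.log(2) / rate:.1f} days")
```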


Second, uncertainty is intrinsic to the problem of epidemic spread because it is characterised by exponential growth. This means that small differences in the initial conditions or the model's parameters are rapidly amplified, so the same model can predict a wide range of outcomes. Commonly referred to as the butterfly effect, this sensitivity was first described in relation to weather forecasting more than 50 years ago. Consequently, the reliability of a model decreases rapidly the further into the future you look: while we may be able to predict what will happen in the next week or so, the possible outcomes diverge as we move to the medium or long term. The real value of these models therefore lies not in sweeping predictions and definite statements, but in comparing the relative probabilities of different outcomes.
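The sketch below makes this divergence concrete. Two projections start from the same 100 cases and differ only slightly in their assumed daily growth rate (both values are invented for illustration); they agree closely at one week but end up hundreds of thousands of cases apart at eight.

```python
# Sketch: why forecasts diverge with horizon. Two scenarios differ only
# slightly in the assumed daily growth rate (values invented for
# illustration); close agreement at one week becomes a huge gap at eight.

for rate in (0.14, 0.16):                 # a small spread around 15%/day
    for weeks in (1, 4, 8):
        cases = 100 * (1 + rate) ** (7 * weeks)  # compound daily growth
        print(f"rate {rate:.2f}, week {weeks}: ~{cases:,.0f} cases")
```

A two-percentage-point difference in the daily growth rate, well within the uncertainty of early outbreak data, is barely visible after a week but compounds into wildly different futures two months out.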


Ultimately, computational modelling cannot tell us how to solve the problems we face today. Though models can guide us by revealing the probable outcomes of our actions, hiding their limitations or overstating their results leads to poor decisions and a loss of confidence in otherwise sound science. Simplifying complex and nuanced results into black-and-white statements is harmful, and it is time to present the public with the inherent uncertainties. The idea is not so alien: we see the same thing every day in the weather forecast, as the percentage chance of rain or the reduced detail in the long-range projection. Perhaps it is time for a COVID forecast, showing all the shades of grey, to help everyone understand the science behind the policy.
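What might such a COVID forecast look like under the hood? One simple possibility, sketched below with invented numbers, is to run a model many times with the uncertain growth rate drawn from a plausible range and report the fraction of runs that cross a threshold, much as a percentage chance of rain can be derived from an ensemble of weather simulations.

```python
# Sketch: turning an ensemble of model runs into a "chance of rain"-style
# forecast. The growth-rate range and threshold are invented for
# illustration, not taken from any real COVID-19 model.
import random

random.seed(1)
threshold = 10_000  # hypothetical case level we care about
runs = 1_000
exceed = 0

for _ in range(runs):
    rate = random.uniform(0.10, 0.20)  # uncertain daily growth rate
    cases = 100 * (1 + rate) ** 28     # project four weeks ahead
    if cases > threshold:
        exceed += 1

print(f"chance of exceeding {threshold:,} cases in 4 weeks: "
      f"{100 * exceed / runs:.0f}%")
```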

From COVID-19 mini issue, 2020
