The Virtual Tumour, part III – model error versus initial condition error
Written by David Orrell
In a 1904 paper on weather forecasting, the Norwegian physicist and meteorologist Vilhelm Bjerknes noted that the accuracy of a predictive model depends on two things: “1. A sufficiently accurate knowledge of the state of the atmosphere at the initial time. 2. A sufficiently accurate knowledge of the laws according to which one state of the atmosphere develops from another.” Or in other words, the initial condition, and the model itself.
The same holds true for dynamical models of any kind, including the Virtual Tumour. To make a successful prediction of tumour growth, we need to have the right data about the initial state of the tumour, the treatment, and so on; but the model itself also has to be accurate.
Historically, modellers have tended to focus on the first kind of error, in part because it is easier to think about. Indeed, perhaps the most famous theory related to forecasting is the butterfly effect (it even had a movie named after it). This of course says that we can’t predict the weather because an effect as small as a butterfly flapping its wings can cause a storm to occur on the opposite side of the world. In general, any small error in the data used for the initial condition will magnify exponentially.
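To see what exponential error growth looks like in practice, here is a small Python sketch. It is purely illustrative (it uses the chaotic logistic map as a stand-in, not any weather or tumour model), but it shows how two trajectories that start almost identically can quickly become completely different.

```python
# Illustrative only: exponential growth of a tiny initial-condition error
# in the chaotic logistic map, a classic stand-in for "butterfly effect"
# behaviour (not a weather or tumour model).

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two nearly identical initial conditions
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.2e}")
```

The separation starts at one part in ten billion, yet within a few dozen steps the two trajectories bear no resemblance to each other. That, in caricature, is the butterfly effect.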
As I and others have argued elsewhere, the butterfly effect is a nice story but doesn’t stand up to close analysis. Weather models may be chaotic, but they aren’t that chaotic. They go wrong primarily because the weather system is very difficult to approximate using sets of differential equations. In other words, it is model error, rather than initial condition error.
But model error is much less tractable than initial condition error. It is easy to say that the model went wrong because the data used to initialise it were wrong. The equivalent for model error would be to say that some parameters in the equations were a bit off. But what if the entire structure of the model is wrong? What if there is a linear term where there should be a quadratic term – or if there is no suitable term at all?
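The distinction matters because the two kinds of error behave differently. As a toy illustration (again in Python, and with made-up equations rather than anything drawn from meteorology or oncology), the sketch below compares a “true” system containing a quadratic term with a structurally misspecified model that only has a linear term. The initial condition is exact, yet the forecast still drifts away:

```python
# Illustrative sketch of structural model error: the "true" dynamics contain
# a quadratic (saturating) term, but the model approximates them with a
# purely linear growth term. The initial condition is perfect; the error
# comes entirely from the model's structure.

DT = 0.01  # time step for a simple Euler integration

def true_step(x):
    return x + DT * (0.5 * x - 0.1 * x**2)   # true dynamics (quadratic term)

def model_step(x):
    return x + DT * (0.4 * x)                # misspecified model (linear only)

x_true = x_model = 1.0                       # identical initial conditions
for t in range(1, 501):
    x_true, x_model = true_step(x_true), model_step(x_model)
    if t % 100 == 0:
        print(f"t={t * DT:.0f}  true={x_true:6.3f}  model={x_model:6.3f}  "
              f"error={abs(x_true - x_model):6.3f}")
```

No amount of better data about the starting point will fix this kind of error; the only remedy is a better model.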
According to Stephen Wolfram’s principle of computational irreducibility, a complex system – which would include the atmosphere, or a living cell, or a growing tumour – can’t be reduced to equations in the first place. The best that we can ever do is to come up with a rough approximation.
So when predictions go wrong, the temptation is to look at the initial condition error, when model error is the more likely culprit. This allows researchers to make ambitious claims for predictability, even when the models fail to deliver.
As an example, one survey paper of models of the tumour immune microenvironment states that, “Computer models provide large-scale predictive power by allowing us to simulate clinical trials with sufficient details to study response to various conditions. Using these models, it is possible to test and predict drug failures in simulations rather than in patients, which could result in improved drug design, reduced risks and side effects, and can dramatically decrease costs of drug development.”[1]
This “large-scale predictive power” sounds great. Unfortunately, the assertion isn’t backed up by the examples given; instead, the paper explains: “The main challenge in the way of predictive models for virtual clinical trials is the availability of input data for the model for each patient. Detailed knowledge about the situation at the start of the simulation can significantly affect the predictive power of that model.” It’s not the model, it’s the data.
This lack of interest in model error is why researchers continue to develop increasingly elaborate models, without checking how robust those models actually are. These models are useful for exploring a system and visualising how it might work, but their complexity quickly escalates. As an example, the paper “A computational multiscale agent-based model for simulating spatio-temporal tumour immune response to PD1 and PDL1 inhibition” by Gong et al. proposes a 3D agent-based model of a tumour. In the model, cancer cells are described by 11 parameters; however, the model does not include information about cell phase. T-cells are described by another 12 parameters. There are also many other parameters to describe things like the environment, the behaviour of the immune response, and so on.
By comparison, a main focus of our approach with the Virtual Tumour (VT) is to limit the number of parameters, especially parameters that we can’t measure directly or that have little effect on predictions. For this reason, most parameters in the VT model concern the cell phase and the drug effect in each phase.
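As a purely hypothetical sketch (this is not the actual Virtual Tumour implementation, and all parameter names and values below are invented for illustration), the Python fragment below shows the kind of pared-down, phase-based description this philosophy points towards: a handful of phase durations, a phase-specific drug effect, and little else.

```python
# Hypothetical illustration only: a minimal cell-cycle-phase compartment
# model with a phase-specific drug effect. All parameters and values are
# invented; this is NOT the Virtual Tumour implementation.

PHASES = ["G1", "S", "G2", "M"]

phase_duration_h = {"G1": 11.0, "S": 8.0, "G2": 4.0, "M": 1.0}   # assumed hours per phase
drug_kill_per_h  = {"G1": 0.0, "S": 0.3, "G2": 0.0, "M": 0.1}    # assumed kill fraction per hour

def simulate(cells_per_phase, hours, drug_on):
    """Advance the phase-compartment model one hour at a time."""
    counts = dict(cells_per_phase)
    for _ in range(hours):
        new_counts = {p: 0.0 for p in PHASES}
        for i, phase in enumerate(PHASES):
            survivors = counts[phase]
            if drug_on:
                survivors *= (1.0 - drug_kill_per_h[phase])      # phase-specific drug effect
            leaving = survivors / phase_duration_h[phase]        # cells moving to the next phase
            new_counts[phase] += survivors - leaving
            if phase == "M":
                new_counts["G1"] += 2.0 * leaving                # mitosis: one cell becomes two
            else:
                new_counts[PHASES[i + 1]] += leaving
        counts = new_counts
    return counts

start = {"G1": 500.0, "S": 300.0, "G2": 150.0, "M": 50.0}
print("untreated:", {p: round(n) for p, n in simulate(start, 72, drug_on=False).items()})
print("treated:  ", {p: round(n) for p, n in simulate(start, 72, drug_on=True).items()})
```

With only eight phase-level parameters to estimate, most of them measurable at least in principle from cell-cycle data, a model of this kind is far easier to test and calibrate than one with dozens of free parameters per cell type.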
As Gong et al. note of their model: “At its current stage, the model can qualitatively capture characteristics of a spectrum of cancers and is able to assess and compare predictive biomarkers in a semi-quantitative manner. However, at this stage it is not calibrated to any particular type of cancer, which prevents it from generating predictions that can directly be used in clinical practice.”
Again, this highlights the tension between making predictions and making movie-like simulations. The Virtual Tumour does not claim to be a perfectly accurate model of a growing tumour, because no such thing exists. As mentioned in the first post in this series, it was developed not by the usual path of making a simple model more complicated, but by making a complicated model simpler, on the basis that simple models are better at prediction. The next post will argue that complexity-based techniques such as machine learning may be leading us to a similar conclusion.
[1] “Multiscale Agent-Based and Hybrid Modeling of the Tumor Immune Microenvironment” by Norton et al.