I was following, or trying to, a Twitter discussion about models and scenarios. It was – I think – about models that forecast technology development, and you can find it here if you’re interested. I didn’t entirely follow it, but my impression was that the suggestion was that if the scenario on which you were basing your model was unrealistic, then what you infer from the model could be very wrong. For example, if your baseline assumptions don’t properly reflect current policy, then you might infer a greater benefit from some action than is actually likely.
What I didn’t quite get is the reason for this. In many physical models, you can still infer something about how a system will respond to some perturbation, even if the underlying model doesn’t capture the full complexity of the system. There are limits to this, of course, but it is quite common to use relatively simple models to try to understand how a physical system will evolve.
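As a minimal sketch of this idea (my own illustration, not anything from the discussion), consider a toy system that simply relaxes toward an imposed forcing. The model, the timescale `tau`, and the forcing `F` are all hypothetical choices here; the point is just that the equilibrium response to the perturbation doesn’t depend on getting the initial state right.

```python
# Toy linear relaxation model: dx/dt = (F - x) / tau.
# All values here are illustrative, chosen only to make the point.

def equilibrium_response(F, tau, x0, dt=0.01, t_max=100.0):
    """Integrate dx/dt = (F - x)/tau with forward Euler; return the final x."""
    x = x0
    for _ in range(int(t_max / dt)):
        x += dt * (F - x) / tau
    return x

# The long-run response is set by the perturbation F, not by the
# (possibly unrealistic) initial state x0:
print(round(equilibrium_response(F=2.0, tau=5.0, x0=0.0), 3))   # ≈ 2.0
print(round(equilibrium_response(F=2.0, tau=5.0, x0=10.0), 3))  # ≈ 2.0
```

Both runs end up at essentially the same value, which is the sense in which a simple, even unrealistic, model can still say something useful about how a system responds to a change.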
So, is the problem with these forecast models that the system is so sensitive to the underlying conditions that, if these don’t properly represent our current conditions, you really can’t say much about how the system will respond to changes? In other words, is it related to the lack of structural constancy that Jonathan Koomey discusses in this paper?
Alternatively, is it that people are not being clear about the limitations of their analyses? For example, we can’t use climate models to forecast the weather many years into the future, but we can use them to say something about how the climate will probably change if we perturb the atmospheric CO2 concentration by some amount.
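To make the CO2 example slightly more concrete, here is a back-of-the-envelope sketch of my own, using the standard logarithmic approximation for CO2 radiative forcing and an illustrative climate sensitivity parameter – both are textbook-style assumptions, not anything taken from the discussion.

```python
import math

# Hedged sketch: forcing ≈ 5.35 * ln(C/C0) W/m^2 (a standard approximation),
# with an illustrative sensitivity of 0.8 K per W/m^2. Both numbers are
# assumptions chosen for illustration.

def warming_from_co2(c_ratio, sensitivity=0.8):
    """Rough equilibrium warming (K) for a CO2 concentration ratio C/C0."""
    forcing = 5.35 * math.log(c_ratio)  # W/m^2
    return sensitivity * forcing

# A doubling of CO2 then gives roughly 3 K of equilibrium warming:
print(round(warming_from_co2(2.0), 2))
```

This obviously can’t forecast the weather in any given year, but it does say something about how the climate would probably respond to perturbing the CO2 concentration by some amount – which is the kind of claim these models are suited to.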
I don’t actually know where I’m going with this. I didn’t completely follow the discussion and couldn’t quite tell what the actual problem was. It did seem, though, that the suggestion was that the underlying scenarios had to properly represent our current policy landscape, and I found that slightly surprising. I’m much more used to the idea that one can use simple models to try to understand how a system responds to changes, without requiring that the model fully represent the complexity of the system being considered.
My concern would be that if the model is extremely sensitive to the underlying scenarios, then it would seem very difficult to be confident in the model results. As I’ve already said, though, I may have misunderstood what was being suggested, so I would be pleased to have this clarified.
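To illustrate the kind of sensitivity I have in mind (again, a hypothetical example of my own, not something from the discussion), consider a forecast that compounds over time: a small error in the assumed baseline growth rate produces a large divergence in the long-run projection.

```python
import math

# Hypothetical compounding forecast: x(t) = x0 * exp(rate * t).
# The rates and horizon are made-up numbers, chosen only to illustrate
# sensitivity to the assumed baseline.

def forecast(x0, rate, years):
    """Project an initial value x0 forward at a constant exponential rate."""
    return x0 * math.exp(rate * years)

assumed = forecast(100.0, 0.03, 30)  # scenario assumes 3% growth
actual = forecast(100.0, 0.04, 30)   # the world delivers 4% growth
print(round(actual / assumed, 2))    # the projections differ by ~35%
```

A one-percentage-point error in the baseline leads to projections that differ by about a third after thirty years – and if conclusions hinge on such projections, getting the scenario roughly right really would matter.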