A modelling manifesto?

There’s a recent Nature comment led by Andrea Saltelli called Five ways to ensure that models serve society: a manifesto. Gavin Schmidt has already posted a Twitter thread about it. I largely agree with Gavin’s points and thought I would expand on this a bit here.

The manifesto makes some perfectly reasonable suggestions. We should be honest about model assumptions. We should acknowledge that there are almost certainly some unknown factors that models might not capture. We should be careful of suggesting that model results are more accurate, and precise, than is actually warranted. We should be careful of thinking that a complex model is somehow better than a simple model. Essentially, we should be completely open and honest about a model’s strengths and weaknesses.

However, the manifesto has some rather odd suggestions and comes across as being written by people who’ve never really done any modelling. For example, it says

Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing.

How can the above possibly be implemented? Who would get to decide if a modeller projected more certainty than their model deserved and what would happen if they were deemed to have done so? Similarly, how would we prevent politicians from offloading accountability to models of their choosing? It’s not that I disagree with the basic idea; I just don’t see how it’s possible to realistically enforce it.

The manifesto also discusses global uncertainty and sensitivity analyses, and says

Anyone turning to a model for insight should demand that such analyses be conducted, and their results be described adequately and made accessible.

Certainly a worthwhile aspiration, but it can be completely unrealistic in practice. If researchers get access to better resources, they often use them to improve the model. A typical consequence is that there is then a limit to how fully one can explore the parameter space. A researcher can, of course, choose to make a model simpler so that it is possible to do a global uncertainty and sensitivity analysis, but this may require leaving out things that might be regarded as important, or reducing the model resolution. This is a judgement that modellers need to make; do they focus on updating the model now that the available resources allow for this, or do they focus on doing global uncertainty and sensitivity analyses? There isn’t always a simple answer to this.
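To make the trade-off concrete, here is a minimal sketch of what a brute-force global sensitivity analysis involves, written in Python with an invented three-parameter toy model (the model, the parameter ranges and the sample size are all assumptions for illustration, not anything from the manifesto). The cost is roughly the number of samples times the cost of one model run, so if a single run of a state-of-the-art model takes hours of supercomputer time, this kind of exploration quickly becomes infeasible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy "model": three uncertain inputs, one scalar output.  In a
# real climate or epidemic model each evaluation might take hours, which
# is where the resource trade-off described above comes from.
def toy_model(a, b, c):
    return a + 2.0 * b**2 + 0.1 * a * c

n = 20_000                              # number of model runs required
a = rng.uniform(0.0, 1.0, n)
b = rng.uniform(0.0, 1.0, n)
c = rng.uniform(0.0, 1.0, n)
y = toy_model(a, b, c)

def first_order_index(x, y, bins=20):
    """Crude estimate of Var(E[Y|X]) / Var(Y) by binning on X."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    which = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[which == k].mean() for k in range(bins)])
    return cond_means.var() / y.var()

for name, x in [("a", a), ("b", b), ("c", c)]:
    print(f"first-order sensitivity of {name}: {first_order_index(x, y):.2f}")
```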

We could, of course, insist that policy makers only consider results from models that have undergone a full uncertainty and sensitivity analysis. The problem I can see here is that if policy makers ignore a model for this reason, and it turns out that maybe they should have considered it, I don’t think the public will be particularly satisfied with “but it hadn’t undergone a full uncertainty and sensitivity analysis” as a justification for this decision.

I don’t disagree with the basic suggestions in the manifesto, but I do think that some of what they propose just doesn’t really make sense. Also, the bottom line seems to be that modellers should be completely open and honest about their models and should be upfront about their model’s strengths and weaknesses. Absolutely. However, this shouldn’t just apply to modellers, it should really apply to anyone who is in a position where they’re providing information that may be used to make societally relevant decisions. I don’t think hubris is something that only afflicts modellers.

73 Responses to A modelling manifesto?

  1. brigittenerlich says:

    It might have been a good idea for the writers of this article to engage in dialogue with modellers perhaps, before telling them what they should, indeed must, do? And it might have been a good idea for Nature to be a bit more circumspect in what they publish?

  2. Brigitte,
    Indeed. I don’t think one group of researchers telling another what they must do is ever a particularly good idea. I suspect Nature probably likes the controversy, unfortunately. Admittedly, it does remind me of a common saying in astronomy: if it’s published in Nature, it’s probably wrong 🙂

  3. jamesannan says:

    That’s not a common saying *in astronomy* 🙂

  4. Yes, I suspected it applied beyond just astronomy.

  5. JCH says:

    Shoot. Have I wasted an entire month writing a “back of an envelope” manifesto?

  6. If I were to write a modeling manifesto, it would be in the context of not necessarily prediction but of being able to extract the patterns of what is ostensibly not visible — in other words, recovering the patterns due to invisible forces that occur spatially or temporally. That’s what I cut my teeth on, and based on the people that frequent this blog, that seems typical. For example, that’s the rationale for the oil geologist that is trying to determine what lies underground based on various probe measurements that don’t reveal the contents directly. And that’s a significant emphasis of astrophysicists who are dealing with data that may be sparse and weak. In both cases, you can’t “see” anything directly. Many times this is referred to as inverse physics or reconstruction physics, in that this kind of modeling is revealing the structure or the origin of the behavior that is not directly viewable or understood. It really is about trying to determine what the unknown factors are and not always primarily about prediction.

    I think that’s what’s missing in most of the criticisms of modeling — never fails that someone will demand a future projection, even though there are ways of cross-validating the understanding based on the data that is already available. Have to get that understanding first and not have to wait decades to get validation of prediction results.

    Now what’s disturbing about modeling based on machine learning is that even though it may get the right result, it still may not explain the origin or structure underlying the data. So until ML provides the full explanation, the results of ML are essentially “put your blinders on and just do this”.
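    To make the inverse/reconstruction idea above a little more concrete, here is a minimal sketch in Python that recovers the amplitudes of a hidden, invented two-component forcing from sparse, noisy observations; the frequencies, amplitudes and noise level are all made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented "hidden forcing": two periodic drivers with unknown amplitudes,
# observed only through sparse, noisy measurements of their combined effect.
times = np.sort(rng.uniform(0.0, 10.0, 40))          # sparse sample times
true_amplitudes = np.array([1.5, -0.7])              # what we try to recover
frequencies = np.array([0.8, 2.3])                   # assumed known for the sketch
forward = np.sin(np.outer(times, 2.0 * np.pi * frequencies))   # forward model G
observations = forward @ true_amplitudes + rng.normal(0.0, 0.3, times.size)

# Inverse step: find the amplitudes that best explain the observations.
estimated, *_ = np.linalg.lstsq(forward, observations, rcond=None)
print("recovered amplitudes:", np.round(estimated, 2))   # close to [1.5, -0.7]
```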

  7. jeangoodwin says:

    “Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing.”

    The language of “must” is unfortunate, because it does suggest the need for an enforcement mechanism–“you must do this, or else [what?].” But I thought the “musts” were just an adaptation to the genre of manifestos, which requires a confrontational stance and a call to action. (That’s from a paper by a friend of mine: Environmental Manifestos, https://www.jbe-platform.com/content/journals/10.1075/jaic.18036.rod).

    I thought this point in particular was primarily articulating well-established ethical principles. If so, no extra enforcement mechanism is needed. Ethical breaches by modellers & politicians get punished by criticism, bad reputation, bad conscience (one hopes), etc.

    Why bother to state ethical principles that are well known in the modelling community? Maybe because modellers are not the only or even primary audience of the “manifesto.” I can see using this essay in an undergrad class as a discussion-starter, for example.

  8. dikranmarsupial says:

    Erm… doesn’t Prof. Pielke jr frequently complain about discussions being policed and consensus enforcement? Who exactly does he want to police “politicians must not be allowed to offload accountability to models of their choosing”? I read that as a requirement on modellers to call out politicians cherry picking? Who else is expert enough for the job?

  9. Jean,
    Yes, it is possible that modellers are maybe not the only audience. I have often thought that we should maybe spend a bit more time explaining the scientific method to policy makers (so that they can better understand when to trust something, and when to be a bit more skeptical) rather than trying to explain how researchers/scientists/modellers should engage with policy makers. It did, however, seem that the manifesto focused quite a lot on what modellers should do and didn’t maybe consider that modellers already mostly know this and there are reasons why they don’t always follow these principles. To be fair, I’m aware of modellers who I think trust their models too much. I’m aware of models that I think have too much complexity. I’m aware of papers where uncertainties weren’t properly highlighted. However, modellers are human and some are more confident than they should be and some are less confident than they should be. There’s also room for disagreement.

  10. Dikran,
    As you probably are aware, Prof. Pielke’s position in this context is not always entirely consistent (in my opinion, of course).

  11. brigittenerlich says:

    Ken, yes, when I read the core commandments to some modellers, they just shook their heads and said ‘we know all this’, it’s difficult etc…. That’s why I think some dialogue with real modellers might be useful, public engagement etc. etc.

  12. Brigitte,
    Indeed, I think it would be very useful if there was more dialogue between those who critique how we do research and the researchers they’re critiquing. One potential issue might be that some seem to regard themselves as observers who should avoid biasing their observations by engaging with their subjects.

  13. dikranmarsupial says:

    ATTP consistently directional? ;o) (IMHO)

  14. I just looked through the author affiliations and I can’t find anyone who definitely does modelling. There was an economist who might have done some modelling. There was someone involved in epidemiology who might have done some modelling. Most, however, seemed to be in disciplines that aren’t typically associated with doing modelling (at least, not what I would regard as modelling).

  15. Willard says:

    FWIW, Deborah’s a philosopher of science who specializes in philosophy of statistics:

    https://www.phil.vt.edu/dmayo/personal_website/

    I don’t see any specialist in hubris.

  16. I realise that I may be sounding like I’m suggesting that people who don’t do modelling shouldn’t comment on modelling. I don’t think that at all. However, I do think it’s important to understand why modellers may make certain choices. My general impression is that this manifesto fails to recognise that how we might want to do modelling doesn’t always survive the actual realities of doing modelling.

  17. brigittenerlich says:

    This manifesto is about making people engage in ‘responsible modelling’.
    One form of responsible research and innovation, endorsed by the ESRC, is called AREA:

    “Anticipate – describing and analysing the impacts, intended or otherwise, (for example economic, social, environmental) that might arise. This does not seek to predict but rather to support an exploration of possible impacts and implications that may otherwise remain uncovered and little discussed.

    Reflect – reflecting on the purposes of, motivations for and potential implications of the research, and the associated uncertainties, areas of ignorance, assumptions, framings, questions, dilemmas and social transformations these may bring.

    Engage – opening up such visions, impacts and questioning to broader deliberation, dialogue, engagement and debate in an inclusive way.

    Act – using these processes to influence the direction and trajectory of the research and innovation process itself.”

    When you look at that ‘manifesto’ of RRI you can see that you should only ACT or INDEED tell somebody else to ACT in certain ways (in this case tell people to change their behaviour), if you first have anticipated, reflected, and engaged – which itself entails broader deliberation, dialogue and debate in an INCLUSIVE way, i.e. include those people you think should do modelling more responsibly. What one could call responsible manifesto writing….

  18. Brigitte,
    Interesting. I often get the sense that those who comment on research practices think that they’re somehow outside of this framework. Given that we’re all academics, I don’t really see that this is the case. Any suggestion that researchers should behave in some different way should apply to those making the suggestion as well as to those they’re targeting.

  19. Joshua says:

    jean –

    > The language of “must” is unfortunate, because it does suggest the need for an enforcement mechanism–“you must do this, or else [what?].” But I thought the “musts” were just an adaptation to the genre of manifestos, which requires a confrontational stance and a call to action….

    The “musts” take place in a context. It’s ironic that the authors spell out “musts” for modellers – but avoid a “must” (from my perspective) – which is to lay out the context in which they are describing the “musts.”

    That was probably a word salad. Let me try to work it out some more (hopefully not making it worse): This article comes from within a political context where there are a lot of people who are pointing fingers at “modellers” for scare-mongering in an illegitimate effort to enforce “draconian” measures to “lockdown” society, either out of “panic” or out of any variety of nefarious motivations (including in the States, an effort to hurt Trump’s political fortunes).

    So there’s a lot of scapegoating and motivation impugning directed toward modellers. And into this context arrives a manifesto written about what modellers “must” do – with an obvious implication that in fact, “modellers” (in an unspecified %) don’t currently do all these things that they must do.

    That doesn’t mean that the authors shouldn’t lay out a list of what they think is important for modellers to do. And it’s just fine to engage in a discussion of whether or not modellers as a group fail to meet those criteria as much as they should.

    But if you’re going to lay out a rhetorical frame of “musts” with the implication that there is some broad failure to comply with that list, within a particular political context, then you should be explicit about your approach to that context. In other words, you “must” be specific as to all those questions of uncertainty, so someone can reality check your angle into the discussion.

    Is there not an irony there?

    In particular, I love the irony of many people who are attacking the very enterprise of modelling in the context of Covid-19 – some of whom have years of practice in attacking modellers (and academics more broadly) in a wide range of topics including, notably, climate science – who have nothing critical to say about Trump constantly claiming that he’s saved millions of lives with his decisive action (the amount of millions is ever-escalating) – a claim that is obviously based on the work of modellers.

    Nothing new in that kind of hypocrisy – but it serves well to illustrate why it is inappropriate, IMO, for people to criticize modelling in this context without actively and explicitly engaging in an interrogation of the context.

  20. Joshua,
    You’ve articulated that better than I could have, but I did ponder the irony of a manifesto that highlights how modellers need to be conscious of the way they frame their models, while not really seeming to give much thought to the implications of how they’d framed their manifesto.

    Something else I wondered was how some of the authors would respond if a modeller was severely censured for projecting more certainty than their model deserved. Would they regard it as appropriate or would they suddenly play the academic freedom card?

  21. “Pandemic politics highlight how predictions need to be transparent and humble to invite insight, not blame.”

    Seriously, to those who were authors AND based in the US, our homeland screwed up really badly, is still screwing up very badly, our homeland only has one modeler, Small Hands, and that modeler is not listening to what anyone has to say about anything.

    To live in the US, to see others from the US preaching about models, the so-called pandemic politics, Small Hands, Festivus and the Airing of Grievances, my head hangs so very low right now, have they no shame? 😦

  22. From reference 15 of the SOM …

  23. jeangoodwin says:

    ATTP, one audience I have in mind for the Manifesto is undergrads. I’ve been wanting for a while, and even more since March, to put together a course called something like Modelling: Critical Thinking & Communication. Entry level, larger enrollment. Non-STEM majors would learn about the kinds of questions they should be asking to probe models that they encounter, used or abused, in policy arguments. STEM majors would learn how to communicate what they know to nonspecialist audiences–which basically means answering all those questions in advance. By the end, everyone would be able to use words like “sensitivity” and “boundary conditions” a bit more cogently.

    But I haven’t gone forward with this, since I’m missing resources: in addition to things like a modeller-colleague to co-develop the course and some “spare time”, there aren’t a lot of readings/tools/resources that would work. The Manifesto would–it’s on an issue that students will recognize for at least a few years, it’s written at the intelligent layperson level, it pretty much says some things that are well known (to me, that’s the biggest critique of the piece) in vivid language, and it has a couple of claims so questionable that a bright undergrad will call them out. Which is as it should be, since critical thinking is an aim of the course.

    What resources would y’all suggest? They need to:
    – stick with the big picture, not your fields’ latest squabbles
    – be decision-relevant in some way
    – mostly fall within US undergraduates’ background knowledge, and if there are technical sections, they need to be cut-able without too much harm
    – overall, represent various approaches to modelling in diverse disciplines
    – short! and as my students say, “fun”

  24. Willard says:

    Jean,

    Nice to see you here.

    You might like Eric’s book:

    https://andthentheresphysics.wordpress.com/2019/01/29/erics-memes/

    You can contact him over the Twitter, where he’s fairly critical of how modelers dealt with the pandemic. He also liked the manifesto.

    Cheers.

  25. angech says:

    We should be honest about model assumptions.
    We should acknowledge that there are almost certainly some unknown factors that models might not capture.
    We should be careful of suggesting that model results are more accurate, and precise, than is actually warranted.
    We should be careful of thinking that a complex model is somehow better than a simple model.
    Essentially, we should be completely open and honest about a model’s strengths and weaknesses.

    Except?

    I find this all very confusing.
    Doubt and uncertainty plague us all and models help to alleviate it in a small way.
    Now they are saying and you are agreeing that a small dose of skepticism is needed?
    Why now?
    Why has this part of the message been left in the dust for so long?

    Just a small dose of humility instead of hubris helps everyone.

    I agree with ATTP though that it is better not to use the word must.
    Telling people they must do something is a sure way of ruining any discussion, fostering both rebellion and disbelief.

  26. izen says:

    I agree with Joshua on this. The context in which the ‘Must’ injunctions are made has implications.
    Further than that, they allude to aspects of how society treats scientific information from models or research that carry significant ideological baggage.

    Take the quote ATTP highlights,-

    “Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing”.

    The first half of this carries the clear implication that modellers are prone, at risk, and may frequently project more certainty than their models deserve.
    It is very close to a ‘Husbands must not beat their Wives.’ type statement.

    The evidence that modellers, (as opposed to the media and politicians) are prone to projecting more certainty than their models deserve is profoundly lacking.
    It is a veiled accusation without supporting evidence.

    While the second half,-
    ‘politicians must not be allowed to offload accountability to models of their choosing’
    addresses a problem which, when extended beyond politicians to any powerful group with economic or ideological biases, is widely recognised as a common and pernicious failing in our public discourse.
    Whether it is on climate models, EEA, or COVID19.

  27. Jean,
    Interesting-sounding course. Willard’s suggestion is a good one. Off the top of my head, I can’t think of any suitable resources. Some of what the manifesto suggests seems perfectly fine, so it would probably be a decent resource for a course like that. Might be an interesting thing to interrogate a bit in such a course. For example, why might modellers not always carry out full uncertainty and sensitivity analyses? Could we – in an academic environment – enforce some kind of set of rules about how models should be used? If we could, should we?

  28. Ben McMillan says:

    I find the approach of the OP bending over backwards here to give credit to these authors is odd. The article reads like the usual axe-grinding (basically agreeing with Joshua+Izen). Models getting a result you don’t like doesn’t mean the whole edifice of science is collapsing. And the idea that making decisions under uncertainty is ‘post-normal science’; well, welcome to the human condition.

    I think ‘there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers’ is misleading and unreasonable (not even wrong). In fact, many are perfectly well-known enough for the purposes of public policy. You could more reasonably argue that in March, policy needed better estimates of, say R0, but even that is a bit of a stretch.

    If something has an uncertainty bar attached, that doesn’t mean it is a ‘known unknown’. Just because some contrarian has a hugely different answer (say of mortality) doesn’t mean all bets are off. It just means you need some way of combining and filtering information.

    The introduction is weird, because it conflates individual uncertainties of estimates with the variation between different groups/models estimates. And the politicians choosing the result they like, and ignoring error bars has nothing much to do with modelling, and can’t be fixed by modellers.

    None of the ‘5 ways’ suggested appear to be relevant to the ‘problems’ they point out in the introduction. E.g., quantifying uncertainty properly gives you a wider error bar, which is not exactly helpful if, like the authors, you find imprecise estimates offensive.

  29. Ben,

    I find the approach of the OP bending over backwards here to give credit to these authors is odd.

    It’s one of my many failings 🙂

  30. Chubbs says:

    A manifesto for model critics: “critics must not project more certainty than their expertise deserves” or “anyone turning to a model critic for insight should demand an analysis of the critic’s understanding and reaction to the model results”

  31. dikranmarsupial says:

    Ben writes “ E.g., quantifying uncertainty properly gives you a wider error bar, which is not exactly helpful if, like the authors, you find imprecise estimates offensive.”

    Or if you are happy to exploit statements of uncertainty to provoke and cause mischief, e.g. Prof. Pielke’s How Many Findings of the IPCC AR4 WG I are Incorrect? Answer: 28%. If the IPCC thought that all of the predictions would pan out, there would be no reason to make them probabilistic (and indeed take great care in standardising and explaining their method of presenting uncertainties). Not very “honest broker” IMHO.

    The error bars on the CMIP projections are likely to under-represent the true uncertainties, but (i) they are already broad enough to encompass the observations and (ii) how would broadening the uncertainties affect policy (uncertainties go in both directions but the loss is super-linear, so the current error bars are more likely to understate the need for action on climate change).
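    To illustrate the super-linear-loss point, here is a minimal sketch assuming a purely illustrative quadratic damage function and a Gaussian spread in projected warming (both assumptions, not outputs of any actual model): widening a symmetric error bar around the same central estimate raises the expected loss, so broader uncertainties tend to strengthen, not weaken, the case for action.

```python
import numpy as np

rng = np.random.default_rng(1)

# Purely illustrative convex damage function of warming (arbitrary units);
# the quadratic form is an assumption, not a result from any model.
def damage(warming_c):
    return warming_c ** 2

central_estimate = 3.0                  # same central estimate throughout (degC)
for sigma in (0.5, 1.0, 1.5):           # progressively wider symmetric error bars
    warming = rng.normal(central_estimate, sigma, 1_000_000)
    print(f"sigma = {sigma:.1f} degC -> expected damage ~ {damage(warming).mean():.2f}")

# Because the loss is convex (Jensen's inequality: E[w**2] = mu**2 + sigma**2),
# widening the symmetric uncertainty raises the expected loss even though the
# central estimate never changes.
```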

  32. Dikran,
    Yes, that’s one of Roger’s classics.

  33. dikranmarsupial says:

    Yes, someone that demonstrates such a fundamental lack of understanding of probability and uncertainty (especially in the comments) really shouldn’t be putting their name to a manifesto telling modellers to be more careful about uncertainties. I would hope his co-authors would raise an eyebrow or two had they seen that blog post!

  34. The waybackmachine version of Roger’s post includes the comments where James tries to explain the basics to Roger. For example, if I say that if I roll a 6-sided die I will *probably* get between a 1 and 5, my statement isn’t incorrect if I roll a 6. In some sense, if the IPCC presented one-sigma uncertainties on all their estimates, they would be more wrong if the actual result lay within their stated ranges a lot more than two-thirds of the time than if it ended up outside their uncertainty ranges around one-third of the time.
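    A minimal sketch of that point, using made-up but perfectly calibrated one-sigma statements: roughly a third of them “fail” by construction, and a forecaster whose statements failed much less often than that would actually be the miscalibrated one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up set of perfectly calibrated probabilistic findings: each one states
# a one-sigma (~68%) interval for a quantity whose true value really is drawn
# from the distribution the forecaster assumed.
n_findings = 100_000
standardised_error = rng.normal(0.0, 1.0, n_findings)
outside = np.abs(standardised_error) > 1.0   # outcome fell outside the stated range

print(f"fraction of findings that 'failed': {outside.mean():.2f}")
# ~0.32 -- about a third of perfectly honest one-sigma statements do not pan
# out, which is exactly what the forecaster said would happen; a much smaller
# failure rate would itself indicate miscalibration.
```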

  35. dikranmarsupial says:

    Comment 16 is probably the worst:

    16. Roger Pielke, Jr. said…
    Over at James Annan’s blog he provides a telling statement:

    “The obvious elephant in the room that Roger cannot bring himself to acknowledge is that the statement is correct irrespective of the outcome of the roll.”
    http://julesandjames.blogspot.com/

    In other words, by equating IPCC predictions with rolls of a die, he is implying that whatever happens in the real world, the IPCC is correct, about everything.

    Infallibility doesn’t even work for the Pope.
    Fri Aug 12, 08:01:00 AM MDT

    This is an obvious adversarial non-sequitur. If you use a thought experiment to illustrate a particular point, that doesn’t mean the two situations match exactly. In this case with a fair die there is only aleatory uncertainty but no epistemic uncertainty, and nobody is making that claim about climate. Roger was missing the point that he might reasonably suggest that 28% of the IPCC’s projections may not pan out, but that doesn’t mean that 28% of their findings are incorrect. Sadly some people like to dish out criticism but are highly resistant to it when it is applied to them; but that is human nature and to be expected.

    Overstating your opponent’s position or over-extending their arguments seems familiar from Schopenhauer’s list of stratagems…

  36. brigittenerlich says:

    Jean, Some books to look at regarding modelling – old stuff – but some might still be useful
    Harré, R. 1960. Metaphor, model, and mechanism. Proceedings of the Aristotelian Society 50:101-22.
    Harré, R. 1970. The principles of scientific thinking. London: Macmillan.
    Hesse, M.B. 1966. Models and analogies in science. Notre Dame, IN: University of Notre Dame Press.
    Hughes, R.I.G. 1997. Models and representation. Philosophy of Science 64:325-36.
    Ravetz, J. 2003. Models as metaphors. In Public participation in sustainability science: A handbook, ed. B. Kasemir, J. Jäger, Carlo C. Jaeger, and M. T. Gardner, with a foreword by W. C. Clark and A. Wokaun. Cambridge, UK: Cambridge University Press.
    Wartofsky, M.W. 1979. Models: Representation and the scientific understanding. Dordrecht: D. Reidel.
    Yearley, S. 1999. Computer models and the public’s understanding of science: A case-study analysis. Social Studies of Science 29(6):845-66.

  37. brigittenerlich says:

    Oh, and this, which I have only just discovered https://essaysconcerning.com/2015/05/24/in-praise-of-computer-models/

  38. There are also a series of posts on Bryan Lawrence’s blog about models. They start here. Bryan has a background in meteorology and climate, and manages the delivery of Advanced Computing (HPC, HPC support, and Data Centres) for the UK National Centre for Atmospheric Science (NCAS).

  39. jamesannan says:

    I knew I was making the right decision not to bother getting involved with this 🙂

    My Pielke Prior scores again!

  40. Willard says:

    These two entries also contain references. The first is on the use of models in science:

    https://plato.stanford.edu/entries/models-science/

    There is a big bibliography and some real examples at the end.

    The second is on computer simulations in science:

    https://plato.stanford.edu/entries/simulations-science/

    It’s written by Eric.

    For an entry point on what is a model, besides Mary Hesse’s work that Brigitte suggested earlier, I rather like Max Black’s Models and Archetypes, but it’s hard to get.

  41. Thanks Brigitte for the shout out for my essay.

    The manifesto says a number of things that are obviously in the mind of experienced modellers (like considering sensitivity analysis), so it seems naive at best, or thoroughly condescending. My wife and I often call out SOTBO! (when a Statement Of The Bleeding Obvious is uttered). The manifesto is full of SOTBOs.

    ATTP, you mentioned about some fields not using modelling – I am curious if there really are any that don’t use modelling of some sort. Maybe not always computing sets of interacting differential equations, but using computers to assist in some way in some parameter space, to develop a testable evolution of some system.

    The best part about science in my view, when looking at its history, is the posing of interesting questions; and then of course using ingenuity or brute force to try to get to an answer.

    As an illustration, there is an interesting question: ‘at what level of renewables penetration does the need for serious levels of energy storage kick in?’ … a bit vague maybe, but I found a paper that asked essentially this question; as Ken Caldeira summarises here:
    https://kencaldeira.wordpress.com/2018/03/01/geophysical-constraints-on-the-reliability-of-solar-and-wind-power-in-the-united-states/
    The model is deliberately very simple, so includes a lot of simplifying assumptions (e.g. “an ideal and perfect continental scale electricity grid, so we are assuming perfect electricity transmission”). When would a scientist not state their assumptions? Anyway, the answer is about 80% penetration.

    I had a twitter exchange with Christopher Clack who is doing very detailed modelling on the transition to clean energy – including realistic grid etc. – and he said he’d found that the 80% result does seem to be quite fundamental; at different scales too (I understand that with falling storage costs, the last 20% could be in reach by mid century, to get us to 100% clean energy, but 80% is a huge win).

    On climate models, the question ‘What is ECS?’ is an important but narrow question with an uncertainty range that infuriates some, and may never be resolved to their satisfaction. AR5 (2013) stated a “medium confidence that the ECS is likely between 1.5 °C and 4.5 °C”. I don’t find the lower bound reassuring; in fact, I find the uncertainty range quite scary. As a politician, do I plan based on the 1.5 °C or the 4.5 °C value? Models can’t answer that question.

    The greater granularity and breadth of the models (e.g. AgMIP) has enabled, and is enabling, other interesting and important questions to be answered: are heat waves becoming more frequent in the UK (and how much so by 2050, 2100)? How fast might the southern third of Greenland melt? Is wheat production in the USA seriously threatened over the next 30 years? What is the likely level of migration for worst-case scenarios by 2100? … These are questions that are easier to communicate and with more obvious implications than abstract global measures.

    Interestingly, with the Covid-19 pandemic, people are having to navigate risk and uncertainty (some better than others, obviously). People are having to think both fast and slow right now. Maybe this is the training course we all need to better understand what the IPCC means for us.

    Surely the injunction to every scientist is: Ask interesting questions!
    (and models are ‘merely’ a tool in helping to arrive at *an* answer) …

    I didn’t see that in the manifesto.

  42. Richard,
    Yes, there are a number of suggestions that just seem bleeding obvious. Modellers are human, so are no more, or less, honest than any other group of people. They’re mostly aware of these issues. Some are better at highlighting model weaknesses and strengths than others, but there’s no reason to think that modellers are less open and honest than any other group of researchers. The problem with framing the manifesto as they did is the implication that these are real problems that need to be addressed, rather than a set of things that most modellers are already aware of and are doing their best to meet, while recognising that meeting these ideals is harder than it may seem and that there may well be valid disagreements about some of these issues.

    ATTP, you mentioned about some fields not using modelling – I am curious if there really are any that don’t use modelling of some sort.

    Yes, when I wrote that I did wonder if someone might query this. I was thinking of the kind of models that I suspect the manifesto was referring to (computational models) but there probably are very few fields (if any) that do no modelling of any kind. It’s hard to know where to draw the line between something that’s a model and something that is not.

  43. Richard,
    Actually, your comment reminded me of something else I was thinking. As you say in the end, science is about asking interesting questions. This often involves modelling, and – as Gavin Schmidt highlighted – the interesting questions often involve models for which there is just enough resource available (i.e., if you wanted to do a full uncertainty/sensitivity analysis, you might not be able to try to answer the interesting question).

    Of course, this doesn’t stop governments from requiring, for example, that we have a climate model, or an epidemiological model, that has undergone this full testing. However, this feels more like a group of modellers providing a resource for a client than a group of modellers trying to answer interesting questions at the cutting edge of science.

  44. Joshua says:

    > It’s hard to know where to draw the line between something that’s a model and something that is not.

    Yes. Along those lines I have a question – which again goes back to what I see as an ironic twist to the article. What kind of understanding in this kind of situation doesn’t rely on modeling? AFAIAC, all understanding relies on modeling. I’m not just being pedantic here – because I think this is a key issue w/r/t the model/modeller-bashers out there.

    Let’s take this from the article:

    > Modellers must not be permitted to project more certainty than their models deserve;

    First, who determines how much certainty the models deserve? Who does that without using some sort of model? And if they use a model, have they been permitted more certainty than their models “deserve?”

    I think of the modeling situation that’s the elephant in the room. The IC model of the outcomes of the pandemic. Did the modellers project more certainty than their models deserved? That seems to me to be a question that necessarily requires a “model” of some sort to evaluate. And then we get stuck in a recursive loop. Did those people critiquing the IC model lay claim to more certainty than they deserved? And then do those critiquing that critique lay claim to more certainty than they “deserve”?

    The whole use of “deserve,” IMO, is a rhetorical trick – used to shift responsibility. Whose responsibility is it, precisely, to determine how much certainty a model “deserves”?

    With the IC case in particular, much of the criticism has come from those who ignore the full range of the outputs of the model, and leverage the “projection” vs. “prediction” aspect to wage political warfare. Here’s a good discussion of that (if you don’t trust the ones that have been had here), in case anyone’s just arrived back from a couple of months stranded on a desert isle:

    https://statmodeling.stat.columbia.edu/2020/05/08/so-the-real-scandal-is-why-did-anyone-ever-listen-to-this-guy/#comment-1368396

    Second – what kind of understanding of this issue doesn’t rest on a model? IMO, there is a question as to whether the authors of the article are selectively applying a set of criteria to point fingers at specific modeling efforts to advance an agenda more accurately framed as an issue they have with specific modeling by specific modelers. If they dress this question up in the disguise of a “just asking questions” frame, when actually they’re pursuing advancing their own models relative to those of others, they do a disservice to true intellectual interrogation. In fact, their work is counterproductive.

    So again, I ask how they come to their list of “musts” for modeling without employing a model? And if they have employed a model, then have they followed their rules?

    I suppose Willard would have an answer. Although I’d prolly not understand it. If he doesn’t tell me to scratch my own itch. Or stop asking questions.

  45. Here’s another thought I had about the suggestion that [a]nyone turning to a model for insight should demand that such analyses be conducted, and their results be described adequately and made accessible. Imagine you’re quite a well-known Professor at MIT who has a model that suggests that the ECS is around 1K. The model is nice and simple (and probably wrong) but you can do a full uncertainty and sensitivity analysis. Your model is now fit for purpose. On the other hand, a much more complex model suggests that the ECS is greater than 3K. The model is so complex that you can’t really do a full uncertainty and sensitivity analysis, so it is not fit for purpose. Is this a scenario that would be of benefit to policy makers? I don’t think so.

    In a sense, the more stringent the criteria we apply in order to make some piece of research suitable for policy makers, the more that ideological researchers can play the game. If you have a very strong ideological bias, you may make sure that your model satisfies the criteria for being considered by policy makers. If you’re more interested in simply asking the interesting questions, you may decide that you’d rather do the interesting science than make sure you’d ticked all the boxes that make your model suitable for informing policy.

  46. Joshua says:

    > If you have a very strong ideological bias, you may make sure that your model satisfies the criteria for being considered by policy makers.

    Or, if you have a strong ideological bias, you make sure to come at a model from an angle which highlights how a model is wrong (which they all are) while employing your own model from another angle while ignoring how your model is wrong.

  47. Joshua,
    Yes, that too 🙂

    As to your earlier comment, I think it does highlight a number of key issues. Who would decide if modellers had projected too much certainty? The thing I find amazing is that a number of the authors are people I would associate with the phrase “science is social” and yet they don’t seem to have thought through the realities of implementing their manifesto. I wonder what some of them would say if we published a manifesto demanding that researchers not publish climate denial. I doubt that they would be on board 🙂

  48. Willard says:

    > I suppose Willard would have an answer. Although I’d prolly not understand it. If he doesn’t tell me to scratch my own itch. Or stop asking questions.

    He’d tell you to turn your questions into assertions, e.g. norms about models ought to follow some modeling norms, as they rely on some kind of model. That way readers would spot the tu quoque immediately and see the underlying assumption behind the question.

    He could also tell you to cut any appeal to models, as we may use norms outside modeling, e.g. the authors do not make explicit the norms they themselves follow. He’d be more sympathetic to the second line of inquiry, as it leads us (or at least him) to wonder about the ethics of writing manifestos. Here would be some of his modest proposals:

    [MP0] Before writing one, make sure you understand what’s a manifesto.

    [MP1] If your pro-tip appeals to Goldilocks, delete it.

    [MP2] If your pro-tips are not directed at your own community, try to find underwriters before publishing your manifesto.

    [MP3] Make your manifesto short, constructive, and actionable.

    [MP4] Show examples of what you want, not examples of what you don’t want.

    But I won’t tell any of this to Joshua, first because I won’t start to write a manifesto against manifestos, second because it won’t be understood by him, and finally because he has other TV programs to watch.

  49. Joshua asked

    “… whether the authors of the article are selectively applying a set of criteria to point fingers at specific modeling efforts to advance an agenda more accurately framed as an issue they have with specific modeling by specific modelers.”

    Which is presumably why they did not write their manifesto in 2008, after the financial crisis
    https://knowledge.wharton.upenn.edu/article/why-economists-failed-to-predict-the-financial-crisis/

    Biding their time I guess …

  50. Much better title: Many ways to ensure that people serve global society: an understatement

    UK=7, US=6, FR=2, IT=2, NL=2, AU=1,NO=1, SP=1,Total=22

    So, no Asia, no Africa, no South America; one North America (6), 5 Europe (14), and one Australia (1).

    Essentially representation by just a few percent of the global population.

    Made by-for-published-in Eurotrash, 😦

  51. jeangoodwin says:

    Thanks, all, for the suggestions–very helpful for preparation.

    Since it should have been the weekend for the Pride parade, I’ll leave you with another in/famous manifesto, from Queer Nation and Act up, 30 years ago:

    https://www.historyisaweapon.com/defcon1/queernation.html

  52. Steven Mosher says:

    Manifesto?

    Hmm. I have a much more practical list of things.

    1. Specification.
    2. Standard benchmark tests
    3. Reports.

    Specification. If the models are to be used for policy SOME sort of specification should be required.
    Today, for example, the models produce results that are within +-1.5C of the actual temperature.
    It would be a good thing to establish a specification: Thou shalt get absolute T correct to +-1C
    Pick a number, measure and improve. The democracy of models is not a good thing if some of them are wildly wrong.

    Standard benchmarks. There are tests (4xCO2, 1% CO2 increase) that are a good beginning. But folks should be doing and PUBLISHING standard benchmarks. How well are temperature, precipitation,
    sea surface salinity, and clouds hindcast? Specify acceptable performance. Benchmark and

    Reports: How did you do on benchmarks? How did you tune the model? Show tuning before and after. What changes did you make to the model? Benchmarks before and after.

    And yes we will have to pay for this. Having 100+ models is a bit silly. Focus, down select,
    standard metrics, measure, report, improve. And not on an IPCC schedule

  53. jacksmith4tx says:

    Steven,
    Thanks for the clarity.
    I hear Jeff Bezos is dropping some serious coin on climate and environmental issues and this to-do list would be a nice addition. Considering Amazon’s significant technical resources and global influence the resulting product will likely be policy relevant.

    Why don’t you pitch this to the Berkeley team and make a formal pitch to Bezos? Just a few respected co-signers would lend it that little extra imprimatur.

    If you are keeping up with some of the AI modelers they have some projects in work:

    Newsletter:
    https://mailchi.mp/eb8660474930/climate-change-ai-newsletter-february-21-4013613?e=da81221187

  54. dikranmarsupial says:

    SM wrote “ It would be a good thing to establish a specification: Thou shalt get absolute T correct to +-1C”

    However, it would be irrational to set a threshold that was narrower than the interval due to internal climate variability … which you need a model to estimate. It isn’t like software, where the user can specify what they want. The climate system provides the specifications and society has to make decisions under uncertainty. If that uncertainty is high enough for the decision to be tricky, that is our bad luck, and neglecting uncertain predictions altogether because they are uncertain *is* making an irrational decision.

  55. Dave_Geologist says:

    Thanks for the Gelman link Joshua. I had read it before but it’s worth re-reading now the comments have grown. Some casual observations:

    1) The people who comment most, with most certainty, have read the least. Not just the Ferguson report (continuing to claim a single IFR even after they’ve been pointed to an actual page in the actual report with the table of age-stratified IFRs; claiming that the methodology and origin as a flu model was not declared when there are two references in the report to the original influenza model which go into great detail, far more useful detail for the educated layman than reams of C++), but the comments upthread.

    2) Cherry-picking: only quoting the top of the range and not the full range (probably a pre-picked cherry by the journalist or blogger they’re parroting – see (1)).

    3) Ignoring conditionality: if you don’t do X we’ll have Y deaths, we did X and had many fewer than Y deaths, so you were wrong.

    4) There are a lot of people of little brain who know how to use the Internet. But I guess we knew that already.

    5) After getting bored I skipped to the end to find this choice howler: see (1).

    > Um. Sweden much? Considered a better studied disease and its protocols, like perhaps TB? What exactly is Ferguson’s field of study? Epidemiology? Odd that.
    > https://en.wikipedia.org/wiki/Neil_Ferguson_(epidemiologist)

    > Wiki lists him as an epidemiologist. His academic career is all physics, theoretical physics, math and philosophy. A close friend died of AIDS so he flipped his computer models from “doctoral research investigated interpolations from crystalline to dynamically triangulated random surfaces” to modeling infectious diseases?

    > Someone here cuts Ferguson a break because he’s quoted as predicting 50 to 50k deaths over BSE when the media only published the upward bound. That seems odd as well. What is the point of such a prediction? It’s also accurate to predict that between 10 and 7.8 billion people will die in the next 50 years.

    > It misses the fact that Ferguson’s predictions and the subsequent destruction of small farmers in GB more likely caused more farmer suicides than deaths from BSE.

    No. It misses the fact of (4). And not only didn’t the poster read the thread (it was pointed out that a range of 50 – 5,000 has very different policy implications to 50 – 50,000 or 50 – 500,000), or the reports or papers, (s)he didn’t check how Sweden is actually doing. Already overtaken France in deaths per million population, soon to overtake Italy and Spain I expect, and possibly the UK (I’m looking at you, Boris). Heading for top spot other than city-states which would be better compared with New York or London than with large countries, and Belgium. And heading for a similar GDP fall as its Nordic neighbours.

    BTW don’t fall for the “we missed care homes” BS. Last I saw about half of Sweden’s deaths were in care homes, just like in the UK and the rest of Europe. To match its Nordic neighbours’ death rate in the general population Sweden’s deaths would need to be 90% in care homes. They’re not. And Sweden knew they’d have to protect care homes if they went for herd immunity. And failed. Shows how hard “protect the vulnerable” really is with a population-level epidemic. Probably needs help from unicorns and leprechauns.

    The ad-hom about him being a mathematical physicist is amusing. As per those who diss climate modellers as jumped-up geographers, then diss Kate Marvel because she’s a mathematical physicist not a climatologist. Might come in handy for coding mathematical models, don’t you think?

  56. RE: Absolute Temperatures
    Absolute temperatures and relative anomalies
    http://www.realclimate.org/index.php/archives/2014/12/absolute-temperatures-and-relative-anomalies/

    Essentially, a +/-1.5C difference in absolute temperature = +/-1.5/(273+14) = +/-0.5% difference = small change ≈ zero; for the ~1C delta to date, 0.005 × 1C = 0.005C ≈ zero.

  57. Dave_Geologist says:

    DM and SM, I’m reminded of a head-banging-against-the-wall conversation with can-do management consultants trying to coach us into making a marginally economic field development securely economic. My job was to describe the gas-in-place, its spatial distribution and the uncertainty range.

    Apparently I wasn’t pulling my weight because the engineers had cut costs by lightening the structure, the drillers had cut costs by simplifying the well, the finance people had come up with innovative financing and sales solutions, but I wouldn’t add more gas or narrow the uncertainty range.

    Sorry guys, it’s their job to do clever physical and financial engineering. It’s my job to describe what’s there, Mother Nature in all her uncertainty and her stubborn refusal to change the Truth just because it’s Inconvenient. I can narrow the uncertainty range by drilling another appraisal well, but that will make the economics even worse and is as likely to result in a downward revision of my central estimate as an upwards revision.

    A different attitude applies in wildcatting or more measured exploration. There the geologist is often required to use some imagination and describe what might be there, and quantify the risk that it isn’t there. My employer recognised the requirement for that change in mindset on going from exploration to field development, and included discussion of it in the two-part conversion course they ran (I spent about five years as a tutor on the second part). In some ways that parallels the previous thread: the job is to describe it as it is, regardless of the financial, engineering or societal implications (in the case of extreme-events and human responses or mitigation).

    That’s also true of other areas I worked in like pore pressure, fracture gradient and wellbore stability. When the driller says “can’t you change this, we can’t afford another casing string?”, you have to say “no”. If it’s about safety (PPFG, risk of blowout), they couldn’t over-rule or ignore me, but if it’s financial (WBS, risk of a junked well and sidetrack) they could. In the first case they could go up the chain but they’d have to go to the global technical authority, because my conclusions had already been reviewed and signed off by the regional authority.

  58. dikranmarsupial says:

    D_G yes, the uncertainty is what it is, and we have to work with the information we have if we can’t have the information that we want. Specification is a recipe for ”impossible expectations” as a means of delaying action (i.e. furthering a particular policy agenda) and hence itself needs a manifesto. IMHO the best compromise is for there to be as clear a distinction between science and policy as possible, rather than a blurring of things.

  59. brigittenerlich says:

    Willard, sorry I would have liked to just ‘like’ your comment on Max Black which I only saw today – Black was on my list but I thought people would think, ah it’s that idiot metaphor lady sprouting metaphor again 😉

  60. Pingback: A Very Short And Fairly Understandable Introduction to Models | The unpublished notebooks of J. M. Korhonen

  61. Thanks for the discussion here. I rather like Jean’s idea of building a course or something about what people should understand about models and modelling; I’ve been talking about this for some years as well but never gotten round to actually doing anything about it.

    Is there anyone interested enough in working on a shared resource? I have an elementary “introduction to the scientific method” course coming up again and this would work nicely as a component, given how important models are these days.

    Let me know on Twitter, @jmkorhon_en or here: https://jmkorhonen.net/2020/06/29/a-very-short-and-fairly-understandable-introduction-to-models/

  62. brigittenerlich says:

    Shared resource would be great
    Just seen this by Gavin Schmidt https://muse.jhu.edu/article/758637 – I’ll go on twitter as well

  63. Mal Adapted says:

    As a scientifically meta-literate non-expert, I solicit your opinions of the US National Academy of Sciences’ climate modeling 101 site in this context. TIA.

  64. Steven Mosher says:

    ” Specification is a recipe for ”impossible expectations” as a means of delaying action (i.e. furthering a particular policy agenda) and hence itself needs a manifesto. ”

    it does not have to be. In fact in defense specifications are adjusted according to what uncertainty is achievable. Today 102 different models can get absolute T within 1.5C.
    It is not an impossible expectation for them to get it within 1.4C.
    It is not an impossible expectation for them to NOT grow the error beyond 1.5C.

    the point is a process

  65. Steven Mosher says:

    “RE: Absolute Temperatures
    Absolute temperatures and relative anomalies”

    Since some processes, like melting ice, are temperature sensitive, you need to consider that
    delta T is not the be-all and end-all.

  66. Steven Mosher says:

    “However, it would be irrational to set a threshold that was narrower than the interval due to internal climate variability … which you need a model to estimate. It isn’t like software, where the user can specify what they want”

    I am not suggesting that.

    102 models.
    Some do better than 1.5C

  67. Steven Mosher says:

    The simple question is this:
    if you had 102 models and 100 got the absolute T within 1.45C and only 2 were outside 1.45C,
    would you specify 1.45C?

    I think DK would object to ANY approach to constrain model selection, even if it was rational and logical.

    DK, if you had 102 classifiers, ask yourself if you would junk the worst 5% of them.

  68. dikranmarsupial says:

    “ it does not have to be. “

    No, but it will be. We already have plenty of people wanting to take a “wait until we have more information” approach or requesting proof, which is essentially an informal, unstated impossible specification.

    “ I am not suggesting that.”

    No, of course not, but it would be what happens as it becomes politics rather than just science. Scientists are already working on bounding/constraining uncertainties without the need for specifications.

    ‘I think DK would object to ANY approach to constrain model selection, even if it was rational and logical.’

    No, I’d be wary of it, but if climatologists have a good reason for it, they are the experts and I’d listen to them. Modellers already do reject runs that don’t converge to a sensible historical climate IIRC. Best not to assume you know someone else’s position, better just to ask (especially if it is someone that does try to give straight answers to direct questions).

    ‘DK, if you had 102 classifiers, ask yourself if you would junk the worst 5% of them.’

    Not simply because they were the worst. They are part of a representation of the structural uncertainties, and I’d agree with the manifesto that uncertainties shouldn’t be understated.
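    To illustrate what is at stake in the “junk the worst models” question, here is a minimal sketch using an entirely synthetic ensemble; the assumed (weak) link between absolute-T bias and projected warming is an invention for illustration, not a property of CMIP. Whether a bias cut-off narrows, shifts, or barely changes the projected range depends on that link, which is exactly the structural-uncertainty question.

```python
import numpy as np

rng = np.random.default_rng(4)

# Entirely synthetic "ensemble": each model has an absolute-T bias (degC) and a
# projected warming (degC).  The weak link between the two is an assumption made
# purely for illustration.
n_models = 102
bias = rng.normal(0.0, 1.0, n_models)
warming = 3.0 + 0.3 * bias + rng.normal(0.0, 0.7, n_models)

def summarise(label, w):
    lo, hi = np.percentile(w, [5, 95])
    print(f"{label:18s} mean = {w.mean():.2f}  5-95% range = ({lo:.2f}, {hi:.2f})")

summarise("all 102 models", warming)
keep = np.abs(bias) <= 1.45                 # a Mosher-style specification
summarise(f"{keep.sum()} models kept", warming[keep])

# Whether the cut narrows or shifts the projected range depends entirely on how
# (and whether) the bias relates to the projection -- which is the structural
# uncertainty question the cut-off itself cannot settle.
```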

  69. Dave_Geologist says:

    in defense specifications are adjusted according to what uncertainty is achievable

    You seem to be missing dikran’s point Steven, and mine about the difference between engineering, where you control the specifications, and Mother Nature, where you don’t. It’s something I’m very familiar with in my dealings with drilling engineers. “We can specify our steel to a fraction of a percent, and test that sections of pipe meet those specifications, why do you give an average and a range for the rock strength?”.

    Two reasons. (1) The rocks really do vary in strength, often on a foot-by-foot basis, sometimes by orders of magnitude, and the only way to specify that foot-by-foot is to continuously core the well and perform a $500 measurement every foot. Each measurement takes about 6 hours on one machine, but if you want (2) to be covered, we need to run each sample for a week or two in a more expensive machine. By that time you’ve drilled the well of course, at ten times the original cost, and it’s too late to use the information. (2) The rocks are not linear-elastic and exhibit time- and rate-dependent behaviour. Unless you can tell me exactly how long you’ll take to drill the section, how fast you’ll drill each foot, how fast you’ll rotate the bit, how fast the teeth will wear out, and when and where every unplanned mechanical jerk or disturbance or mud-weight perturbation (actually downhole pressure spikes, driven by those mechanical perturbations) will happen, I can’t model even a fully cored well to your engineering precision. Oh, and you need to tell me the mud chemistry and the efficiency of the skin you build on the wellbore wall so I can model the time-dependent poro-chemo-elastic behaviour. Actually I won’t model it, I’ll pay someone to build an FEM. Normally we’d do it at 5m or 10m resolution and it will cost $0.5M, using average properties and ranges, but if you want it foot-by-foot it will cost $10M and take a year. But that’s peanuts when you already spent a year and $1Bn collecting all that foot-by-foot information that it’s too late to use because you’ve already drilled the well.

    Alternatively, you could take George Box’s advice and do what your predecessors have been doing for generations and design to the uncertainty. Of course if you don’t want any wells to be drilled again, ever, setting an impossible requirement for a valid model would be a very good way to achieve that objective.

    Unless and until we can predict internal variability like El Niño decades into the future, we just have to live with the fact that there is internal variability on top of the forced response, and you can’t expect a model to predict the phase or magnitude of that variability year-by-year. Doesn’t matter whether that internal variability is 0.1°C, 1°C or 10°C. It is what it is, not some made-up target. Of course if it’s chaotic it will be inherently impossible to predict, even if climatologists and funders thought that was a worthwhile goal. (It is of course a worthwhile goal for short-term prediction/medium-term weather forecasting, but that’s a different art which can ignore changes in forcing during the short prediction run and can use much higher resolution.)

  70. Willard says:

    Manifestos may not be what we need right now, or if we do need one, it should target all the institutions involved.

  71. jeangoodwin says:

    Another call for action:

    https://issues.org/real-world-engineering-pandemic-modeling-accountability/

    Modelling is part of the infrastructure for contemporary decision-making, so drawing from engineering practice seems cogent.

  72. Jean,
    I’m absolutely in favour of modellers being completely open and honest about model weaknesses and strengths. One potential issue, though, is that we might not all agree on exactly what these are. There are valid disagreements. However, it is true that most good models have been tested and you have some sense of the region of parameter space in which they’re probably valid. The problem I have with the idea that scientific modellers should learn from engineering is that the number of pandemics, in the case of epidemiology, and the number of planets, in the case of climate, against which you can test your models is a great deal smaller than the number of bridges, or buildings, or planes, against which you can test engineering models. So, in many circumstances this idea just really doesn’t work (in my opinion, at least). This is not to argue against testing models (it’s a fundamental part of model development) but in some circumstances it really simply isn’t possible to test the models in ways that would give you the kind of confidence that we now have in many engineering models.

  73. Brandon Gates says:

    D_G

    > Alternatively, you could take George Box’s advice and do what your predecessors have been doing for generations and design to the uncertainty. Of course if you don’t want any wells to be drilled again, ever, setting an impossible requirement for a valid model would be a very good way to achieve that objective.

    So concisely put, which is good because contrarians can’t hear it often enough.

    But we’re really over a barrel on this one, since we don’t even know what the uncertainty is beyond one-model-one-vote. How does one design to that? My question isn’t entirely rhetorical.
