The role of mathematical modelling

Christina Pagel and Kit Yates have an article in the British Medical Journal (BMJ) on the role of mathematical modelling in future pandemic response policy. It’s part of a series in the BMJ on the UK’s covid-19 inquiry. I have had some concerns about how an inquiry might reflect on the role of mathematical modelling in our response to the pandemic, and I think this article makes some important points that I really hope are considered when people assess that role, both in the response to the current pandemic and in future pandemics.

Mathematical models are really just representations of reality that will always include assumptions and simplifications. They allow us to consider various possible scenarios that can then be used to inform policy making. Models are also continually evolving as more information becomes available, as our understanding of the system improves, and as techniques and computational resources evolve. They’re clearly not perfect, but they are an extremely useful tool when trying to understand what might happen.
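As a concrete illustration of this kind of scenario exploration, here’s a minimal SIR (susceptible–infected–recovered) model, the textbook simplification of epidemic spread. All parameter values below are invented for illustration and don’t correspond to any particular disease, or to any model discussed in the article.

```python
def sir(beta, gamma, s0=0.999, i0=0.001, days=200, dt=0.1):
    """Integrate the SIR equations with simple Euler steps.

    beta: transmission rate, gamma: recovery rate (both per day).
    Returns (fraction ever infected, peak infected fraction).
    """
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I
        new_rec = gamma * i * dt      # I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return r + i, peak

# Two hypothetical scenarios: unmitigated spread vs. a policy that halves contacts.
unmitigated = sir(beta=0.4, gamma=0.1)
mitigated = sir(beta=0.2, gamma=0.1)
```

Comparing the two runs is the sense in which such models inform policy: not by predicting the future, but by showing how outcomes differ across scenarios.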

Models can also be wrong. Sometimes this is a natural part of the process of development, sometimes it’s because there isn’t really enough data to constrain the model, and sometimes it is because modellers make mistakes. Modellers can also sometimes have more confidence in their models, and model output, than is actually warranted. Modellers should be willing to admit when models are wrong, or when they make mistakes, but – like most humans – sometimes find it difficult to do so.

The article also stresses that communication is key. This is partly to make clear the strengths, and limitations, of the models, but also to try to ensure that people understand in what way the models are being used. Models are often used to consider numerous scenarios, few of which will be close to what actually materialises. For example, if a model considers some kind of worst-case scenario and the results are then used to inform policy so that we avoid this scenario, then the model wasn’t somehow wrong.

Similarly, models might consider scenarios that cover a range of possible policy pathways. Again, that we don’t end up following pathways close to many of these scenarios doesn’t make these models wrong. In some sense, a model is never strictly right or wrong. What matters is how well it does when the pathway we actually follow is close to one of those considered by the model.

Of course, even if communication is taken seriously and done well, it’s still worth being aware that there are some who engage in bad faith and who will use supposed model failures to promote their agendas. There is little that can be done to avoid this, but this mostly highlights the importance of trying to communicate clearly about model strengths and weaknesses, the motivation behind the modelling, what assumptions were made, and which results we should regard as reliable.

One final thing I was going to say is that it can be important to highlight the different kinds of systems that can be modelled. As the article says, one issue with epidemiological modelling is the intrinsic inability of most models to capture important facets of human behaviour. This can limit how far into the future one can realistically model. There are other systems for which this is less of a problem, and so one should be careful of assuming that a limitation that applies to one modelling situation applies to all situations.

As usual, I’ve said too much, so I encourage those who are interested to read Christina and Kit’s article.

Links:

Role of mathematical modelling in future pandemic response – BMJ article by Christina Pagel and Kit Yates
Covid Inquiry – A series of articles in the BMJ about the UK’s covid-19 inquiry.

This entry was posted in Research, Scientists, The philosophy of science, The scientific method, Uncategorized.

126 Responses to The role of mathematical modelling

  1. Willard says:

    I liked how the main findings were presented here:

    Many of the available state-of-the-art climate models struggle to simulate these rainfall characteristics. Those that pass our evaluation test generally show a much smaller change in likelihood and intensity of extreme rainfall than the trend we found in the observations. This discrepancy suggests that long-term variability, or processes that our evaluation may not capture, can play an important role, rendering it infeasible to quantify the overall role of human-induced climate change.

    However, for the 5-day rainfall extreme, the majority of models and observations we have analysed show that intense rainfall has become heavier as Pakistan has warmed. Some of these models suggest climate change could have increased the rainfall intensity up to 50% for the 5-day event definition.

    https://www.worldweatherattribution.org/climate-change-likely-increased-extreme-monsoon-rainfall-flooding-highly-vulnerable-communities-in-pakistan/

    Of course contrarians may take away that scientists themselves admit that the Modulz are Stoopid, but then we already know that contrarians always win.

  2. dikranmarsupial says:

    You can’t make predictions without a model. The advantage of a mathematical model is that it makes the model explicit – someone else can take it and run it, find problems with it, and criticise it. This is something you can’t do with a mental model, which is what we otherwise use to form opinions. That can be a bug or a feature, depending on your aims.

    The question for those who don’t like mathematical modelling is what they are going to replace it with, and whether that is amenable to proper scrutiny.

  3. dikran,

    The question for those who don’t like mathematical modelling is what they are going to replace it with, and whether that is amenable to proper scrutiny.

    Indeed, it’s easy to criticise, especially when something almost certainly has flaws. The real issue is what would you do instead?

  4. Math models of pandemics will always be problematic because they can’t account for game theory — trying to second-guess human behavior makes them intractable.

  5. Mathematical models are much better than less rigorous models like Magic Eightball or Ouija Board, but some humans believe that right answers can be divined out of the ether.

    from Pagel et al: “People who find it hard to accurately estimate the speed of disease spread also find it difficult to see the importance of disease control mitigations and are less likely to implement or observe them. ”

    A lot of people are innumerate. Many of them still manage to get elected or appointed as policy makers. That’s kind of a shame.

    I think the paper did not make sufficient mention of the role of economics with regard to modelling of pandemics. The economic impact of lockdowns is significant, and I don’t think we have a model or scale for discussing the economic costs versus the value of lives lost. I think a lot of the politicization of modelling is closely linked to the economic impacts of the public policy suggestions that arise from the modelling. There is a very fine line, or perhaps no line, that can be walked by science communicators and policy makers where criticism and conflict won’t arise. This has been abundantly clear since the science began to accumulate indicating that smoking tobacco might have deleterious health effects.

    I think we might shorten up “all models are wrong, some are useful” to a more positive frame and just repeat “models are useful” every time the discussion arises. There is nothing that can be said or shown through the modelling that cannot be disputed by parties whose political and economic power and position will be reduced by sound policies that are suggested by running the models.

    Harvesting grapes again today! Yesterday’s pickings are becoming jelly today.

    Lovely weather, in the 70s after a couple days of rain! I am getting excited about getting out for mushroom foraging, but nothing much popping up so far. I think the weather needs to get a little colder and wetter.

    Cheers

    Mike

  6. Small,

    I think the paper did not make sufficient mention of the role of economics with regard to modelling of pandemics. The economic impact of lockdowns are significant and I don’t think we have a model or scale for discussing the economic costs versus the value of lives lost.

    It sort of did address this. The article says:

    SAGE was not charged with the economic modelling of policy options and that is beyond the remit of this article. The inquiry might like to consider separately whether and how economic modelling could have been part of the SAGE remit.

    I agree that it would have been worth looking at this in more detail, but if SAGE were not tasked with doing so, then there’s not much that they could have done. My general view is that it’s the responsibility of policy makers to decide on the make-up, and remit, of the groups from which they get advice. If they don’t include some things that they probably should have, then it’s mostly their fault, not that of the groups who were giving advice.

  7. thanks, I missed that piece. I agree with the general view you expressed.

  8. Joshua says:

    small –

    > The economic impact of lockdowns are significant and I don’t think we have a model or scale for discussing the economic costs versus the value of lives lo[s]t.

    Well, have you ever heard me mention… counterfactuals?

    The whole issue of costs versus value of NPIs is incredibly complicated, as it necessarily links into the unknowable of what would have happened absent the NPIs. We don’t know, of course, and we could speculate or even model different counterfactual scenarios, but it does seem to me to involve walking into a cascade of high-uncertainty models with myriad assumptions and simplifications embedded.

    The situation is much like climate change, imo, and as such it seems to me that there’s a point where you just look at the whole thing as rather unmodelable – and thus you approach it more as a question of risk management where you have low probability but high damage risk.

    Of course, where to draw that line is complex and subject to bias – but as with climate change, a pet peeve of mine regarding COVID is when people seem to think you can model the costs versus values of NPIs without even considering the counterfactual aspect. Just like you can’t really model the costs versus values of aCO2 mitigation without considering potential negative externalities (like particulates), you can’t really model the costs versus values of NPIs without considering what would have happened absent the NPIs. It’s not enough to say children suffered from mandated school closures if you don’t consider how many schools might have closed, or for how long, if the spread of infections hadn’t been mitigated.

  9. I guess it is nice when so-called academic Moses’s give their sermons from their mounts. All very rational in a so-called very wishy-washy sort of way (Have you fully populated that demographic map?). Oh and e-e-e-e-excu-u-u-u-use me for I am the so-called deeply cynical misanthrope.

    I hope no one here will ever confuse a COVID-19 like diagnostic tracking model with any model that is much more fully deterministic, as we are dealing with individual homo sapiens here. Short of DNA ID’s and real time tracking, these sorts of models will always be a crap shoot with the usual denier suspects royally fudging up the recipe, on purpose mind you, because they do not want to be tracked or told what to do at any time or at any place. Freedom Fighters indeed (I member berry (see SP who for the four Small Hands years this was a constant theme relating to anything before circa 1963 or White people historically in the USA) when in the early 60’s that meant an entirely different thing hereabouts in the so-called UNITED States of America).
    Fancy that academics.

    I also seriously doubt that any so-called FF’s would ever read the BMJ or anything so-called academic in any way, shape or form. Because they really like Jordan Peterson so much or some such.

  10. Joshua,

    but as with climate change, a pet peeve of mine regarding COVID is when people seem to think you can model the costs versus values of NPIs without even considering the counterfactual aspect.

    Indeed, and there seems to be a lot of this going around. People highlighting how people need energy to improve their lives therefore fossil fuels are good, without acknowledging that continued use of fossil fuels will almost certainly also have many adverse effects. Similarly, those highlighting the negative impact of having closed schools (for example) without acknowledging that not having done so could have also had many negative impacts. Potentially, as you suggest, leading to school closures anyway.

    I’m perfectly happy to accept that the way we responded to covid may not – in retrospect – have been the ideal way to do so. However, it’s easy to say this in hindsight, but much harder to have known it in advance. Also, even if the response could have been different, I don’t think that the way we did respond was necessarily wrong, given what was known at the time. In fact, there are some compelling arguments suggesting that it might have been optimal to impose stricter interventions earlier than was done in many cases.

  11. ATTP,

    As usual, you are thinking way too rationally (and framing things from a mostly scientific basis). I’d like to think that those of us that still have at least half a brain would listen to such plain logical rational thought, however … 😦

  12. Joshua says:

    Anders –

    A paper I thought interesting – regarding counterfactuals.

    https://jech.bmj.com/content/75/11/1031

    > without acknowledging that continued use of fossil fuels will almost certainly also have many adverse effects.

    As far as I’m concerned, even assessing the “cost” of energy needs to be placed in context. It drives me nuts that people talk of how we can’t afford renewable energies because of the “cost” – when what they are doing is conflating cost and price. I mean sure, if there’s a calculation of the cost that seems too high, that would be important – but you can’t really assess the cost if you haven’t addressed the full range of impacts from continuing to rely so much on fossil fuels. Without doing so (admittedly a very difficult task) the notion of “cost” is pretty meaningless, imo.

    > I don’t think that the way we did respond was necessarily wrong,

    Further, I think wrong versus right isn’t a very useful frame, except if your purpose is advancing a political agenda, demonizing people, etc.

  13. Joshua,

    Further, I think wrong versus right isn’t a very useful frame, except if your purpose is advancing a political agenda, demonizing people, etc.

    Good point.

    As far as I’m concerned, even assessing the “cost” of energy needs to be placed in context. It drives me nuts that people talk of how we can’t afford renewable energies because of the “cost” – when what they are doing is conflating cost and price.

    Indeed. This reminds me of a comment I saw on Twitter that irritated me slightly. It happened to be a pro-nuclear person, but that isn’t all that relevant. They were arguing that we need energy to allow people to escape poverty, which seemed to miss the key point that if people don’t have the income to pay for energy, then it’s unlikely that anyone is going to invest in building the necessary infrastructure.

    Of course, I’m not suggesting that societies shouldn’t invest in energy infrastructure so as to help people escape from poverty, it’s that I don’t think we currently live in societies where this is regarded as a way to do so.

  14. Joshua says: “there’s a point where you just look at the whole thing as rather unmodelable – and thus you approach it more as a question of risk management where you have low probability but high damage risk.”

    I could not agree more. I think that is how we are currently doing things, and assessing the low probability/high damage is probably not that difficult. Talking about these events very much will get you tagged as a doomer or alarmist. The most popular path forward with regard to those risks is to stop talking about them, turn off the lights and begin whistling. Since I am retired, I was able to turn off the lights and whistle my way through Covid in the comfort of my home. It was not too bad. It is possible that climate change will turn out the same way. I don’t want to be a doomer or alarmist anymore. It’s not fun. It’s much better to relax and keep an eye out for good news. On climate change, here is the good news I find today:

    Fed programs funded at 3 billion dollars eventually for smart farms that will sequester carbon in the soil. That is a lot of money. Piles and piles of cash.
    https://www.usda.gov/media/press-releases/2022/09/14/biden-harris-administration-announces-historic-investment

    I love this one. An article from Tina Casey on regenerative farming: https://www.triplepundit.com/story/2022/usda-carbon-sequestration-farms/755021

    Methane reduction silliness: https://www.globalmethanepledge.org/ A bunch of countries are wasting their time thinking about reducing methane, which is a flow gas. As if a reduction of 0.3 degrees really matters. Bless their hearts!

    as for pandemics: I think it can’t hurt to study New Zealand because they have done very well with Covid and maybe won a rugby world cup at the same time:

    https://www.stuff.co.nz/opinion/129890626/we-can-win-a-rugby-world-cup-so-we-can-beat-the-next-pandemic

    Got to go out and pick more grapes. Another beautiful day in PNW. I notice that some of our iconic Western Red Cedars are suddenly browning off and starting to toss needles like crazy. I think they may be in transition to deciduous status. It’s great watching nature at work.

    Cheers
    Mike

  15. Joshua says:

    Willard –

    Except that he’s Canadian.

  16. Joshua says:

    Anders –

    > , it’s that I don’t think we currently live in societies where this is regarded as a way to do so.

    I think that exposes the weak underbelly in the “think of the poors” arguments so frequently seen in the “skept-o-sphere.”

    Not to say that “skeptics” don’t actually care about the poor, but surely there are more efficient ways to support people with no resources than to argue that a higher price of energy makes energy unavailable to them. If they have no resources then it won’t be available to them even if the supply is 100% fossil fuels.

    Same as it ever was.

    Sure – there’s a middle ground where people who have limited resources can get relatively more energy if the price is relatively lower – but at what cost will that lower price come? And surely there can be more efficient ways to subsidize the energy needs of those with limited resources than by attacking subsidies for renewables.

    What’s most frustrating to me is that a more equitable redistribution of resources could allow us to eliminate fossil fuels without harming the poor. It would just mean that the extremely wealthy would have to have less in the way of excess resources. But it’s often the same people who say “think of the poors” when we talk about renewables who find the very notion of redistribution completely untenable. What’s even worse is that those same people, so worried about the idea of sacrifice from the extremely wealthy, will then turn around and express contempt towards the “elites.”

  17. Joshua says:

    Mike –

    Here in the Hudson Valley it began looking like fall in early August – leaves turning brown and falling off because of an incredibly long stretch with no rain. I haven’t seen anything like it in the 8 years we’ve lived here.

    I haven’t looked to see how the drought stacked up historically, but it was amazing how viscerally disturbing it was to see the trees so “stressed.” I imagine it’s not a big deal for them, that they’ll bounce back – but it was nonetheless important to see how much we take for granted about the nature around us.

  18. You all so-called talking ’bout the poors is exactly the same thing as the deniers talking ’bout the poors. You, of course, don’t think so but …

    “Of course, I’m not suggesting that societies shouldn’t invest in energy infrastructure so as to help people escape from poverty, it’s that I don’t think we currently live in societies where this is regarded as a way to do so.”

    If you don’t change the system as it currently exists, maybe, just maybe, you all need to stop complaining about climate change.

    Because, if you don’t, then don’t expect others to do your bidding. This is my one and only recurring theme here. The same old same old, nothing ever changes.

    Do something besides profiting from $100+K (US) EV’s that are fully made from FF’s! The oldest marketing strategy of layering the 1st 1%, then the next lower 1%, ad infinitum, ad nauseam.

    I was and I still am a so-called poor (quite certain of that given past encounters with all of you not now poors), what is your excuse? [Snip. Chill or go play somewhere else, Everett. -W]

  19. Joshua says:

    Everett –

    > If you don’t change the system as it currently exists, maybe, just maybe, you all need to stop complaining about climate change.

    I don’t really understand what that means. I can’t observe that poor people exist and at the same time observe that we do a sub-optimal job of helping them irrespective of whether we use fossil fuels or renewables and at the same time observe that we’re taking on risk by pouring aCO2 into the atmosphere?

    Why not?

    Do I have to solve world hunger before I can observe that nuclear weapons proliferation represents an existential risk also?

  20. Joshua,

    It has become extremely hard for one such as myself to maintain my composure in the face of known facts also known as history. Not the normal sanitized history as taught to you in European or American textbooks but the actual nature of the beast also known as homo sapiens. Also known as standard misanthropic tropes.

    From where I sit, history has taught me that there are tops and that there are bottoms. In other words, the penetrators and the penetrated.

    We are in our current situation due almost entirely to the penetrators. Otherwise known as White males.

    So as I see it, and saying this as a White male, all I see are injustices perpetrated and perpetuated by said same upon the rest of the world.

    So the equation is really simple to right the historical wrongs caused by White males throughout most of the world (the only exception being the far east, now mostly called China). In this, there is a give and take, White male cultures must give to those less fortunate and abused by the multitude of past White male aggressions.

    So, if you really want to solve the so-called climate crisis, put up, or should I say give up, a very reasonable fraction that history has deemed as unfair takings and give those unfair takings back to those less fortunate. I think this is known as reparations or some such.

    But so far, Eurotrash has almost entirely refused to do so, being their usual sanctimonious selves, with the sheer audacity to continue to dictate to the rest of the world how they should behave via largely White male rules. While at the exact same time, talking the talk while mostly avoiding walking the walk.

    Long story short? Do not expect the rest of the world to fix what Eurotrash has created, they broke it and they need to fix it ASAP.

    AlGoreisFat, AlGoreneedstogoonadiet. South Africa, apartheid.

    So like a teeter-totter, a balance is required, some must go down so that others may rise up. History has taught us much, it is best to know that truth, and not to repeat past mistakes, but attempt to fix them instead. And in so doing, there will be balance among homo sapiens for the very 1st time. That is what I hope and pray for, that is my one true optimism.

  21. The U.S. and the Holocaust
    https://www.thebetterangelssociety.org/films/the-holocaust-and-the-united-states/

    Watch it and weep, American xenophobia circa WWII. Mirrors Texas azzmat Abbott and Small Hands today.

    Just another example of real history.

  22. Ben McMillan says:

    Along the lines of the OP, I think that articles like the BMJ one should really tackle the problem of bad-faith misrepresentation (e.g. conflating projections with predictions) head on. Clarity of science communication is necessary, but it isn’t enough, and there is a risk of misidentifying the communication problem and wasting your effort.
    Traditionally, the standard of ‘polite discourse’ in journals is to pretend, for the sake of argument, that the people who are part of the discussion are working in good faith (even if they have different goals), and that is only useful up to a certain point.

    Also, the problems about long-term prediction are worth noting. At some point, you are using interventions to try to steer the epidemic in the face of oncoming events, which means that ‘prediction’ becomes a much less relevant concept, and projections are only useful in the short term. You’d just like to know how well the steering works, and which lane would be best.

  23. Dave_Geologist says:

    Paul, game theory is a mathematical model. Invented by a mathematician and developed by other mathematicians.

    And the inner workings of the full Imperial College model are pure game theory (you need to read the original flu paper to appreciate that). Some of the quick-look scenario calculations were systems of coupled differential equations, but they were calibrated to a game-theoretic original.

    The issue is whether the agents in the game are properly represented, and as ATTP touched on, “the intrinsic inability of most models to capture important facets of human behaviour”, one aspect of that behaviour being whether and how those agents change their behaviour throughout the game.

    Sweden’s first wave is an instructive example. By the peak, Swedes had locked themselves down almost as much by personal choice as the other Nordics had been locked down by law. A model which said “agents will obey the law and go no further” would have got that Swedish response badly wrong. A model which said “agents will respond to rising deaths and hospitalisations by voluntarily restricting their interactions” could have done rather well.

    Incidentally it’s a myth that Sweden enforced nothing. See Table 3. The main differences were no primary (junior) school, hospitality or shop closures, and no shelter-in-place lockdown. Nevertheless, journeys into Stockholm fell by three-quarters, journeys to work by a quarter, and park use more than doubled. Figure 1 shows that in practice Swedes doubled the stringency of their response during the month subsequent to the introduction of compulsory measures in mid-March. A model which said “agents will obey the law and go no further” would have successfully represented the other three countries, where you had a sharp rise to a plateau. A model which successfully explains all four would have to say “agents will obey the law, and will also respond to rising deaths and hospitalisations by voluntarily restricting their interactions”. If your calibration dataset was only the other three, you’d have had to guess at that second part. Sweden shows that it was needed*.
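    A toy sketch of that distinction (every number here is invented for illustration, and this is not any group’s actual model): an SIR-type model in which the transmission rate falls as cumulative deaths mount. Setting the feedback strength to zero recovers a model whose agents “obey the law and go no further”.

```python
import math

def sir_with_feedback(k, beta0=0.4, gamma=0.1, ifr=0.01, days=300, dt=0.1):
    """SIR model where agents voluntarily cut contacts as deaths rise.

    k is the feedback strength (response to deaths per 100k);
    k = 0 gives agents who comply with the rules but go no further.
    """
    s, i, r, deaths = 0.999, 0.001, 0.0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        # Transmission falls exponentially with cumulative deaths per 100k.
        beta = beta0 * math.exp(-k * deaths * 1e5)
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        deaths += ifr * new_rec
        peak = max(peak, i)
    return peak

no_feedback = sir_with_feedback(k=0.0)   # "obey the law and go no further"
voluntary = sir_with_feedback(k=0.01)    # behaviour responds to rising deaths
```

    The voluntary-response run peaks well below the no-feedback run, which is the qualitative Swedish pattern described above: a model calibrated only on compulsory measures would miss it.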

    That’s really no different in principle from a cheater-cooperator model in animals, where agents respond to the observation that another agent cheated by changing their response to that agent in their next encounter. Or one of those fish species where the largest female turns male when the harem-holding male dies.

    The hard part is not coding the maths, the hard part is correctly predicting those changing agent responses in a situation which was unprecedented in living memory.

    * I have seen commentary that it worked in Sweden because they trust their governments and will follow advice not enforced by law, and have a sense of the common good which makes them willing to voluntarily sacrifice aspects of their life to help others, but that it might not work in a less trusting or more individualistic country. So the counterfactual “what if the UK or USA had done like Sweden did” might have had a very different outcome.

  24. Dave_Geologist says:

    Ben: “Events, dear boy, events”.

    Barnard Castle eyesight test.

    The graphs in the supplementary material show a permanent drop in confidence in government after the story broke, in England but not in Scotland or Wales (for those outside the UK, the big stuff like international travel bans and furlough schemes were the UK Government acting UK-wide, but the timing and implementation of lockdowns and other NPIs were devolved, so on those the UK Government was speaking for England only).

    It probably didn’t help that confidence had already been hit by Boris Johnson’s botched announcement of the lifting of lockdown in England. IIRC that was more an avoidable own goal than an unforeseen event. Down to his penchant for leaking proposals to friendly journalists to “fly kites”, so that by the time of the actual news conference expectations had not been fulfilled and it looked like no-one was in charge or could make up their mind.

    Good luck incorporating that sort of thing in your model, not to mention predicting the response of the public in practice, not just in response to opinion polls where their actions are not potentially putting their own lives on the line. It was widely said at the time that the public would not stand for another lockdown because of it, but they sorta kinda did (the Christmas semi-lockdown).

  25. Ben McMillan says:

    I think in this analogy the external events (eyesight tests) are like wind buffeting the car, and you can only respond with steering (NPIs) after the fact. “How much do NPIs impact R?” is still something the model can tell you, even if they don’t contain a full model of the internal dynamics of the Tories.

  26. at Joshua: In the case of the Western Red Cedars, there is regional concern with dieback.
    https://www.opb.org/article/2022/09/06/western-redcedar-trees-are-struggling/

    I hope what I am observing this year is seasonal dieback. We have had a bit of rain and could use more.

    https://storymaps.arcgis.com/stories/1405dab5f59246aa83849ec43f72b15a

    At D the G and some others: I found your discussion of game theory in pandemic response, as it relates to variations in nations’ cultures, to be pretty interesting and informative. Thanks for that.

  27. Dave said:

    ” game theory is a mathematical model.”

    Yes, it’s an intractable mathematical model. We were warned to stay away from that stuff and that’s why we do physics instead of economics or wondering whether some redneck is willing to get vaccinated.

  28. Dave_Geologist says:

    It’s not intractable Paul. Just difficult, because people are not ants. By most people’s definition an intractable mathematical model is mathematically intractable. Your rednecks are psychologically or economically intractable because they’re unpredictable and sometimes act against their own best interests (although actually, IMHO, their responses to stimuli are usually pretty predictable). Not mathematically intractable. You can make agents act against their own self-interest and see what happens. You can give them imperfect information. Etc.

    And once events have happened (first wave) you can calibrate much better for the next waves than you can when you’re extrapolating the response to a novel threat (Covid) from the response to a known threat (seasonal flu in a poor-vaccine-match year).

    And a lot of stuff that was genuinely intractable in von Neumann’s day, or even Nash’s unless you were NASA, where there is no closed-form solution to solve with pencil and paper, is tractable now by Monte Carlo simulation.

    Going back to your OP, it’s not a failure to account for game theory that causes that. It’s humans being unpredictable. It’s not knowing exactly what parameters to plug into that game theory you’re already accounting for because you’re using it.

    Of course you can still build a range of responses into your agents in a MC simulation and report ranges – for example will granny dying of Covid change an NPI-sceptic’s behaviour? Make some agents comply with NPIs and some not. Once the death rate is established, let a random subset of the non-compliant ones lose their granny, and let 1%, 5% and 50% of them become compliant in response. See how sensitive it is to that range. If most people are compliant, maybe the death rate is so low it doesn’t matter whether that’s 0% or 100%, because herd behaviour protects their grannies too. But maybe that changes in the next wave when death rates are high.
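    Dave’s recipe above can be mocked up in a few lines. This is a toy illustration only: the population size, baseline risk, NPI protection factor and “granny” conversion rates are all invented numbers, and the behavioural feedback is reduced to a single conversion probability scaled by the wave’s death rate.

```python
import random

def simulate_wave(n_agents=10_000, base_risk=0.05, npi_protection=0.8,
                  compliant_frac=0.7, conversion_rate=0.05, seed=0):
    """One pandemic 'wave': compliant agents cut their own risk, overall
    compliance lowers everyone's risk (herd effect), and after the wave
    some non-compliant agents convert in proportion to the death rate."""
    rng = random.Random(seed)
    compliant = [rng.random() < compliant_frac for _ in range(n_agents)]
    # Herd effect: everyone's risk falls as overall compliance rises.
    herd_factor = 1.0 - npi_protection * (sum(compliant) / n_agents)
    deaths = 0
    for is_compliant in compliant:
        risk = base_risk * herd_factor
        if is_compliant:
            risk *= (1.0 - npi_protection)
        if rng.random() < risk:
            deaths += 1
    death_rate = deaths / n_agents
    # Behavioural feedback: losing 'granny' converts some sceptics.
    for i in range(n_agents):
        if not compliant[i] and rng.random() < conversion_rate * death_rate * 100:
            compliant[i] = True
    return death_rate, sum(compliant) / n_agents

for conv in (0.01, 0.05, 0.5):
    dr, comp = simulate_wave(conversion_rate=conv)
    print(f"conversion={conv:.2f}: death rate={dr:.4f}, "
          f"compliance after wave={comp:.3f}")
```

    Sweeping the conversion rate, as suggested, then shows how sensitive the next wave’s starting compliance is to that one behavioural assumption.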

  29. Dave,

    Actually I would listen to whatever the experts had to say, with, but especially without, mathematical models.

    It is called best practices and we have known about them for a very long time now. The school of hard knocks, aka historical realities, can’t suddenly be superseded by some unknown mathematical model outcomes.

    The real question is: did so-called mathematical models change, in any real way, shape or form, the recommendations (RE: Fauci in the USA, as the UK is sort of conflicted on these matters IMHO) wrt COVID-19? Now that would be an interesting and worthy peer-reviewed read.

  30. Pingback: Tra fantasia e fisica – ocasapiens

  31. Dave_Geologist says:

    smallbluemike, I remember an anecdote from one of the early British footballers who went to play in Italy (maybe Denis Law, because he considered himself a Jack-the-lad).

    He was surprised at the rigour with which they were treated before games: overnight in a hotel even for home games, eat exactly the food the club specified, no alcohol, early night, and no wives or girlfriends.

    He said to an Italian team-mate that he was surprised it was so much tighter than in England: he thought Italians were all relaxed rule-breakers who enjoyed a good time.

    He was told it had to be strict because Italians are all relaxed rule-breakers who enjoy a good time.

  32. Dave_Geologist says:

    Per the discussion with Paul, Everett, the footnotes to the March Imperial College report which led to the first UK lockdown did change policy (setting aside the disputed question of whether there was or was not a plan to do nothing except expand the NHS, and go for herd immunity).

    But because the input parameters had changed a few days earlier, not the algorithm.

    Twice as many hospitalised people in Northern Italy needed intubation as in Wuhan, which we can now attribute to demographics (proportion of elderly and of people kept alive despite chronic health conditions). And the NHS realised that it could only staff half the number of ICU beds they’d told the modellers they could, even assuming they could build them out in time. From memory, there was a third but smaller factor, I think an upward tweaking of R0 to account for European social habits, family make-up, housing, travel and work practices etc., compared to those in a purely urban Chinese megacity. Again based on Italy, which is also not ideal because Nonna is more likely to be living with the extended family there than in the UK.

    And yes, listen to experts and don’t rely only on mathematical models. But that was the point of SAGE: most of their experts were not mathematical modellers, and they had to reach a consensus that said, yes, it is now the time when we have to introduce NPIs or face mass mortality.
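    The point about revised inputs, not a revised algorithm, flipping the conclusion can be illustrated with a minimal deterministic SIR model. This is emphatically not the Imperial College model; every number below (R0 values, ICU fractions, bed counts) is invented purely to show how the same algorithm gives a different answer when the parameters change.

```python
def sir_peak_icu(r0, icu_frac, n=67_000_000, infectious_days=5.0,
                 i0=1_000, days=365, dt=0.25):
    """Minimal deterministic SIR via Euler stepping. Returns peak
    simultaneous ICU demand, given R0 and the fraction of infections
    needing intensive care."""
    beta = r0 / infectious_days
    gamma = 1.0 / infectious_days
    s, i = n - i0, i0
    peak_i = i
    t = 0.0
    while t < days:
        new_inf = beta * s * i / n * dt
        rec = gamma * i * dt
        s -= new_inf
        i += new_inf - rec
        peak_i = max(peak_i, i)
        t += dt
    return peak_i * icu_frac

# Same algorithm, two sets of (invented) inputs: revised R0, a doubled
# ICU fraction, and a halved staffed-bed capacity.
for label, r0, icu_frac, capacity in [("initial inputs", 2.4, 0.001, 20_000),
                                      ("revised inputs", 3.0, 0.002, 10_000)]:
    demand = sir_peak_icu(r0, icu_frac)
    print(f"{label}: peak ICU demand ~ {demand:,.0f} "
          f"({'exceeds' if demand > capacity else 'within'} capacity)")
```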

  33. dikranmarsupial says:

    EFS “Actually I would listen to whatever the experts had to say, with, but especially without, mathematical models.”

    The trouble there is identifying the experts you want to listen to. At WUWT Murry Salby and Herman Harde are experts on the carbon cycle and Monckton on statistical trend analysis.

    The advantage of mathematical models is that they can help you identify when someone is not an expert, because they have set out their position explicitly where it can be scrutinized and found wanting.

    The general public isn’t quite as bad as WUWT, but there is a spectrum of gullibility.

  34. DM,

    I would hope you and others here would know implicitly what I meant by so-called experts, as in academia and government, hired for their expertise and not their politics. Small Hands and his ilk have never been so-called experts, the only things that they are are liars, cheats and charlatans. But you know, let us overthrow the government, because fake experts.

  35. DtG said:

    “It’s not intractable”

    Of course it is. In layman’s terms, second-guessing outcomes makes it impossible to reach a stable prediction. Economists have intuited this for a while and have names for it, such as the Lucas critique, Campbell’s law, Goodhart’s law, etc.

    “computing the Nash equilibrium for a three-person game is computationally intractable. That means that, for any but the simplest of games, all the computers in the world couldn’t calculate its Nash equilibrium in the lifetime of the universe”

  36. Willard says:

    > computationally intractable

    NASH is hard, but not NP-complete, tho:

    We show that finding a Nash equilibrium is complete for a class of problems called PPAD, containing several other known hard problems; all problems in PPAD share the same style of proof that every instance has a solution.

    Source: https://people.csail.mit.edu/costis/simplified.pdf
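    Worth noting that PPAD-hardness is a statement about worst-case general games; tiny instances stay easy. For example, fictitious play (each player best-responding to the opponent’s empirical frequencies) recovers the 50/50 mixed equilibrium of matching pennies (a toy illustration, not anything from the paper):

```python
# Fictitious play on matching pennies. Actions: 0 = Heads, 1 = Tails.
# The row player wants to match, the column player wants to mismatch.
# For zero-sum games the empirical frequencies converge to the mixed
# Nash equilibrium, here 50/50.
counts = {"row": [1, 0], "col": [0, 1]}  # arbitrary initial beliefs

def best_response_row(col_counts):
    h, t = col_counts
    return 0 if h >= t else 1   # match the more frequent column action

def best_response_col(row_counts):
    h, t = row_counts
    return 1 if h >= t else 0   # mismatch the more frequent row action

for _ in range(100_000):
    r = best_response_row(counts["col"])
    c = best_response_col(counts["row"])
    counts["row"][r] += 1
    counts["col"][c] += 1

freq = counts["row"][0] / sum(counts["row"])
print(f"row plays Heads with empirical frequency {freq:.3f}")
```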

    ***

    John Cochrane (a libertarian, but at least a rational one) came up with a suggestion that may be of relevance here: economists make conditional predictions, not predictions. (I owe this to the Rational Reminder podcast.) Searching around I found in his lecture notes an example related to climate:

    The unconditionally expected temperature tomorrow in Chicago is about 60F, the overall average. If it’s July, or if you know that today’s temperature is 90 degrees, the conditionally expected temperature tomorrow is high, maybe 85 degrees. The actual, or ex-post temperature tomorrow will vary beyond this expectation.

    The question for us is whether stock returns are a bit like this; whether there are times, measured by the variable X, when the coin is 51/49 and other times when it’s 49/51. We’re asking if there are “seasons” in stock returns, not whether anyone knows exactly what the return (temperature) will be tomorrow.

    Source: Probably an unstable repository.

    The mention of Chicago is not fortuitous. It may be a Very Freshwater take. So take it with a grain of salt.
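    Cochrane’s conditional-versus-unconditional distinction is easy to demonstrate numerically. Below, a synthetic “Chicago temperature” series (an invented seasonal cycle plus noise, not real data) has an unconditional mean near 60F, while conditioning on July shifts the expectation far higher:

```python
import math, random

rng = random.Random(42)
# Synthetic daily temperatures: 60F mean, 25F seasonal swing, noise.
days = range(3650)
temps = [60 + 25 * math.sin(2 * math.pi * (d % 365) / 365 - math.pi / 2)
         + rng.gauss(0, 8) for d in days]

unconditional = sum(temps) / len(temps)

# Conditional expectation: average only over July (days 181-211 of each year).
july = [t for d, t in zip(days, temps) if 181 <= d % 365 <= 211]
conditional = sum(july) / len(july)

print(f"unconditional mean: {unconditional:.1f}F")
print(f"conditional on July: {conditional:.1f}F")
```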

  37. russellseitz says:

    The rise of the Freshwater School aptly coincided with that of the Great Lakes, which reached Peak Gitchee Gumi in 1983. It’s probably just as well that the Great Salt Lake never spawned a Mitt Romney School of Macroeconomics:

    https://vvattsupwiththat.blogspot.com/2022/07/truly-state-of-art-drought.html

    And there I was, wondering what happened to Nic Lewis after they were so completely wrong, very objectively mind you, on COVID-19!

    Well they are back with, wait for it …
    OBJECTIVELY combining climate sensitivity evidence
    https://link.springer.com/article/10.1007/s00382-022-06468-x

    I await a more fully impartial response from Sherwood et al. (2020) (are you reading this, Dr. James Annan?).

    No linkies to either the GWPF or WTFUWT? Sorry about that one, but I absolutely hate fake experts and their fake expert websites.

  39. Joshua says:

    The activist Nic Lewis has failed to demonstrate objectivity in the past.

    Thought I’d offer this podcast (there’s a transcript) in case anyone’s in the mood for optimism.

    I think the oppositional forces make this outlook too optimistic – but it can’t hurt to dream.

  40. Joshua says:

    Transcript: Ezra Klein Interviews Jesse Jenkins https://nyti.ms/3LrFW61

  41. I haven’t really had a chance to fully understand Nic’s paper, but it seems that he has re-evaluated climate sensitivity for the LGM, the PWP, and the PETM, and got much lower estimates than other studies have determined. He’s then combined them to get a much lower estimate than Sherwood et al. 2020.

    It’s really quite an amazing bit of work. The main question I have is, how did so many other researchers get things so wrong? Of course, the answer may well be “they didn’t”.

  42. Dave_Geologist says:

    Paul, why would I want to compute a Nash equilibrium?

    Setting aside questions of whether proving that one instance is mathematically intractable proves that all instances are intractable in all circumstances for all purposes. (I can’t be bothered checking whether that is closed-form intractable, whether the outcome is chaotic and multi-valued as in the logistic parabola, or whether it circulates around a stable attractor as in the Lorenz butterfly – it doesn’t matter, and Google can’t find your quote anyway.) George Box’s maxim applies here, as always.

    We’ve known for centuries that the orbital three-body problem is mathematically intractable in that sense. Didn’t stop NASA slingshotting probes around the solar system using the surprisingly predictable locations of those unpredictable planets. Or forward-modelling orbits tens of millions of years and predicting stability. Or identifying resonances which reduce the degrees of freedom (although there is an interesting recent paper which suggests that a geologically recent break-up of a small moon de-resonated Neptune and Saturn and formed the rings).

    If there is a bifurcation or three, you can still model and prepare for all three. And I bet you’ll often find that they’re surprisingly close together in the total parameter space. If there’s a stable attractor, it’s extremely useful to know that the permissible parameter space is close to that attractor and that something as mild as strep throat or as deadly as Ebola is unlikely. You don’t even need to know that it’s not impossible, because you treat the very unlikely cases like Ben’s analogy of a car in a gust of wind: cope with it in the unlikely event that it comes along. And of course, once you’ve had the first wave you can calibrate.
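    The “chaotic and multi-valued” versus “stable attractor” distinction can be seen directly in the logistic map x → r·x·(1−x): depending on r, long-run iterates settle onto a fixed point, a periodic cycle, or a chaotic band. A minimal probe (the r values are chosen purely for illustration):

```python
def logistic_attractor(r, x0=0.2, burn=1000, keep=64):
    """Iterate x -> r*x*(1-x), discard transients, and return the set
    of values visited afterwards (rounded, to spot periodic cycles)."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

for r in (2.8, 3.2, 3.5, 3.9):
    attractor = logistic_attractor(r)
    kind = "chaotic" if len(attractor) > 16 else f"period-{len(attractor)}"
    print(f"r={r}: {kind}, sample values {attractor[:4]}")
```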

    I’ve linked previously to a paper which shows why you should not waste time on finding the Nash Equilibrium in pandemics, especially early on (the context being the oft-repeated claim that diseases always get milder as variants which leave carriers alive longer spread better). Too many moving targets. The disease is evolving, the host is evolving, the environment is changing, etc. Naturally you can put that evolution into a MC model, stochastically as genetic drift or directed by selection. Or let the model drive selection.

    The Nash Equilibrium might be good for explaining diseases which established themselves back when we had stone- or bronze-tooled agrarian societies which didn’t materially change for millennia. It’s no more useful than a random guess for situations where we’re intervening repeatedly with modern technology. Back to George Box. I don’t care what Covid-19 will be like in 5,000 years time, and neither should you.

    Pity the poor Australian rabbits: the variants which are winning out decades after myxomatosis was introduced are both more transmissible and more deadly. The transmissibility advantage is big enough to overcome the host cull.

    When serious people face serious matters they have four choices:

    (a) Give up and say it’s all too hard.

    (b) Make a wild guess.

    (c) Do what best suits your political or religious beliefs.

    (d) Do the best you can with the tools and data available, with the aim of doing the greatest good for the greatest number.

    In practice almost nobody chooses (a), although a lot of the (b)s and (c)s use (a) as a pretext for choosing (b) or (c).

    Me, I’m just glad that most of the world chose some version of (d).

  43. russellseitz says:

    “he has re-evaluated climate sensitivity for the LGM, the PWP, and the PETM, and got much lower estimates than other studies have determined.”

    Has Nic explained why the Snowball Earth episode count has not changed to fit his theory?

    What ever will Paul Hoffman and Dan Schrag say?

  44. Agree that a set of projections is more in vogue than a single prediction. This is equivalent to nested if-statements that cover a large cross-section of outcomes. The intractable game theory equivalent is that these if-statements cover every possibility and double back including counter-intuitive human decisions. But then the set of projections is no better than random.

    DtG is implying that the same thing can happen in physics — with for example chaotic orbits. But consider those chaotic orbits are a natural response, yet much of physics involves forced responses that override the natural response and thus make the behavior predictable. Forcing inanimate physical objects is different than forcing humans to do as they are instructed, which again explains why I work on the challenging geophysics models instead of playing the stock market.

    “(a) Give up and say it’s all too hard.”

    That’s climatology according to Lorenz. Half the scientists think that natural climate change is chaotic while the other half think it’s random. I choose not to give up that easily and instead do the next best thing to a “wild guess” and make educated guesses — applying an ansatz as a premise — when solving problems.

  45. ATTP,

    JA has a new paper on the LGM which he discusses here …
    BlueSkiesResearch.org.uk: EGU 2022 – how cold was the LGM (again)?
    http://julesandjames.blogspot.com/2022/05/blueskiesresearchorguk-egu-2022-how.html

    And is now available in its final form here …

    Click to access cp-18-1883-2022.pdf

    https://cp.copernicus.org/articles/18/1883/2022/cp-18-1883-2022-discussion.html

    I think that Lewis gets a much thinner posterior PDF than others (given its thinner nature and known zero lower bound, this then leads to much lower estimates IMHO), like what would appear to be the case with JA’s new paper versus the 2020 Jessica Tierney paper …

    “Our new headline result is -4.5±1.7C” Current JA paper
    “coming up with -6.1±0.4C” Tierney paper
    “previous estimate of -4.0±0.8C (both ranges at 95% probability)” JA 2013 paper
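    To see mechanically why a thin PDF dominates, treat the two most recent quoted estimates as independent Gaussians (they are not independent; this is purely illustrative) and combine them by inverse-variance weighting: the ±0.4 estimate pulls the combination almost entirely onto itself.

```python
def combine_gaussian(estimates):
    """Precision-weighted (inverse-variance) combination of independent
    Gaussian estimates, given as (mean, sigma) pairs."""
    weights = [1.0 / s ** 2 for _, s in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# LGM cooling estimates quoted above; 95% half-widths converted to
# 1-sigma by dividing by 1.96.
ja_new = (-4.5, 1.7 / 1.96)
tierney = (-6.1, 0.4 / 1.96)

mean, sigma = combine_gaussian([ja_new, tierney])
print(f"combined: {mean:.2f} +/- {1.96 * sigma:.2f} C (95%)")
```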

    Most of this stuff is way above my pay grade so to speak, thus the need to listen to other climate science experts. Your own thoughts are indeed most welcome.

    Any updates from others here are also most welcome, meaning other posts/links from real experts or your own technical expertise on these types of analyses (I am only an old skool frequentist with an ever more limited skill set as I age (almost 69) further into my gone emeritus years). I will skip those posts leading to fake experts, thank you very much.

    Figure from AR5 (h/t Peter Gruenwald) (“PDFs for eight of the observationally-based ECS estimates featured in Figure 10.20b of …”) …

    I do wonder what the (or a) similar figure would be from AR6?

    It is more than simply misleading to frame the IPCC AR6 WG1 ECS estimate as coming simply from one paper (Sherwood (2020)), as Lewis and the GWPF’s bogus PR statements would indicate. In fact I would call it an outright LIE! …

    Click to access IPCC_AR6_WGI_Chapter07.pdf

    See Figure 7.18 for example (currently unable to find that figure in graphical form online at the IPCC (or other) website(s)) …

    I am going to need the figures link and the Chapter 7 links (the SOM or whatever for that chapter, and any annexes including source data; I can see the link to source data, but I think that, before going there, the written and existing visual records are most important).

  48. OK!!!

    https://www.ipcc.ch/report/ar6/wg1/resources/data-access
    Everything from IPCC AR6 WG1 is located at the above link. However, figures are zipped afaik.

    I am really hard pressed to believe that Sherwood (2020) or some IPCC AR6 WG1 Chapter 7 author(s) was(were) not aware of this paper (as a reviewer or in some form of draft/preprint).

    It is rather obvious how this would play out (and has, to date) given essentially no immediate response. And so it is. A PR win for climate denial. The damage has been done, so that no response at any time will remove this denier PR win from the history books.

    I am deeply saddened. I sort of gave up on this climate stuff because I now have other interests. How can one become even more deeply cynical and misanthropic?

    You all simply do not get it IMHO.

  49. russellseitz says:

    “Agree that a set of projections is more in vogue than a single prediction. This is equivalent to nested if-statements that cover a large cross-section of outcomes. The intractable game theory equivalent is that these if-statements cover every possibility and double back including counter-intuitive human decisions. But then the set of projections is no better than random.”

    One of the general rules of ClimateBall is that the Precautionary Principle is not immune to snowballing when iteration gets out of hand in models, or falls into the grasp of playbook writers.

    Well only time will tell, but as of right now only denier sites are mentioning the NL paper. Nothing seems to have broken through to the MSM … yet. So for now optimism is in order. I also expect that whatever does break through to the MSM will be balanced (Fixed Noise et al. are not the MSM imho) as opposed to slash-and-burn denier PR statements.

  51. Bayesian deconstruction of climate sensitivity estimates using simple models: implicit priors and the confusion of the inverse
    James D. Annan and Julia C. Hargreaves (21 Apr 2020)
    https://esd.copernicus.org/articles/11/347/2020/

    “There is also a strand of Bayesianism which asserts more broadly that in any given experimental context there is a single preferred prior, typically one which maximises the influence of the likelihood in some well-defined manner. The Jeffreys prior is one common approach within this “objective Bayesian” framework. However, it has the disadvantage that it assigns zero probability to events that the observations are uninformative about. This “see no evil” approach does have mathematical benefits but it is hard to accept as a robust method if the results of the analysis are intended to be of practical use. In the real world, our inability to (currently) observe something cannot rationally be considered sufficient reason to rule it out. We do not consider objective Bayesian approaches further.”

    🙂

    J&J have many lucid discussions on Lewis and so-called objective priors, Bayes statistics in general, but way too many previous posts to list here.

  52. Willard says:

    So we have 1.64 K in 2014-09, 1.76 K in 2018-08, and 2.1 K in 2022.

    I therefore predict that the Lowest Bound of Justified Disingenuousness ™ should reach 3.07 by 2038.
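    For the record, a least-squares line through those three points does land close to Willard’s figure (the exact value depends on how the months are converted to decimal years; the dates below are my guesses):

```python
# Willard's three data points: (decimal year, Lewis lower-bound ECS in K).
# The month-to-decimal conversions are my assumptions from "2014-09" etc.
points = [(2014.75, 1.64), (2018.58, 1.76), (2022.0, 2.1)]

n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
slope = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))

projected_2038 = my + slope * (2038 - mx)
print(f"slope: {slope:.4f} K/yr, projected 2038 bound: {projected_2038:.2f} K")
```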

  53. Willard,
    Yes, it does seem to have increased with time.

    If I understand James and Jules’s paper that EFS highlights, it seems to be more indicating how the choices of priors can influence the results, rather than presenting an actual estimate of the ECS. Seems to me that what Nic’s paper has done is rigidly applied the Bayesian method that he thinks is the correct one, produced estimates for the ECS that tend to be lower than others, and then combined them to get a result that suggests the ECS is lower than presented in Sherwood et al. It’s convenient, but not obviously correct.

  54. ATTP,

    It all comes down to using a Jeffreys prior throughout all (or most) of the Lewis papers. I think that there is agreement that a uniform prior has major issues, due to where it is cut off at the high end and the fact that it is flat to begin with.

    Simply put, we do have some ideas of where ECS is (~1.2C, I believe, for a no-feedback situation) and some ideas of the posterior PDF form from, of all things, climate models, including the CMIP6 models, regardless of their purported high estimates.

    We also have very good ideas of what the Lewis Jeffreys priors look like from that most recent paper. And, IMHO, those do not look like anything one would use in practice, given our a priori knowledge of ECS …
    https://media.springernature.com/full/springer-static/image/art%3A10.1007%2Fs00382-022-06468-x/MediaObjects/382_2022_6468_Fig3_HTML.png?as=webp
    (a) above looks nothing like a complete PDF of any prior that I would use, especially since all the posteriors (even all of Lewis’s) appear to be more or less positively bounded and look like positively skewed quasi-normal distributions.
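    The generic point here, how much the prior choice matters when the likelihood is broad, can be shown with a toy grid-Bayes calculation. Everything below is invented: a fake Gaussian “observation” of the feedback parameter, and three candidate priors on S. It reproduces the qualitative behaviour under discussion, not any published numbers.

```python
import math

# Toy grid-Bayes inference of sensitivity S from a fake observation of
# the feedback parameter lam = F2X / S.
F2X = 3.7                     # W/m^2 per CO2 doubling
LAM_OBS, LAM_SD = 1.2, 0.4    # hypothetical observation of lam

grid = [0.1 * k for k in range(5, 101)]   # S from 0.5 to 10.0 K

def likelihood(s):
    lam = F2X / s
    return math.exp(-0.5 * ((lam - LAM_OBS) / LAM_SD) ** 2)

priors = {
    "uniform in S": lambda s: 1.0,
    "uniform in lam (~1/S^2, Jeffreys-like)": lambda s: 1.0 / s ** 2,
    "expert (lognormal around 3 K)": lambda s: math.exp(
        -0.5 * ((math.log(s) - math.log(3.0)) / 0.5) ** 2) / s,
}

means = {}
for name, prior in priors.items():
    post = [prior(s) * likelihood(s) for s in grid]
    means[name] = sum(s * p for s, p in zip(grid, post)) / sum(post)
    print(f"{name}: posterior mean S ~ {means[name]:.2f} K")
```

    The same likelihood gives clearly different posterior means under the three priors, with the 1/S² choice pulling the answer low.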

  55. EFS,
    Yes, I did wonder about that and we did discuss this with Nic a number of years ago. There was also a brief discussion on Andrew Gelman’s blog where he pointed out that there really isn’t such a thing as a truly objective prior. So, Nic’s insistence that his prior is somehow the correct one is really just a judgement on his part.

    Here’s Andrew Gelman’s blog post

    https://statmodeling.stat.columbia.edu/2015/12/07/use-of-jeffreys-prior-in-estimating-climate-sensitivity/

    Here’s my post where it was discussed in more detail.

    Bayesian estimates of climate sensitivity

    Also, the PETM was 55 million years ago, so regardless of CO2 doubling relationships, the World was in a completely different configuration (plate tectonics, formation of the Atlantic Ocean, etcetera) and geological processes back then would also most likely have been very different. That makes the CO2 doubling relationship difficult to use for anything beyond first principles imho.

    It is interesting to look at, but imho, there are still too many unknowns.

  57. ATTP,

    Sort of really sorry for derailing this thread.

    And you are most correct as you have indeed discussed this before …

    Nic Lewis’s prior beliefs
    https://andthentheresphysics.wordpress.com/2014/07/31/nic-lewiss-prior-beliefs/

  58. dikranmarsupial says:

    FWIW I don’t think there is a game theory aspect to projections (or epidemiological models) because the scientists are not players in the game, they are just providing information for those that are. The scientists are not trying (at least while they have their lab coats on) to get the politicians to take some particular action or to meet some goal, just provide the information that the politicians require in order to take the actions required to give the best chance of an outcome that matches their values. Of course there is a game theory aspect here, but it isn’t in the science, it is in the politics and economics which depend on the behaviour of people, rather than climate physics (or biology) which doesn’t. This is why politics and economics are more difficult than physics and biology.

    The best way to do so is to provide a bunch of conditional predictions, known as “projections” that the politicians can use to determine the likely climate response of various courses of actions. There is nothing else the scientists can do (unless you have been reading too many WUWT articles).

    Of course the scientists are citizens, just like everybody else, and when they take their lab coats off they have a right to argue for policy just like everybody else. The difficult part is communicating clearly whether the coat is on or off.

  59. dikranmarsupial says:

    ATTP – “there really isn’t such a thing as a truly objective prior.” I think it is more that “objective” in this context is a term of art and doesn’t mean what the lay-reader is likely to think it means (a bit like Granger “causality” isn’t what we normally mean by “causality”). This is often a feature rather than a bug.

  60. dikranmarsupial says:

    Reminds me of Chesterton’s Fence

    In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

    In other words, if you want to get rid of something, you ought to understand why it was there in the first place, to make sure you don’t throw away something that was serving a useful purpose of which you were ignorant (which would be hubris).

  61. Joshua says:

    Anyone know what happened to Tom Curtis? Is he still kicking around at Skeptical Science?

  62. Dave_Geologist says:

    EFS, the other issue with the PETM is that there remains considerable uncertainty (disagreement) about the actual CO2 level, and about the contribution (if any) of massive CH4 release. Even though that CH4 will oxidise to CO2, it could kick the planet over a tipping point before it has time to oxidise. For example, if you combine degassing of petroleum source rocks due to North Atlantic Igneous Province intrusions with thawing of Antarctic permafrost, degassing CH4, even if only for a few centuries, could tip the Antarctic over the edge, whereas the same amount of carbon, oxidised in the subsurface to CO2 before degassing, might not.

    This recent paper finds a higher CO2 level, which brings the climate sensitivity more into line. Multiple Proxy Estimates of Atmospheric CO2 From an Early Paleocene Rainforest.
    Here we compare four different CO2 proxy methods using plant fossils from an exceptionally diverse rainforest that existed near present-day Denver, Colorado, 63.8 million years ago. Estimates are largely congruent and higher than previously thought (~600 ppm). The higher CO2 levels during this warm period are in better agreement with the current understanding of long-term Earth system climate sensitivity, and results from the newer gas-exchange proxy methods paint a coherent picture of Earth system sensitivity evolution over the Cenozoic.
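    The direction of that result is easy to check with the standard simplified CO2 forcing formula F = 5.35·ln(C/C0): for a fixed reconstructed warming, assuming a higher past CO2 level implies a lower sensitivity per doubling. The warming and baseline below are invented round numbers, not values from the paper.

```python
import math

def inferred_sensitivity(delta_t, co2, co2_baseline=300.0):
    """Sensitivity per CO2 doubling implied by a reconstructed warming
    delta_t (K) if CO2 rose from co2_baseline to co2 ppm, using the
    simplified forcing F = 5.35 * ln(C/C0) W/m^2."""
    forcing = 5.35 * math.log(co2 / co2_baseline)
    f2x = 5.35 * math.log(2.0)
    return delta_t * f2x / forcing

warming = 4.0  # hypothetical reconstructed warming, K (invented)
for co2 in (450.0, 600.0):
    s = inferred_sensitivity(warming, co2)
    print(f"assumed CO2 {co2:.0f} ppm -> implied sensitivity {s:.1f} K/doubling")
```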

  63. DM said:

    ” Of course there is a game theory aspect here, but it isn’t in the science, it is in the politics and economics which depend on the behaviour of people, rather than climate physics (or biology) which doesn’t. This is why politics and economics are more difficult than physics and biology.”

    Well stated.

    BTW, what did scientists do before an economist came up with the concept of Granger causality in the 1960s? Perhaps they thought long and hard about a problem, understanding why it was like it is in the first place, and ultimately found a solution of which others were ignorant. So for example, they noticed that the daily tidal cycles were linked to the phases of the moon and then built up their models from there. What was that process called? The directionality could be easily inferred with a lead or lag applied.

    I’ve a feeling Granger causality was dreamed up because economists really struggle to model human “gaming” of markets. They try to find any subtle correlations they can with other time-series and use that to help validate their model(s). Unfortunately, this may not lead to a root cause as the “helper” correlated time-series is just as mysterious, even if directionality can be inferred.

    It does remind me of climate indices and the concept of teleconnections. Many research papers claim causality of one climate index from another without ever getting to an overall root cause. I have found several references to one climate time-series to be “Granger causal” to another. The one I found a few years ago which is clearly evident IMO is the SOI time-series leading the MJO pentad time-series by ~21 days. https://geoenergymath.com/2020/02/21/the-mjo/

    So does ENSO cause MJO ? Likely https://www.climate.gov/news-features/blogs/enso/catch-wave-how-waves-mjo-and-enso-impact-us-rainfall

    “The S.S. ENSO cruise ship and MJO speedboat making their way across the “harbor” of the tropical Pacific Ocean. The cruise ship represents the stationary ENSO pattern creating steady, rolling waves. The speedboat represents the rapidly moving MJO travelling through the waves created by the S.S. ENSO, altering the wake that the MJO speedboat is producing.”
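    The lead-lag idea is simple to demonstrate on synthetic data: build a “follower” series that copies a “leader” 21 steps later (the lag is hard-coded to mirror the ~21-day figure; nothing here uses real SOI or MJO data) and recover the lag from the cross-correlation peak.

```python
import math, random

rng = random.Random(1)
n = 2000
# Synthetic 'leader' (stand-in for SOI) and noisy 'follower' that copies
# it 21 steps later (stand-in for the MJO pentad series).
leader = [math.sin(0.05 * t) + rng.gauss(0, 0.3) for t in range(n)]
follower = [leader[t - 21] + rng.gauss(0, 0.3) if t >= 21 else 0.0
            for t in range(n)]

def xcorr(a, b, lag):
    """Pearson correlation of a[t] with b[t+lag]."""
    pairs = [(a[t], b[t + lag]) for t in range(len(a) - lag)]
    ma = sum(x for x, _ in pairs) / len(pairs)
    mb = sum(y for _, y in pairs) / len(pairs)
    cov = sum((x - ma) * (y - mb) for x, y in pairs)
    va = sum((x - ma) ** 2 for x, _ in pairs)
    vb = sum((y - mb) ** 2 for _, y in pairs)
    return cov / math.sqrt(va * vb)

best_lag = max(range(0, 60), key=lambda k: xcorr(leader, follower, k))
print(f"best lag: {best_lag} steps, r = {xcorr(leader, follower, best_lag):.2f}")
```

    Of course, recovering a lag this way shows directionality, not a root cause, which is exactly the limitation noted above.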

  64. Joshua,
    No, Tom isn’t kicking around at Skeptical Science. I think he didn’t like the atomic bomb analogy that was being used a number of years ago and parted ways at that stage.

  65. dikranmarsupial says:

    Great pity, Tom made some excellent contributions at SkS (especially Climate Change Cluedo: Anthropogenic CO2).

  66. Joshua says:

    Yeah. Tom is a great commenter. Haven’t seen anything from him anywhere for quite a while.

  67. Willard says:

    I congratulated Nic for his uninformative paper:

    https://judithcurry.com/2022/09/20/important-new-paper-challenges-ipccs-claims-about-climate-sensitivity/#comment-980467

    The word uninformative is strictly conventional here.

  68. dikranmarsupial says:

    I think a mildly informative prior on the number of occurrences of “objective” (and its cognates) would be more appropriate than a Jeffreys’ prior ;o)

    The idea that it is the prior that is subjective or objective, rather than the definition of probability that is used throughout seems … interesting.

  69. Dave_Geologist says:

    One of the issues with using biological proxies like leaf stomata in deep time is that if a warmer climate (for example, the Eocene) was maintained for a long time, you’d expect natural selection to adapt lifeforms to that climate. There is probably a cost to changing stomatal size in response to secular change in the environment, e.g. you might be stuck at one end of the range with no flexibility beyond that, but nevertheless it’s your least bad option. If the environment stays changed, lifeforms which are naturally suited to it and don’t have to adapt should have an advantage.

    That doesn’t matter for the Hockey Stick because the timeframe is too short, and probably not for the whole Quaternary, because we were flipping in and out of Ice Ages too quickly and if anything, selection would be for adaptability.

    That’s somewhat related to the late-19th-century Baldwin’s Rule, the idea that adaptability lets a species tolerate a changed, unsuitable environment (rather than going extinct) long enough for natural selection to work on the tail of the distribution and make it the centre, by culling the original centre and the other tail. It’s been invoked for tool use in New Caledonian crows. The parent population in SE Asia learns to use tools by young birds watching experienced birds, especially parents. But in New Caledonia the tool use is hard-wired: eggs hatched and brought up in the absence of adult tool users nevertheless develop the same tool use as they mature.

    To work mathematically ( 😉 ) there needs to be a cost to the adaptation. Baldwin was a psychologist interested in animal learning as a model for human learning. For him the cost was obvious: hard-wired individuals would have an advantage over learners while the learners were still learning. Some learners die before they’ve had time to learn. So over time the hard-wired proportion increases and the learner proportion decreases, until the adaptation becomes fixed.

    There has been much debate about extending it to other traits and fitness landscapes. Some sterile and definitional, like the debates about sexual selection where some say that things Darwin hadn’t thought of or observed don’t count because you have to rigidly stick to Darwin’s definition. The stomatal example would be that individuals whose gas exchange was just right and didn’t have to open or close their pores (or rather, who opened and closed them around a central optimum) would have an advantage in the warmer world, but in order for the population to survive until they’d been selected, enough flexibility was required for their ancestors not to die out.

    Numerical models, often Monte Carlo, have shown that problems which were intractable to closed-form solutions or simple thought-experiments are tractable. People had disagreed because depending on what simplifications they made to make the problem tractable, they got opposite answers. There is now a literature which shows when the rule applies, when it doesn’t, and when it applies in reverse and hinders evolutionary adaptation. An example of the latter is a vertical step in the fitness landscape, where only a very rare freak mutant can survive inflexibly on the new peak (and probably won’t be joined by another it can breed with), and only adaptable individuals which can survive either side of the cliff are selected.
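    Dave’s cost-of-learning setup lends itself to exactly this kind of simulation. Here is a minimal Monte Carlo sketch (the survival probabilities are hypothetical, chosen only to illustrate the mechanism, not taken from any of the papers discussed): learners tolerate the changed environment but pay a cost, since some die before they finish learning, so the hard-wired variant slowly displaces them.

```python
import random

def simulate(generations=200, pop=1000, seed=1):
    """Toy Monte Carlo of the Baldwin effect. Learners survive the changed
    environment but pay the cost of learning (80% survival, hypothetical);
    hard-wired individuals express the adaptation from birth (90% survival,
    also hypothetical)."""
    random.seed(seed)
    hard_wired = 10                      # rare hard-wired mutants at the start
    learners = pop - hard_wired
    for _ in range(generations):
        survivors_hw = sum(random.random() < 0.90 for _ in range(hard_wired))
        survivors_ln = sum(random.random() < 0.80 for _ in range(learners))
        total = survivors_hw + survivors_ln
        if total == 0:
            return 0.0                   # population went extinct
        # next generation drawn in proportion to surviving parents
        hard_wired = round(pop * survivors_hw / total)
        learners = pop - hard_wired
    return hard_wired / pop              # fraction hard-wired at the end

print(simulate())
```

    Run it and the hard-wired fraction drifts toward fixation, which is the core of Baldwin’s argument: the learners keep the population alive long enough for selection to fix the hard-wired trait.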

  70. Willard says:

    Perfect example, Dave.

    I think it is important to realize that complexity theory is about solving games, so the concept of strategy is different than usually understood. The game of Chess is intractable. We have Chess engines that routinely beat the best carbon life forms in the world. Yet we have no algorithm that can make Twitch streamers fancy a theory as to how it would be possible to cheat at over-the-board Chess with a sex toy:

    https://www.theguardian.com/sport/2022/sep/09/chess-hans-niemann-hits-back-over-cheating-controversy-in-st-louis

  71. Dave_Geologist says:

    I couldn’t work out how he was supposed to have cheated, Willard, until I saw an analysis that said, yes, 80% of his moves matched the top Chess AI, but if you excluded forced moves and obvious moves that even a novice would make, it fell to well below 50%. So presumably the accusation there was of a confederate wirelessly communicating the moves of a computer running in parallel. This one is somewhat different. Maybe Carlsen let slip his surprise-move plan when he’d had one too many in the bar the night before?

    Back to Monte Carlo. Some may be aware of the famous-for-decades McClintock paper about women in dorms subconsciously synchronising menstrual cycles. Repeated attempts to replicate it failed to find any synchronisation. She had not documented her data perfectly, and of course it was the 1960s so none of it was digital, but some enterprising soul replicated it sufficiently to re-analyse it using modern numerical methods.

    The intractability there and lack of a closed-form solution was to do with her statistical significance tests. From memory, it was tractable for same-length cycles with different start dates, and for different-length cycles with the same start date, but not for the real-world situation where both were different. She did both tractable tests and it came out hugely significant, and so she, her supervisor, her external examiners and the editors and peer reviewers at Nature persuaded themselves it was such a slam-dunk that it must be significant in the intractable full test.

    The full test was tractable by Monte Carlo simulation (1000 trials, do more than 50 match her observations, or whatever), and sure enough, it came out as not significant.
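    The shape of that Monte Carlo test is easy to sketch. In the following (the onset data and the synchrony statistic are hypothetical stand-ins, not McClintock’s actual data or test statistic), we simulate random cycle onsets many times and count how often chance alone looks at least as synchronised as the observation:

```python
import random

def mean_pairwise_gap(onsets, cycle=28.0):
    """Mean circular distance (days) between all pairs of cycle onsets;
    smaller means more synchronised."""
    gaps = []
    for a in range(len(onsets)):
        for b in range(a + 1, len(onsets)):
            d = abs(onsets[a] - onsets[b]) % cycle
            gaps.append(min(d, cycle - d))
    return sum(gaps) / len(gaps)

def monte_carlo_p(observed_onsets, trials=1000, cycle=28.0, seed=0):
    """Monte Carlo p-value: the fraction of random-onset trials that look
    at least as synchronised as the observation."""
    random.seed(seed)
    obs = mean_pairwise_gap(observed_onsets, cycle)
    n = len(observed_onsets)
    hits = 0
    for _ in range(trials):
        sim = [random.uniform(0.0, cycle) for _ in range(n)]
        if mean_pairwise_gap(sim, cycle) <= obs:
            hits += 1
    return hits / trials

# hypothetical dorm of five women, onsets clustered within a few days
# (26.5 is circularly close to 1.0 on a 28-day cycle)
print(monte_carlo_p([1.0, 2.5, 3.0, 4.5, 26.5]))
```

    The point of Dave’s story is that the full real-world case, with both cycle lengths and start dates varying, needs this kind of simulated null distribution because no closed-form test exists.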

    Bizarrely I first came across it in a completely different context, the use of 3-D shape curvature in geology to predict natural fracturing, and what do you do at the edge of the map? The underlying maths turns out to be the same.

  72. Willard says:

    80% is actually a low number, Dave. Top Grandmasters routinely play 90% top moves. That specific guy is starting to be known as some kind of Mozart. Intuitive players are the worst to match to a chess engine, usually not an AI.

    The accusation rests mostly on the online past of that poor soul. And now he is getting the Serengeti treatment. It would be interesting to model the Serengeti treatment with distributed AI.

    Which makes me think of a simple way to make your point: ant colony optimization algorithms routinely solve the travelling salesman problem. That is an NP-hard problem. Heck, it looks like they can even find many solutions to the Knight’s Tour problem:

    Ants which attempt to find later tours are more likely to follow higher levels of pheromone. This means that they are more likely to make the same moves as previously successful ants.

    There is a balance to be struck. If the ants follow the successful ants too rigidly, then the algorithm will quickly converge to a single tour. If we encourage the ants too much not to follow the pheromone of previous ants, then they will just act randomly. So it is a case of tuning the algorithm’s parameters to try and find a good balance.

    Using this algorithm, we were able to find almost half a million tours. This was a significant improvement over previous work, which was based on a genetic algorithm. These algorithms emulate Charles Darwin’s principle of natural evolution – survival of the fittest. Fitter members (those that perform well on the problem at hand) of a simulated population survive and weaker members die off.

    https://phys.org/news/2014-01-ants-chess-problem.html
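    The exploitation/exploration balance the quoted piece describes can be sketched directly. Below is a minimal ant colony optimisation for the travelling salesman problem (parameter values are illustrative defaults, not taken from the paper): pheromone rewards edges from short earlier tours, evaporation stops premature convergence, and the alpha/beta exponents tune how rigidly ants follow previous ants versus the greedy distance heuristic.

```python
import math
import random

def aco_tsp(coords, ants=20, iters=50, alpha=1.0, beta=2.0,
            evaporation=0.5, seed=0):
    """Minimal ant colony optimisation for the TSP. alpha weighs pheromone
    (follow earlier successful ants), beta weighs the distance heuristic,
    evaporation keeps the colony from converging on one tour too quickly."""
    random.seed(seed)
    n = len(coords)
    dist = [[math.dist(coords[a], coords[b]) or 1e-9 for b in range(n)]
            for a in range(n)]                    # guard against zero distance
    tau = [[1.0] * n for _ in range(n)]           # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                nxt = random.choices(cand, weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # evaporate, then each ant deposits pheromone on its tour,
        # more for shorter tours
        tau = [[t * (1.0 - evaporation) for t in row] for row in tau]
        for length, tour in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

    On a toy four-city square it should find the perimeter tour; the interesting behaviour on larger problems is how the pheromone matrix concentrates on good edges without the colony collapsing onto a single tour too early.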

  73. Steven Mosher says:

    The situation is much like climate change, imo, and as such it seems to me that there’s a point where you just look at the whole thing as rather unmodelable – and thus you approach it more as a question of risk management where you have low probability but high damage risk.

    heres the thing

    its always modelable. that is humans will always apply a model.

    since i hang out in sceptic ville i see it up close.

    they reacted to covid with the “flu” model. its just the flu, or the “cold” model

    its just a cold. then they responded with…. “its a deep state plot” model.

    the nice thing about mathematical modelling is you generally make your assumptions

    obvious. and you get “predictions”.

  74. russellseitz says:

    Condolences to Willard and us all:
    A real Grandmaster last week:

    https://vvattsupwiththat.blogspot.com/2022/09/is-climate-crisis-rigid-designator-in.html

  75. Dave_Geologist says:

    Willard, the claim being made is that he doesn’t play every AI move because that would be obvious, and even 20% of moves could make a difference at the right stage in the match. But I suspect he’s just branded by his youthful indiscretions.

    Don’t even need ants for that problem. Slime molds can solve it.

    The Blob: A Genius without a Brain. (Available for five days, probably UK only.)

    Of course both use iterative learning from initially random moves, the living version of a Monte Carlo solution or machine learning.

  76. Willard says:

    Dave, a Chess engine is no AI, but an evaluation function with classic pruning.

    The main problematic move in the story is one that has already been played, including by Magnus I believe. It would count as an obvious move. 80% would be high in rapid Chess; it is a low number in classical Chess. Magnus himself scores around 98% with the Chess dot com tool.

    An engine only needs to find on average one or two super moves to exploit an advantage that would translate into full domination. Chess has become very, very accurate. One stupid mistake and you lose. Most mistakes are made in lost positions.

    Cool paper about the amoeba!

  77. Dave_Geologist says:

    Physarum Machines: Computers from Slime Mould

    Physarum solver: A biologically inspired method of road-network navigation

    We have proposed a mathematical model [ 😉 ] for the adaptive dynamics of the transport network in an amoeba-like organism, the true slime mold Physarum polycephalum. The model is based on physiological observations of this species, but can also be used for path-finding in the complicated networks of mazes and road maps. In this paper, we describe the physiological basis and the formulation of the model, as well as the results of simulations of some complicated networks. The path-finding method used by Physarum is a good example of cellular computation.

    Some references.

    Recent references.

  78. Joshua says:

    Steven –

    > its always modelable. that is humans will always apply a model.

    Agreed. We necessarily model when we seek understanding. Thanks for the reminder.

    So what do I really mean when I say to treat it as essentially unmodelable? I have to think about that.

  79. dikranmarsupial says:

    Joshua “So what do I really mean when I say to treat it as essentially unmodelable?”

    modelable doesn’t necessarily mean predictable? e.g. we can’t predict climate states even though climate is predictable (ask Lindzen ;o)?

  80. angech says:

    Willard says:
    “Dave, a Chess engine is no AI, but an evaluation function with classic pruning.”
    Leads to a good and bad problem.
    All algorithms have a weak point somewhere.
    If you can find that weak point against an AI you can play the same game and win every time.
    Mind you finding the weak point may take a multitude of games.

  81. Joshua says:

    angech –

    > If you can find that weak point against an AI you can play the same game and win every time.
    Mind you finding the weak point may take a multitude of games

    Lol. How many games did it take you to beat Deep Blue every time?

  82. Willard says:

    > All algorithms have a weak point somewhere.

    No. Here is a short list of solved games:

    http://webdocs.cs.ualberta.ca/~chinook/games/

    I don’t think it’s up-to-date.

    ***

    > you can play the same game and win every time.

    For Chess, your strategy would work if the engine did not follow a book of openings at the beginning. The way this is implemented allows for variations. For instance, in the same position, Stockfish might go for one move 70% of the time, another 20% of the time, and a third 10% of the time.

    Also, the opening book can change. This allows programmers to patch their opening weaknesses. In a match, this is important.
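    That kind of weighted opening book is simple to sketch (the moves and the 70/20/10 split below are hypothetical, echoing the example above, not Stockfish’s actual book):

```python
import random
from collections import Counter

# hypothetical book entry: candidate replies with selection weights,
# mirroring the 70% / 20% / 10% split described above
book = {"e2e4": 0.7, "d2d4": 0.2, "c2c4": 0.1}

def pick_book_move(book):
    """Draw one move from the book with probability proportional to its weight."""
    moves = list(book)
    return random.choices(moves, weights=[book[m] for m in moves])[0]

random.seed(0)
counts = Counter(pick_book_move(book) for _ in range(10_000))
print(counts)   # roughly 7000 / 2000 / 1000 over many games
```

    Patching an opening weakness between games is then just editing the weights, which is why a fixed winning line against an engine with a book like this is so hard to find.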

  83. angech says:

    Joshua says: September 25, 2022 at 3:28 pm angech –
    Lol. How many games did it take you to beat Deep Blue every time?

    Is it available for an average player to access?
    If I was to try I would try to get a copy of the winning move sequences grandmasters have already played with and won as a starter.
    And no,
    While I have tickets on myself, I have been a very basic but keen player for over 50 years and would have a lot of trouble beating a mere 1800 player at my very best.

  84. russellseitz says:

    Willard, three New Zealand climate philosophy blokes have come up with a radical new ClimateBall opening move:

    https://vvattsupwiththat.blogspot.com/2022/09/think-of-all-money-it-will-save-on.html

  85. Willard says:

    Brilliant. I would suggest that climate scientists could follow up on their work if and only if world leaders pass a reading comprehension exam on the previous reports.

  86. I’m not a mathematical modeler; I’m an immunologist. But the pandemic showed me that if someone is going to use a mathematical model, then they should know the meaning of terms central to that model.

    For example, some non-experts tried to model “herd immunity”, a term from my field of expertise, without actually knowing what that term means. They treated herd immunity as occurring whenever SARS-CoV-2 cases/day or deaths/day decreased, when anyone with even a basic understanding of immunology or epidemiology knows factors other than herd immunity can cause cases/day or deaths/day to decrease. Moreover, herd immunity is defined under baseline conditions where people are acting as they usually do for that time of the year (ex: during the same time of year in 2019) to avoid incorrectly attributing to herd immunity the impacts of voluntary behavior changes (ex: people voluntarily going out less), government policies involuntarily changing behavior (ex: lockdowns), non-pharmaceutical interventions such as increased mask-wearing, etc.

    Experts in modelling and the meaning of immunological/epidemiological terms, like Neil Ferguson, understood this just fine. But non-experts like Nic Lewis didn’t, leading to those non-experts claiming herd immunity at implausibly low infection rates, in ways that incorrectly minimized the risk of COVID-19 and underplayed the impact of government policy in a way that conveniently suited the non-experts’ ideology. And beyond this, they also willfully ignored factors that could push the herd immunity threshold higher or make it less feasible to reach, such as SARS-CoV-2 mutating to a form that re-infects people because previously infected people are not immune to said variant. They ignored those factors despite people with more expertise explaining those factors to them by at least May 2020, leading to these non-experts greatly underestimating COVID-19 deaths in Sweden (including Stockholm), India, New York City, Geneva, London, etc. The non-experts also underestimated COVID-19 deaths on the Diamond Princess because they messed up on basic epidemiological concepts like “right censoring”. And so on.
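    The point that falling cases under interventions are not evidence of herd immunity drops out of even the simplest compartmental model. Here is a toy SIR sketch with hypothetical parameters (R0, recovery rate, and the timing and size of the transmission cut are all illustrative): transmission is reduced partway through, standing in for behaviour change and NPIs, cases then fall, yet the fraction ever infected ends far below the classic homogeneous threshold of 1 − 1/R0.

```python
def sir(r0=2.5, gamma=1/7, i0=1e-4, days=365, cut_day=30, cut_factor=0.3):
    """Toy discrete-day SIR. From cut_day onward transmission is multiplied
    by cut_factor (behaviour change / NPIs), *not* reduced by immunity.
    All parameter values are hypothetical, for illustration."""
    s, i, r = 1.0 - i0, i0, 0.0
    beta0 = r0 * gamma                  # baseline transmission rate
    for day in range(days):
        beta = beta0 * (cut_factor if day >= cut_day else 1.0)
        new = beta * s * i              # new infections this day
        s, i, r = s - new, i + new - gamma * i, r + gamma * i
    hit = 1.0 - 1.0 / r0                # classic homogeneous HIT
    return r, hit                       # fraction ever infected vs threshold

ever_infected, hit = sir()
print(f"ever infected: {ever_infected:.0%}  vs  HIT: {hit:.0%}")
```

    Cases decline after the cut because behaviour changed, not because the population neared the threshold; attributing that decline to herd immunity is exactly the mistake described above.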

    The parallels to ideological, non-expert contrarianism on climate science are obvious.

    Sources on this below, for the curious:

    https://archive.ph/Xjyec#selection-15437.0-15455.263
    View at Medium.com

    “In the (unlikely) absence of any control measures or spontaneous changes in individual behaviour […]”

    Click to access 2020-03-16-COVID19-Report-9.pdf

    https://archive.ph/WK9ic#selection-299.0-307.242
    [14 deaths: https://www.science.org/doi/10.1126/science.abd4246 ]
    https://archive.ph/Qunlx#selection-397.28-401.111
    https://archive.ph/r9JUJ#selection-4875.154-4875.319
    https://archive.ph/yx2Nh#selection-2177.0-2177.358
    https://archive.ph/oQ8SB#selection-46711.0-46713.29
    https://archive.ph/DY3z0#selection-227.0-227.309

  87. Atomsk,
    Yes, those are good points. I do think it’s good for others to get involved in investigating these various topics, but it’s also important for people to try and understand the standard terminology and to correct themselves when they get something wrong.

    I do find the way that Nic Lewis frames things to be very irritating. Scientific research is a process and there often isn’t an objectively correct way to determine what assumptions are reasonable and what aren’t. Or, maybe more correctly, there may be a range of assumptions that are all defensible. He, however, seems to think that the goal is to convince people that his assumptions are the objectively correct ones, and that those of others are somehow wrong. I don’t find this very constructive.

  88. I think the framing problem is pretty common. We all start with our universe of touchstones, things like evidence we accept as true, open questions and belief systems and our individual rhetorical skills. We proceed into the fray with a framing that is consistent with those things. If people are not open to adding to or reconsidering their touchstones, the discussion hardens into classic adversarial stuff and generally avoids much deep exploration of the topic at hand. The discussion can descend into something that looks like reactive tit-for-tat exchanges. That seems like an unfortunate dead end. It is not a dead end if parties are willing to revisit their own touchstones and be open to learning or accepting something new to their universe.

  89. verytallguy says:

    Atomsk,

    The covid threads at Judith’s were very instructive.

    As are current threads from self-appointed Galileos.

    I seem to be banned from there now, which is, on the whole, a Good Thing.

  90. Joshua says:

    The problem wasn’t merely that Nic’s COVID modeling was way off.

    His focus on modeling a heterogeneous variable of spread of infections seemed interesting, even if not as much of a genius innovation as it was portrayed.

    Yes, it seems there was a problem with his basic understanding of the terminology, but I don’t think he necessarily would have defined “herd immunity” as any state where there was any decrease in infection rate. When he predicted a “herd immunity threshold,” he meant to argue that the population infection percentage was at a level where a person without any immunity was unlikely to encounter an infectious person.

    The problem is that he looked at emergent phenomena, conflated signal and noise, and claimed that his modeling was verified. Of course he was wrong about that.

    It was obvious at the time that there was the potential that he was conflating signal and noise, and that he was wrongly claiming that his modeling was verified – because of the myriad confounding variables he failed to account for.

    Of course, he even went so far as to basically invent a causal mechanism to explain his “motivated” confirmation for reaching a “HIT” at population infection levels even below 20%: he argued that T cell immunity (largely resulting from previous infections with other coronaviruses) would protect against infection. He didn’t let his lack of experience in researching virology get in the way of his theorizing – despite that people who did have such experience were clear in saying that T cell immunity against infection (as compared to against severe infection) was unlikely.

    So then the next question is whether that tendency to wrongly see confirmation of his model is a more generalizable tendency – perhaps with the influence of ideological “motivated reasoning.”

    There’s no reason to assume it would be – but neither is there a reason to dismiss that possibility.

    The unfortunate aspect of all this is the reflexive tendency among some “skeptics” to just dismiss Nic’s problematic COVID modeling and/or just dismiss the errors he made.

    Sadly, Judith is one of those “skeptics.”

  91. Joshua says:

    And just because Judith won’t allow me to post the following at her site (CENSORSHIP! WHY DOES SHE HATE FREE SPEECH!?!?)

    I will also point out that in June of 2020, Nic agreed with Willis that there would likely be on the order of another 160k deaths in the 11 countries examined in Flaxman et al., as a hard upper limit.

    No doubt, Nic agreed with that number based on his conceptualization of what “herd immunity” is, and the dynamics of how it would be reached with COVID.

    They were only off by an (approximately) trifling 450% (and counting) – depending on how you assess the comprehensiveness and accuracy of the count at Worldometers – my own guess is that while some factors suggest that it is an overcount, others suggest it is an undercount – perhaps balancing out? (At any rate, Nic and Willis’ prediction was based on those figures.)

  92. Joshua says:

    angech –

    Also, since you’re commenting on this thread, do you have anything to say about the views you expressed regarding Nic’s COVID modeling?

  93. Willard says:

    Speaking of framing, my modest proposal to Nic was to use “uninformed” instead of “objective”:

    I agree with you – Nic has updated his work in the most uninformed way possible.

    You should try to ask The B Company to run their simulations with that kind of uninformed toolkit. If you can pull it off, please make sure to keep us uninformed.

    https://judithcurry.com/2022/09/20/important-new-paper-challenges-ipccs-claims-about-climate-sensitivity/#comment-980508

    Since these labels do not matter, I await with bated breath his new terminology usage.

  94. Joshua,
    There are a number of issues with how Nic presents his work. It’s wildly over-confident, despite there being clear indications that he’s regularly wrong. I do think it’s good to challenge the prevailing view, but it’s also good to recognise that it’s not good to base decisions on one person’s work (even if it’s your own). Nic does policy-relevant work and it seems clear that he’s doing it for the policy-relevance and is quite comfortable to try and influence policy. He’s been involved in parliamentary inquiries on climate change, and I’m aware that some of the UK committees were aware of his Covid work (although I don’t think it was taken seriously).

    There’s nothing wrong, of course, with providing advice to policy makers, but – ideally – scientists should aim to provide the scientific community’s best understanding, rather than – as Nic does – present his work as objectively the best and everyone else’s as being fundamentally flawed (I may exaggerate a bit, for effect, but not much).

  95. Joshua says:

    Anders –

    > Nic does policy-relevant work…

    Therein lies a problem. Let’s imagine a universe where working towards a common goal was a shared objective. In such a world, there would have to be an agreement about what is or isn’t policy advocacy.

    Back in the real world, as you well know, most “skeptics” will insist that someone like Nic’s science is merely a focus on “truth” as distinguished (in their view) from those whose climate science work they disagree with. Hence Willard’s point about “objective priors.”

    This seems like a fundamental obstacle. In the non-idealized universe, both sides inherently see the science on the other side as being rooted in biased advocacy. And for the most part neither will acknowledge the potential of bias to influence their view of the science.

    How could that critical obstacle be overcome?

    So here,

    >…rather than – as Nic does – present his work as objectively the best and everyone else’s as being fundamentally flawed (I may exaggerate a bit, for effect, but not much).

    Through the “skeptics” lens, it is the “mainstream” outlook that is being presented as objectively best with everyone else’s being fundamentally flawed. Without working backwards from a viewpoint on which side is correct about that, how is that fundamental rift overcome? What would the model be?

    I don’t know if you listened to or read that Ezra Klein podcast I linked upstairs. I was favorably impressed by the interviewee; in the end it felt somewhat hopeful – as he stressed a positive sum rather than zero sum orientation, IMO. The focus is more on resolving infrastructure needs that will exist somewhat independently of how we view the immediate impact of ACO2 emissions, and the economic benefits from addressing those needs.

  96. Joshua,

    Through the “skeptics” lens, it is the “mainstream” outlook that is being presented as objectively best with everyone else’s being fundamentally flawed. Without working backwards from a viewpoint on which side is correct about that, how is that fundamental rift overcome? What would the model be?

    I guess my view is also that Nic’s work is still part of the mainstream, even if he tries to pretend that it is somehow objectively better than everyone else’s work. You could argue that the public narrative that Nic should present is something along the lines of “my work suggests that there is a reasonable chance of equilibrium climate sensitivity being less than 2K, but there is a lot of other work that suggests that it probably isn’t”. Similarly, others could do the reverse – “a lot of work suggests that the ECS is probably above 2K with a best estimate around 3K. There is some work, however, that suggests a reasonable chance of being below 2K.”

  97. I didn’t listen to the Ezra Klein podcast with Jesse Jenkins, but I do mean to. Jesse Jenkins does seem reasonably sensible, from what I’ve seen.

  98. russellseitz says:

    ATTP

    DART impacts in a few minutes. It will be interesting to see if detectable Wigner energy release is triggered by the impact – the target asteroid has been soaking up radiation damage for 4 Ga at <50 K

  99. Re: Joshua

    Judith Curry’s defense of Nic Lewis is that he was less wrong than Anthony Fauci. That is absurd, and I say that knowing Fauci is a much better immunologist than I am. For instance, at around the time Lewis peddled incorrectly low estimates for how many people would need to be infected to achieve herd immunity, Fauci was making points important to mathematical modelling of herd immunity, such as:

    – giving estimates that were more accurate, and at least more than 3 times larger than Lewis’ estimates of needed infection rates
    – pointing out factors that could make herd immunity harder to achieve, like vaccine denialism / vaccine hesitancy and infection-induced immunity waning (to the point of people possibly being re-infected)

    I don’t think Fauci was perfect during the pandemic. But he didn’t need to be perfect, unless one commits the nirvana fallacy. As an expert, Fauci did a commendable job and much better than the non-expert Nic Lewis, even if some folks find that fact inconvenient for their non-expert narrative. Maybe folks should treat expert criticism more seriously when they try to engage in modelling in fields in which they lack expertise?

    It’s especially ironic that in October 2020 Lewis said evidence against “supposed scientific experts” was building, only for Lewis to have to admit 3 months later that it was he who was wrong, in the face of large COVID-19 waves in places he incorrectly claimed had achieved herd immunity before those large waves. Of course, he then went on to repeat that same distortion for India before its massive delta wave, even after I corrected him on that, as if he still hadn’t learned his mathematical modelling was wrong. Oh well.

    Sources on this below:

    https://archive.ph/jxwzX#selection-1719.59-1719.134
    https://archive.ph/iTN29#selection-17237.0-17257.354
    https://archive.ph/DY3z0#selection-227.0-227.309
    https://archive.ph/oQ8SB#selection-46711.0-46713.29

    https://jamanetwork.com/journals/jama/fullarticle/2767208
    https://www.theguardian.com/world/2020/jun/29/fauci-us-unlikely-achieve-herd-immunity-coronavirus-even-with-vaccine
    https://time.com/5825386/herd-immunity-coronavirus-covid-19/

    https://en.wikipedia.org/wiki/Nirvana_fallacy

  100. angech says:

    angech (Comment #215038)
    September 26th, 2022 at 8:27 pm

    Joshua
    I found angech | July 27, 2020 at 7:02 pm. I do not think I wrote on his first post, though lots of people here did.
    Why herd immunity to COVID-19 is reached much earlier than thought – update
    Posted on July 27, 2020 by niclewis

    “A key reason for variability in susceptibility to COVID-19 given exposure to the SARS-CoV-2 virus causing [it] is that the immune systems of a substantial proportion (35% to 80%) of unexposed individuals have T-cells, circulating antibodies or other components that are cross-reactive to SARS-CoV-2”

    I said
    Sorry, it is just not so.
    -35-80% sounds like an estimate of ECS, so broad it is meaningless.
    Multiple different unprovable contributing factors puts the fudge on top.

    HIT is not a fixed number, the threshold varies seasonally and with the specifics of the infecting agent. The troubling fact that young children find it hard to catch shows that for this virus the other factor is the presence of ACE2 receptors, meaning that the load of virus needed to infect could vary with how many receptors you have (more as you get older).
    This may be much more important than speculation about T cells which does not transfer from the Petri dish to real life.

    He was not happy with me.

  101. Steven Mosher says:

    “I’m not a mathematical modeler; I’m an immunologist. But the pandemic showed me that if someone is going to use a mathematical model, then they should know the meaning of terms central to that model.”

    theres a funny scene. its all just numbers

    just ask my quant

    nothing beats having a SME in your modelling group who understands the nouns
    and verbs of the subject area

  102. Steven Mosher says:

    Dave, a Chess engine is no AI, but an evaluation function with classic pruning.

    huh?

    1. when we did mission planning for tacit rainbow
    https://en.wikipedia.org/wiki/AGM-136_Tacit_Rainbow

    its AI was nothing more than a cost function and pruning

    or rather A* with a cost function/evaluation function.

    thats no AI its just a logistic function.

    when people are presented with complex animal behavior they imagine all sorts of complicated hidden
    processes

    https://en.wikipedia.org/wiki/Braitenberg_vehicle
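    The “A* with a cost function/evaluation function” idea is small enough to sketch on a toy grid (a generic illustration of the algorithm, nothing to do with the actual Tacit Rainbow planner):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2-D grid of 0 (free) / 1 (blocked) cells.
    g = path cost so far, h = Manhattan-distance heuristic; the frontier
    is expanded in order of f = g + h, the classic evaluation function."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None   # goal unreachable
```

    With an admissible heuristic this returns a shortest path, which is the sense in which a cost function plus pruning can look like “intelligence” without any hidden complexity.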

  103. Willard says:

    Here, Mosh:

    > Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games (Tic-tac-toe, Chess, Connect 4, etc.). It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.

    https://en.wikipedia.org/wiki/Alpha–beta_pruning
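    The quoted description translates into a short sketch (over an abstract game tree of nested lists, not a Chess position):

```python
def alphabeta(node, depth, alpha, beta, maximizing, value, children):
    """Minimax with alpha-beta pruning over an abstract game tree.
    `value(node)` scores leaves; `children(node)` lists successors.
    Branches that cannot influence the root decision are cut off."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, value, children))
            alpha = max(alpha, best)
            if alpha >= beta:          # remaining siblings cannot matter
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, value, children))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# tiny two-ply tree: root (maximising) -> two min nodes -> leaf scores
tree = [[3, 5], [2, 9]]
value = lambda n: n if isinstance(n, int) else 0       # leaf scores
children = lambda n: n if isinstance(n, list) else []  # successors
print(alphabeta(tree, 2, float("-inf"), float("inf"), True, value, children))
# prints 3; the leaf 9 is pruned because min(2, ...) can never beat 3
```

    As the quote says, this returns exactly the minimax move; the pruning only skips work, it never changes the answer.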

    That is no AI like Alpha Chess.

    You can look for yourself:

    https://github.com/official-stockfish/Stockfish

  104. Willard says:

    I might as well add, in fairness, that Stockfish can indeed be empowered with AI:

    A note on classical evaluation versus NNUE evaluation

    Both approaches assign a value to a position that is used in alpha-beta (PVS) search to find the best move. The classical evaluation computes this value as a function of various chess concepts, handcrafted by experts, tested and tuned using fishtest. The NNUE evaluation computes this value with a neural network based on basic inputs (e.g. piece positions only). The network is optimized and trained on the evaluations of millions of positions at moderate search depth.

    That the engine can tune its own evaluation function is critical for adaptive behavior.

    I forgot to mention that because Stockfish is already quite strong out of the box.

  105. Joshua says:

    angech –

    Time for accountability.

    From October 14, 2020

    His argument was over and he won months ago when he argued that a rate somewhere between 17% and 19% would most likely guarantee herd immunity in the Swedish setting only.

    https://judithcurry.com/2020/09/22/herd-immunity-to-covid-19-and-pre-existing-immune-responses/#comment-929120

    From October 14, 2020

    And, of course, he never argued “in the Swedish setting only.” You just made that part up.

  106. dikranmarsupial says:

    Angech: “I do not know of any commentator or blogger who does not cherry pick points to point out flaws in arguments of others. Both sides do it incessantly.”

    A good example of an inappropriate prior ;o)

    You do know commentators/bloggers who don’t cherry pick, it is just that your prior belief that they do is so strong you can’t accept the evidence when you see it.

  107. dikranmarsupial says:

    Unfortunately AI has expanded in scope since the start of the deep learning era to include more emphasis on non-symbolic learning systems (which don’t learn algorithms per se). There used to be a nice division between AI being more symbolic reasoning and machine learning being more connectionist/statistical methods. I suspect AI caught the deep learning bandwagon as a way of distinguishing it from “old hat” connectionism. A bit like the drift in meaning of “business as usual”?

  108. Joshua says:

    angech –

    And the key is that Nic said that Sweden reached “herd immunity” in May? of 2020 because the rate of spread there was clearly distinguishable from the rate of spread in the other Nordic countries – because of their “let it rip” policies that accelerated the progress to reaching a HIT.

    I said there was too much uncertainty to draw such a conclusion.

    You said it was certain he was correct.

    The point isn’t that I was right and you were wrong.

    The point is the importance of accountability, and for facing the reasons why such errors occur.

    Too bad you will duck accountability. Am I too certain of that? Time will tell.

  109. Re: Joshua

    Joshua said:
    “Yes, it seems there was a problem with his basic understaninf of the terminology, but I don’t think he necessarily would have defined “herd immunity” as any state where there was any decrease in infection rate. When he predicted a “herd immunity threshold,” he meant to argue that the population infection percentage was at a level where a person without any immunity encountering an infectious person was unlikely.”

    Lewis’ mathematical modelling of herd immunity used the reproduction number decreasing to at or below 1 under *baseline conditions*. That means infections/day not increasing even where there are no additional behavior changes, no additional non-pharmaceutical interventions, etc. (ex: under the conditions for the same time of year in 2019). So that account of herd immunity is not just “population infection percentage was at a level where a person without any immunity encountering an infectious person was unlikely”. If Lewis instead meant what Joshua said in that quote, then Lewis is again showing he doesn’t understand what R0 is and by extension doesn’t understand what the herd immunity threshold is in the mathematical modelling he performed.
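    To make that distinction concrete, here is a minimal sketch of the standard homogeneous-mixing bookkeeping (illustrative only; R0 = 2.5 is just a commonly cited early-2020 figure, not a value from Lewis’ modelling): under baseline conditions the effective reproduction number is R0 times the susceptible fraction, and the classical herd immunity threshold is the infected fraction at which that product falls to 1.

```python
# Classical homogeneous-mixing herd immunity arithmetic (illustrative only).
def herd_immunity_threshold(r0: float) -> float:
    """Infected fraction at which R_eff = R0 * S falls to 1, i.e. 1 - 1/R0."""
    return 0.0 if r0 <= 1 else 1.0 - 1.0 / r0

def effective_r(r0: float, infected_fraction: float) -> float:
    """Reproduction number under *baseline* conditions, given infection-acquired immunity."""
    return r0 * (1.0 - infected_fraction)

# With R0 = 2.5, the classical HIT is 60% -- and at 17% infected the
# baseline-conditions reproduction number is still far above 1:
print(herd_immunity_threshold(2.5))            # 0.6
print(round(effective_r(2.5, 0.17), 3))        # 2.075
```

    In other words, on the standard arithmetic a 17% infected fraction leaves the baseline-conditions reproduction number around 2, so declining cases at that point imply interventions or behavior changes, not a crossed threshold.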

    As I went over in my previous 2 comments above, Lewis insinuated Stockholm, Sweden overall, New York City, London, Geneva, India, etc. achieved herd immunity simply because their SARS-CoV-2 cases/day, hospitalizations/day, or deaths/day decreased. That was all the evidence he had for claiming they achieved herd immunity. But additional behavior changes, non-pharmaceutical interventions, etc. can cause cases/day, hospitalizations/day, or deaths/day to decrease. Since those locations were not under baseline conditions for those factors, that means Lewis incorrectly attributed to herd immunity the impact of those factors, just like the incorrect Gomes et al. mathematical modelling outlier he tried to follow.

    In other words: Lewis incorrectly treated a *necessary* condition for herd immunity as being *sufficient* evidence of herd immunity. Hence why massive waves of SARS-CoV-2 infection happened in areas he previously claimed achieved herd immunity, contrary to his predictions but just as experts in modelling, immunology, and epidemiology predicted in the literature and elsewhere for places that did not achieve herd immunity. Lewis was not making a meaningful contribution to mathematical modeling of the pandemic; domain experts were.
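    The necessary-vs-sufficient point can be illustrated with a toy SIR simulation (purely illustrative parameters, not fitted to any real data, and not a reconstruction of anyone’s actual model): cut transmission during an “intervention” window and infections fall even though the susceptible fraction stays far above 1/R0; restore baseline transmission and a large second wave follows.

```python
# Toy discrete-time SIR model (illustrative parameters only). Transmission
# is reduced during an intervention window [npi_start, npi_end); infections
# decline during that window even though susceptibles remain well above
# 1/R0, then a second wave follows once baseline transmission resumes --
# so a decline alone is not sufficient evidence of herd immunity.
def simulate(r0=2.5, gamma=0.1, days=500, npi_start=25, npi_end=150,
             npi_factor=0.35):
    beta0 = r0 * gamma                       # baseline transmission rate
    s, i = 0.999, 0.001                      # susceptible / infectious fractions
    i_start = i_end = s_end = peak_after = 0.0
    for t in range(days):
        beta = beta0 * npi_factor if npi_start <= t < npi_end else beta0
        new_inf = beta * s * i               # new infections this day
        s, i = s - new_inf, i + new_inf - gamma * i
        if t == npi_start:
            i_start = i                      # prevalence when interventions begin
        if t == npi_end - 1:
            i_end, s_end = i, s              # prevalence / susceptibles at lifting
        if t >= npi_end:
            peak_after = max(peak_after, i)  # post-intervention peak prevalence
    return i_start, i_end, s_end, peak_after

i_start, i_end, s_end, peak_after = simulate()
# Infections decline under the interventions (i_end < i_start) while the
# susceptible fraction stays far above 1/R0 = 0.4, and a much larger wave
# (peak_after >> i_end) follows once transmission returns to baseline.
```

    That is the pattern the comment describes for Sweden, New York, India, etc.: the decline was a product of the intervention window, and the rebound after conditions normalised is exactly what a still-above-threshold susceptible fraction predicts.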

    Further background:

    https://archive.ph/usLdI#selection-287.0-295.188
    https://archive.ph/39jUm#selection-54127.0-54297.66
    https://archive.ph/DY3z0#selection-227.0-227.309

    https://archive.ph/PNfSO#selection-4585.0-4949.495
    https://archive.ph/R03HP#selection-7939.3-7947.71
    https://archive.ph/cH2EY#selection-13385.2-13385.156
    https://archive.ph/q2ldq#selection-1133.171-1133.516
    [with: https://archive.ph/2NJIk#selection-1607.0-1623.302 ]
    https://archive.ph/ldz1f#selection-3099.344-3121.2
    https://archive.ph/jCqMg#selection-1109.0-1113.389
    https://archive.ph/NeXwl#selection-4771.187-4771.553
    https://archive.ph/PLupI#selection-963.0-1125.101

  110. Dave_Geologist says:

    surely everyone realises by now

    Oh, Adam.

    If wishes were fishes…

  111. VTG said:

    “As are current threads from self appointed Galileos.”

    Can a machine learning result be accused of acting as an automated Galileo?

    Three discoveries reported. One by a computer scientist producing an ML result. One by an applied mathematician dabbling well outside his area of expertise. One by a totally green grad student operating within the discipline. How are they ranked in terms of credibility?

    AI often suffers from the “closed-world assumption”. How many emergent discoveries in AI come about from knowledgebases or symbolic algorithms that opened up beyond the expected scope of the discipline they were exploring? Similar to how insight is provided by a scientist outside their discipline, or a budding scientist that doesn’t know any better.

  112. Willard says:

    Here would be an artificial Einstein:

    Two scientists realize that the very same AI technology they have developed to discover medicines for rare diseases can also discover the most potent chemical weapons known to humankind. Inadvertently opening the Pandora’s Box of WMDs. What should they do now?

    https://radiolab.org/episodes/40000-recipes-murder

    For the artificial Galileo:

    Aleksandra Przegalinska says the Kremlin is using deep fakes — fabricated media made by AI. A form of machine learning called “deep learning” can put together very realistic-looking pictures, audios, and in this case, videos that are often intended to deceive.

    https://globalnews.ca/news/8716443/russia-artificial-intelligence-deep-fakes-propaganda-war/

  113. Joshua says:

    Atomsk –

    I think we’re disagreeing about something, but I’m not quite sure what it is.

    Here’s my basic take on it. All with the caveat that I have no idea what I’m talking about and just spitballing.

    Nic, I suspect largely motivated (at least indirectly) by ideological perspective, wanted to try modeling the course of the pandemic adding in heterogeneity for infection spread. Not a bad idea, it seems to me. Logical. But of course, it should go along with basic background knowledge, awareness of the existing literature, and consideration that any findings would be provisional, speculative, and likely wrong due to the many un-modeled yet important factors, not the least of which would be behavioral or demographic variables that would vary by climate.

    So Nic put together his model and it resulted in “herd immunity” at a lower population % than is standard in the literature. Ok, so that’s interesting. Assuming similar control for things like behavioral or demographic variables in modeling that doesn’t include heterogeneity, it’s an interesting finding when it’s compared to a homogeneous model.
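    For what it’s worth, the kind of result being described – heterogeneity pulling the threshold down – has a simple closed-form illustration, of the sort derived in the Gomes et al. preprint mentioned in this thread (gamma-distributed individual susceptibility with coefficient of variation cv). It’s a sketch of the kind of calculation involved, not Nic’s actual model:

```python
# Homogeneous-mixing HIT vs. a heterogeneity-adjusted HIT (illustrative).
# The adjusted form assumes gamma-distributed individual susceptibility
# with coefficient of variation cv; cv = 0 recovers the classical result.
def hit_homogeneous(r0: float) -> float:
    return 1.0 - 1.0 / r0

def hit_heterogeneous(r0: float, cv: float) -> float:
    return 1.0 - (1.0 / r0) ** (1.0 / (1.0 + cv * cv))

print(round(hit_homogeneous(3.0), 3))         # 0.667: classical ~67%
print(round(hit_heterogeneous(3.0, 1.0), 3))  # 0.423: heterogeneity lowers the HIT
```

    The catch, as discussed above, is that even a heterogeneity-lowered threshold is defined against baseline conditions – cases falling while interventions and behavior changes are in force doesn’t validate it.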

    Given that he didn’t actually know what he was doing, when at a particular point he saw a flattening of infections in Sweden that wasn’t seen in other countries, in particular other Nordic countries, he drew a straight line from his impression of what Swedish policy was – relatively less restrictive policies governing citizens’ behavior – and attributed that difference to Sweden’s policies. He specifically referenced how people in Sweden were engaging in normal behaviors, and yet there was a drop – which in a sense would mean that his view was that “herd immunity” would only be reached at the point that behaviors had returned to normal. Of course, he was vague enough and his posts varied enough that it was never entirely clear what his conceptualization of “herd immunity” was – and in all fairness, from what I saw there was a good deal of variability in how the term was being used by experts in the public sphere.

    The key here, for me, is that Nic did the modeling without control for key variables (which was obvious at the time) and then when he saw a certain signal in the numbers, was ready to conclude that his modeling was, in fact, correct. In reality, his understanding of the behavioral components, let alone the many other important variables, was woefully inadequate. He assumed “normal” behaviors in Sweden would be paralleled in other communities, even though during that period many Swedes went on vacation to very remote regions, and Swedes in general had the ability to work from home more easily than people in other countries, and Swedes tend to have a much higher prevalence of small households than other countries, etc. (there’s a long list).

    He even basically invented a causal mechanism to explain his findings – that more people were “immune” (to infection) than the number who were infected, because of T cell immunity from other viruses. That’s how strong his “motivation” was to confirm that his modeling was correct.

    So in the end, he was so certain in his beliefs that he found confirmation in a signal even though he failed to really understand the dynamics. Then he assumed his modeling was correct, and so felt that he could do a similar reverse-engineering in other countries where the infection rate dropped despite low population infection %’s.

    It’s all a rather incriminating picture of his reasoning and approach to science in that context. Can we generalize from that context to others? Maybe. It’s hard to say. But the fact that he’s never (to my knowledge) accepted how bad his scientific approach to modeling COVID was doesn’t, in my view, lessen the chances that his approach is more generally problematic. And the free pass he gets from Judith and other “skeptics” just exposes how, when they claim to be about “truth” in science, they’re just manifesting how motivated reasoning works – because they apply double standards in association with their favored viewpoint.

    Sorry for such a long comment – that’s how I am. I’m not entirely sure there’s much more to be said about this, but I just wanted to try to clarify. I hope I didn’t just make it even muddier.

  114. Re: Joshua

    I agree with much of what you’re saying, but the main issue is what you wrote here:

    “Nic, I suspect largely motivated (at least indirectly) by ideological perspective, wanted to try modeling the course of the pandemic with adding in heterogeneity for infection spread. […]
    So Nic put together his model and it resulted in “herd immunity” at a lower population % than is standard in the literature.”

    In his mathematical modelling, Nic Lewis didn’t first incorporate factors like heterogeneity and then have that lead him to a lower herd immunity threshold (HIT). He instead did what I noted before: he *assumed* HIT was reached simply because SARS-CoV-2 cases/day, hospitalizations/day, or deaths/day decreased. That incorrect assumption was all that gave him a low HIT and it shows he did *not* understand what herd immunity is. He *then* introduced heterogeneity, the incorrect claims he invented about T cell immunology, etc. to suit his incorrect assumption. That likely has parallels to how he defends his claims of lower climate sensitivity. Here’s a clear example of this from his first May 2020 article on herd immunity modelling:

    “Very sensibly, the Swedish public health authority has surveyed the prevalence of infections by the SARS-COV-2 virus in Stockholm County, the earliest in Sweden hit by COVID-19. They thereby estimated that 17% of the population would have been infected by 11 April, rising to 25% by 1 May 2020.[5] Yet recorded new cases had stopped increasing by 11 April (Figure 1), as had net hospital admissions,[6] and both measures have fallen significantly since. That pattern indicates that the HIT had been reached by 11 April, at which point only 17% of the population appear to have been infected.”
    https://archive.ph/usLdI#selection-287.0-295.188

    But as I noted before, Sweden overall and Stockholm in particular had additional behavior changes, non-pharmaceutical interventions, etc. beyond the baseline conditions of R0 used for calculating HIT (ex: under the conditions for the same time of year in 2019). Those factors contributed to SARS-CoV-2 cases/day, hospitalizations/day, and deaths/day decreasing. Lewis therefore incorrectly attributed to herd immunity the impact of those additional factors and thereby exaggerated the impact of factors he introduced such as heterogeneity, leading to him incorrectly predicting large SARS-CoV-2 waves would not then happen. When his predictions failed, he then invented a new set of false claims to prop up his false assumption, only for his revised position to then fail for India:

    “Initially, some local authorities and journalists described this as the herd immunity strategy: Sweden would do its best to protect the most vulnerable, but otherwise aim to see sufficient numbers of citizens become infected with the goal of achieving true infection-based herd immunity. By late March 2020, Sweden abandoned this strategy in favor of active interventions; most universities and high schools were closed to students, travel restrictions were put in place, work from home was encouraged, and bans on groups of more than 50 individuals were enacted. Far from achieving herd immunity, the seroprevalence in Stockholm, Sweden, was reported to be less than 8% in April 2020,7 which is comparable to several other cities (ie, Geneva, Switzerland,8 and Barcelona, Spain9).”
    https://archive.ph/ldz1f#selection-3099.344-3121.2
    [ https://jamanetwork.com/journals/jama/fullarticle/2772167 ]

    Nic Lewis on 10 January 2021:
    “Many people, myself included, thought that in the many regions where COVID-19 infections were consistently reducing during the summer, indicating that the applicable herd immunity threshold had apparently been crossed, it was unlikely that a major second wave would occur. This thinking has been proved wrong.”
    https://archive.ph/DY3z0#selection-227.0-227.309

    Nic Lewis on 6 February 2021 (before India’s massive SARS-CoV-2 delta wave):
    “No doubt the fact that the epidemic seems to be dying out in India despite there being relatively few restrictions enforced there and people’s behaviour having at least partially normalised won’t cause you to reconsider your position.”
    https://archive.ph/oQ8SB#selection-46711.0-46713.29

  115. Typo. For this:
    “That incorrect assumption was all that gave him a low HIT and it shows he did understand what herd immunity is.”

    I meant this:
    “That incorrect assumption was all that gave him a low HIT and it shows he did *not* understand what herd immunity is.”

  116. Joshua says:

    Atomsk –

    Thanks for the clarification. At this point I’ll go with the sequence you described, as I doubt you’d be wrong and it’s too tedious to check. I did go back and look at one of your links and came across a gem – one that I had forgotten about. It starts with him quoting my comment:

    > niclewis | January 11, 2021 at 3:50 am |

    >> “And here you ignore another uncertainty: The possibility that with a greater number of infections the greater the possibility of mutations that are vaccine resistant.”

    > The mutations that are worrisome appear to have arisen not from an increased number of infections, but rather from well-meaning (but it seems highly dangerous) health system actions that inadvertently acted rather like gain-of-function-like experiments carried out in non-biosafe hospitals, in the UK, Italy and quite likely South Africa.

    I had completely forgotten about that whopper. And I’m not sure that I’ve even seen that claim anywhere else, even in the rightwing COVID-o-sphere.

    It would be nice to see if he still believes that theory. Even if he doesn’t, I suspect the possibility of him being held accountable for straight-up immunological fantasizing is quite unlikely.

  117. Willard says:

    I liked this presentation by Yann LeCun.

    His team (he’s the Chief AI Scientist at Mark’s) succeeded in creating a model that creates videos out of pure text:

    https://makeavideo.studio

  118. Steven Mosher says:

    joshua. i dont know how you bear going back to look at that stuff.

    modelling the spread of disease is a really cool problem.

    but you also have to know enough to stay away
