FLICC

I should probably be writing about the UK recording temperatures above 40°C for the first time, but it’s been covered pretty extensively elsewhere. Instead I thought I might briefly mention something that I’ve become more interested in and have been thinking about quite a lot recently. Even though I think science communication is important, I also think it’s becoming much more important to identify, and potentially counter, the spread of misinformation, or disinformation.

In that vein, I thought I would promote Skeptical Science’s FLICC taxonomy. These are techniques of science denial, and the acronym stands for Fake experts, Logical fallacies, Impossible expectations, Cherry-picking, and Conspiracy theories. I suspect many who have followed the climate debate can identify examples of each of these techniques.

For example, promoting the credentials of contrarians even though they don’t really have relevant expertise (F). Claiming that climate change today must be natural because it’s changed before (L). Demanding unrealistic levels of certainty before taking any action (I). Selecting a very short period of a dataset that appears to support your arguments, while ignoring all the data that doesn’t (C). Suggesting that climate scientists are somehow incentivised to promote alarming possibilities (C).

There are, of course, many other examples, and each technique can also be considered in more detail, as illustrated by the figure below. I do think it’s worth being more aware of this and trying to work out how to identify when someone, or a group, is promoting misinformation. I would add that it’s probably worth being careful of assuming something is misinformation just because it appears to satisfy one of the techniques highlighted below. It’s probably better to consider if there is a pattern of spreading misinformation, rather than being too quick to judge on the basis of what might be an individual example.


153 Responses to FLICC

  1. Charles Steven Nagy says:

    I have shared this graphic with my Climate Change Denialist friends. It has made not one scintilla of difference. As always, they know best, supported as always by their favorite right wing press….

  2. Charles,
    That doesn’t really surprise me. I think the target audience should be those who are interested in trying to identify misinformation and would like to understand how to do so and the various techniques that might be used, rather than those who are already spreading misinformation, or for whom the misinformation is already appealing.

  3. Mark Gobell says:

    Does any of that apply to those who advocate climate alarmism?

  4. Mark,
    It applies to anyone, in principle.

  5. Mark Gobell says:

    Splendid.

    So, in principle then, if one searched for instances of same, they would be found among said advocates.

    So why do you post this information as if it relates to one side of the climate argument only?

    Are you guilty of similar misdirection that you are accusing your opponents of?

  6. Mark,

    So why do you post this information as if it relates to one side of the climate argument only?

    They were just examples.

    Are you guilty of similar misdirection that you are accusing your opponents of?

    Not really for me to judge, but my experience of this is that it tends to be more prevalent on one side than the other, although there has been a recent tendency for some to exaggerate the potential impacts.

    My other experience is that discussions like this rarely go well. Of course these techniques can be used by anyone and it’s worth – in my opinion – considering that some people you might generally agree with are also using these techniques. We shouldn’t excuse it just because it’s coming from someone, or a group, you agree with.

    However, as I said at the end of the post, I do think it’s worth being careful of applying this in some simplistic way. Sometimes credentials are relevant. Sometimes selective bits of information can be relevant. Sometimes it is worth gathering more information before making difficult decisions. Sometimes there really is what might be regarded as a conspiracy.

    So, I tend to think these are useful heuristics that can help to identify misinformation, but they’re not perfect and there is probably a difference between someone who might sometimes seem to use these techniques, and those who do so regularly and who might be using many of these FLICC techniques.

  7. Mark Gobell says:

    [Mod: As I mentioned in an earlier comment, these discussions rarely go well.]

  8. Chubbs says:

    Mark – Sure, both sides are human and make the same mistakes. The difference is that one side is rooted in science. It is easier to fall prey to a logical fallacy when rejecting the scientific consensus.

  9. Mark Gobell says:

    From my few hours of research into these matters it is abundantly clear that the “mishteaks”, as you characterise them, included the politically driven IPCC 1995 Science Report volte-face that reversed the conclusion of the scientists. To me, that resembles politics not science. Am I mishteaken, do you think?

  10. Mark,
    Yes, I suspect you are. For starters, the IPCC reports have a number of parts, one of which is called the Summary for Policy Makers. This does indeed require approval by all the governments and can be influenced by politics (even if the scientists involved do try to push back). The main chapters, however, do not require such approval.

  11. Mark,
    I’m guessing your IPCC 1995 comment refers to the controversy involving Ben Santer. Might be worth reading this. Of course, Ben Santer was obviously correct.

  12. dikranmarsupial says:

    Charles wrote: “I have shared this graphic with my Climate Change Denialist friends. It has made not one scintilla of difference.”

    While I avoid the d-word (nothing good comes of it IMHO), that is kind of what separates “denialists” from genuine skeptics. Being skeptical and asking questions is good, but only if you are willing to engage constructively with the answers. Nothing will make a scintilla of difference with “denialists” as they are not interested in challenges to their own position. They are worth engaging with sometimes as a means of correcting the misinformation for the lurkers in the discussion, and *they* may benefit from seeing the FLICC graphic and learning to recognise the difference between truth-seeking scientific discussion and identity-reinforcing rhetoric.

    Fortunately not all climate skeptics are “denialists”.

  13. dikranmarsupial says:

    Mark wrote “So, in principle then, if one searched for instances of same, they would be found among said advocates.”

    Of course, but they would be a lot less common. If the science is on your side, there is little to be gained by using these sorts of rhetorical stratagems. I do get blocked by more extreme “warmists” (for want of a better term) when I point out their overstatements etc. It is human nature. The best solution is to make sure *you* don’t use these stratagems yourself, and the first step is to be aware of their existence. FLICC is not by any means the first attempt at this sort of thing; I’m rather fond of Schopenhauer’s “The Art of Always Being Right”, which was not actually a “how to” guide! ;o)

  14. dikranmarsupial says:

    Mark wrote “So, in principle then, if one searched for instances of same, they would be found among said advocates.”

    That might be “cherry picking” though ;o)

  15. The categories that Gobell may be referring to are Type I and Type II errors, false positives and false negatives, which can go either way. One approach not mentioned in the chart is that of “overfitting”. A time-series or relationship can be fit by a model so perfectly that it can become a false positive. Ned Nikolov relies on overfitting in his nutty model of planetary temperatures. A nagging issue with AGW is that it really is only a single-degree-of-freedom (1DOF) trend and thus can accommodate any number of false-positive models. The flip-side is that models of time-series with a huge amount of complexity (many apparent DOFs), such as climate dipole indices, can be dismissed as false negatives because the non-linearity is not completely understood, e.g. rejecting a true deterministic model as random or unpredictably chaotic.
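
    To make the overfitting point concrete, here is a minimal, purely illustrative sketch (Python with NumPy, synthetic data only; the series, seed and fit degrees are arbitrary choices, not any particular climate or planetary model). A high-order polynomial shrinks the in-sample residual of a noisy linear trend simply by fitting the noise:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic example: a weak linear trend buried in noise
        t = np.linspace(0.0, 1.0, 20)
        y = 0.5 * t + rng.normal(scale=0.3, size=t.size)

        # A 1-DOF trend fit versus a high-order polynomial with many apparent DOFs
        trend_fit = np.polyfit(t, y, deg=1)
        over_fit = np.polyfit(t, y, deg=10)

        # In-sample, the high-order fit looks "better" (smaller residuals),
        # but by construction the extra structure is just fitted noise
        print("trend residual:  ", np.std(y - np.polyval(trend_fit, t)))
        print("overfit residual:", np.std(y - np.polyval(over_fit, t)))

        # On fresh draws from the same process the apparent gain typically vanishes
        t_new = np.linspace(0.02, 0.98, 20)
        y_new = 0.5 * t_new + rng.normal(scale=0.3, size=t_new.size)
        print("trend, new data:  ", np.std(y_new - np.polyval(trend_fit, t_new)))
        print("overfit, new data:", np.std(y_new - np.polyval(over_fit, t_new)))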

  16. Dave_Geologist says:

    Re 1995 Mark: a twofer! Both C’s!

    Well done. An object lesson.

  17. Magma says:

    One of the “Fake Experts” subcategories that I find mildly interesting but deeply annoying is scientists who take a strong contrarian stance against consensus climate science even though – or because – climate is well outside their own fields of expertise. [Which, to be honest, they might not be very good at either.] This seems to be a combination of the magnified minority and bulk fake experts subcategories. Excluding engineers, these seem to be disproportionately drawn from the ranks of geologists. Many geologists’ stereotypical aversion to mathematics may play a role here, as well as the fact that many of the contrarians work in or adjacent to mineral and fossil fuel extraction industries.

    But there are other odd exceptions, and I was thinking of this while looking at sunspots and total solar irradiance data from the current TIMS mission. With the Sun now well into an active cycle, I wonder if Valentina Zharkova will be issuing a retraction of her predictions of a new Grand Solar Minimum and associated “noticeable reduction of terrestrial temperature.”

    Probably not.

  18. Magma,

    With the Sun now well into an active cycle, I wonder if Valentina Zharkova will be issuing a retraction of her predictions of a new Grand Solar Minimum and associated “noticeable reduction of terrestrial temperature.”

    Probably not.

    I suspect you’ll be right.

  19. Willard says:

    The classification makes little sense. A slothful induction has nothing to do with cherry picking. Reinterpreting randomness might not be powered by conspiracy ideation. Moving goalposts is *not* an impossible expectation. It is an ignoratio elenchi, so it goes with red herrings.

    John should hire a philosopher who specializes in informal logic.

    I suppose none of that matters anyway.

  20. Willard,
    It may be that they’re trying to keep the top level categories quite simple.

  21. I agree with you that you should be writing about the record temps. This FLICC discussion is interesting, but I was following you on twitter yesterday and felt like you were expressing some alarm about our situation. I am curious what you think about the global temps, but it should be done in a different post if your state of calm/alarm about AGW has changed in any significant way as you observe the current global temps.

    Weather has been decent here in the PNW, but we made it up in the high 80s yesterday. Heading to 90 plus over the next week. I hope we don’t see a repeat of last year’s heat wave where I observed a temp of 117 in Centralia, WA. That’s just too hot. I scurried home where it was a more comfortable 114 and hid inside until the sun went down. Those temps are hard on humans.
    Cheers
    Mike

  22. Willard says:

    John made too many mistakes like that over the years, AT.

    Here is a simple model that should appeal to evidence-based folks:

    (A nit: the narrator confuses soundness and validity. An argument is sound if its ground is solid; it is valid if its warrant is reasonable.)

    When an argument has a problem, it usually is related to the claim being made, the grounds offered to make it, and the warrant underlying it. Every infelicity (I dislike the word fallacy as the source of the mistakes is pragmatic, not logical) should be related to one of these elements.

    If a contrarian goes on and on without making explicit the main claim, there is a problem. What is there to counter? By contrast, suppose I say that dogs are the greatest pets. They just are. The claim is clear, but where is the support? In an argumentative context, that is called a proof by assertion. Suppose I support my claim by suggesting that those who do not agree with me are sociopath lovers:

    https://theoatmeal.com/comics/cats_actually_kill

    How does my argument support my position? It does not. It is just an excuse to plug Teh Oatmeal and to start a food fight, which can be fun in some social contexts.

    As I see it that kind of classification should take into account how people argue. It should not portray arguments as a ready-made picture but as a dynamic exchange of information. We need these tools when we are arguing, not to reinforce our esprit de l’escalier.

    We could reinterpret the FLICC stuff fairly easily. Take conspiracy ideation. Often fans only armwave to grand schemes without making explicit claims. They hint at grounds without showing much: do your research, we often hear them say. What warrants their inference is usually grandiose spin where they are privileged witnesses. It usually involves what is incorrectly called logical fallacies.

    ***

    If you need a simpler model, there is always Climateball. You have a ball, end zones, and plays. The plays help move the ball toward the end zone, either by direct means (you run, kick, make long bombs) or indirect means (you chop block, screen, cover). Defense counters your advance by trying to get the ball from your hand or by knocking you down.

    I am not mentioning rules, because online there are very few, and they vary from one place to the next. Which leads to another problem with John’s classification. He is begging a crucial question: who gets to decide when a testimony is an anecdote, when a quote is cherry picking or when attacking credentials is fair play?

    To identify an infelicity is all well and good, but at some point one must be able to explain one’s stance. It is too easy to use these tools as proofs by assertion. They do not replace good arguments.

  23. Willard,
    The video was good and I agree that there are more formal ways to assess arguments. However, it seems to me that the FLICC framework is mostly identifying heuristics that can be useful, rather than being some formal process that will always identify misinformation. I do agree, though, that some of the sub-categories don’t seem to quite fit the main categories. Surely, there is merit in providing these somewhat simple frameworks, even if they’re not the formal way in which one might assess an argument?

  24. Mal Adapted says:

    “False balance” may not appear in Skeptical Science’s FLICC taxonomy, and indeed may not necessarily signify climate-science denial outright. Claims to it may nonetheless be deployed by implicit denialists. Mr. Gobell mentioned the 1995 IPCC Second Assessment Report affair, in which a fossil-fuel industry group claimed that Chapter 8 was improperly modified to support a climate-change alarmist message; a contemporary article in Physics Today makes it clear the industry group’s claim was unfounded. Does Mr. Gobell have more recent examples we can talk about? If not, perhaps his request for balance exposes his denialist agenda.

  25. Willard says:

    AT,

    I agreed on the importance of such a tool in the other thread:

    If someone acts in a way that affects you, it is important to be able to recognize what it is, how it works, and why it works that way. This helps us deal with it. It is a matter of self-preservation and self-responsibility. The same applies to anyone affected by what we ourselves do. After all, people are people.

    Climate change and social justice

    I’ve seen worse than that FLICC thing, like the so-called hierarchy of disagreement, which tells us very little about the appropriate level of reasoning and evidential support.

    Count the items under each letter. The *I* has one whereas the *L* has 11, including a sub-level. I like the This is Spinal Tap connotation of having a collection of 11 items. I don’t like it when that comes at the expense of homogeneity. The items under F-I-C could go together. If you omit the sub-level under L, that leaves you with 23 items divided by 3 more or less evenly. I’m sure John could have thought of creating an 8 x 3 classification, or better yet a 25-square bingo!

    Also, why two C’s? Either the identifying letters are all the same, or they each are different. This is supposed to be a mnemonic tool. I’d go for a four-letter word and find a way to use a K.

    In the end, it should not matter much. John is providing work to argumentation theorists of the future, like Bacon, Locke, and Bentham did before him with their suboptimal designs.

  26. Joshua says:

    Whew,…

    At first I thought this post was about the
    FLCCC (Front Line COVID-19 Critical Care Alliance) – Ivermectin advocates.

    On the topic…

    Fwiw…

    I don’t see how this would be useful except in furtherance of polarized, social media engagement. We all know the same old same old bullshit that disinformers put out, and as Mark so nicely demonstrates, assessing what comprises disinformation is largely a subjective exercise anyway. It’s not like very many people are going to wander into an argument and then apply this matrix to decide if it’s misinformation. It works the other way. First people look at the argument and decide if they like it, then as a result they determine whether or not it’s misinformation and then seek to find a category to use to characterize it.

    Maybe in some academic context it could work differently, but it seems to me that in the real world this adds no value.

    Fwiw….

  27. Joshua,

    I don’t see how this would be useful except in furtherance of polarized, social media engagement.

    Well, I think it’s useful when trying to understand the various techniques that are used when promoting/spreading misinformation. I don’t think that spreading misinformation helps to reduce polarisation either 🙂

    as Mark so nicely demonstrates, assessing what comprises disinformation is largely a subjective exercise anyway.

    In practice, this may often be the case, but I don’t think this means that it isn’t possible to make reasonable assessments. In other words, even if people tend to use these as you suggest (decide in advance if something is misinformation and then try to categorize it), that doesn’t mean that the categories don’t have some validity.

    Maybe in some academic context it could work differently, but it seems to me that in the real world this adds no value.

    I may be wrong, but I do think there is value in thinking about how one might identify when someone, or a group, is spreading misinformation. However, as I was suggesting earlier, I think these are useful heuristics but one shouldn’t apply them simplistically.

    As you suggest, you may well be able to take an argument you don’t like and find some category that fits, but that probably isn’t enough. However, if someone (or a group) regularly seem to utilise a variety of the techniques, that might be a useful way to assess whether or not they’re spreading misinformation.

    I’m also not suggesting that people should take this and use it to throw around accusations on social media. It mostly seems useful for people who are interested in trying to assess whether or not they should trust some source of information (assuming that they don’t have the time, or knowledge, to assess the details themselves). On the other hand, you may have a point that people might tend to first decide that something is misinformation and then try to fit it into some category.

  28. dikranmarsupial says:

    “We all know the same old same old bullshit that disinformers put out”

    I think that rather depends on what you mean by “we”. I don’t think the general public actually are good at recognising bullshit for what it is and worse still have an appetite for eristic rhetoric over factual accuracy and logical consistency. I think we (as a society) could do with better critical thinking skills so that we don’t fall for appealing bullshitters.

    It would help to reduce the polarisation of society if we applied these critical thinking skills to our own “side” of whatever argument and were intolerant of bullshit there. We should start with the motes in our own eyes.

    BTW I’m not sure a hierarchical presentation is the key to usefulness here; just getting the topics discussed is the important thing, but graphics are more “sticky” than lists (messaging isn’t my favourite thing, but I wouldn’t dispute that it can be effective). Schopenhauer’s list isn’t very “sticky”.

  29. Willard says:

    Alright. I’ll try to work on a framework. This will help improve my Manual, and I like this stuff. There is much to be done, e.g. moving goalposts is akin to a red herring, i.e. it’s a misdirection. Pure bait.

    While a Bingo format is tempting, I already got one, and this kind of mnemonic needs to be simpler. A simple structure could be Grice’s maxims:

    https://plato.stanford.edu/entries/grice/#ConvImpl

    (Note that Paul wrote *In Defense of a Dogma*, which is a rather interesting take against one of my Avatar’s classic papers.)

    The four pillars ought to be:

    – Relevance, the best concept to encompass what we think of fallacies;
    – Evidence, which echoes Toulmin’s notion of grounds, ironically a shaky concept;
    – Authenticity, if only to please Joshua, and also Dikran for that’s where Bullshit ought to go;
    – something related to Length and Logic, but that’s quite an ask, perhaps something related to complexity.

    That should make it REAL.

    Laterz.

  30. I think a lot of the general public may not be able to read above 6th grade level, but many of them are gaining first hand experience with global warming every day now. I think that direct experience will persuade many of these folks that the climate deniers just have to be wrong when they say that warming is not happening. They may still fall for the sun activity ideas or normal warming cycle bs, but I think it is possible to reach these folks by careful messaging. By careful messaging, I mean short and to the point. The kind of messaging that the right wing has used so successfully over the past few decades. The way that mark gobell was dispatched here looked right to me. well done there.

  31. I do wonder how many of you here who do not think of themselves as alarmists are alarmed by the current heat waves around the world? Is this alarming to any, most or all of you?

  32. Willard says:

    Thy Wiki defines “alarmism” as an excessive or exaggerated alarm of a real or imagined threat, Mike. I’d rather not reinforce contrarian frames and memes, more so if I was concerned about right wing messaging.

    Feel free to do as you please. Except for the baiting. This is a FLICC thread.

  33. dikranmarsupial says:

    smallbluemike “alarmist” is a term used to imply that the danger is being exaggerated. If you warn of a danger that is well-founded, then you may be alarming, but you are not being an alarmist. While dictionaries are not the be all and end all of word meanings, if you are going to ask whether others “do not think of themselves” as belonging to some category, a dictionary might give you some idea what *they* might reasonably think the word means.

    But to answer your question “Is this alarming…?” not particularly, it isn’t in my nature. But a problem doesn’t need to be alarming to be worth mitigating against.

    I’m not sure where the “alarmist” motif comes in to it, the question seems perfectly reasonable without it.

  34. Yes, I’ve been alarmed for a long time. I’m not sure that that means I’ve been an alarmist.

    To get back to the point I was trying to make in the post, my thinking when writing this was that it might be useful if people had ways to identify when something is probably misinformation, or when some have a tendency to promote it. However, as Joshua suggests, maybe the tendency is more that people will decide what information appeals to them and then find ways to either fit it into this kind of taxonomy, or argue that it doesn’t. I’m not sure this is a good thing, but it may well be closer to what happens in reality than the alternative that people actually try to use these kind of taxonomies to help them identify when someone is promoting misinformation.

  35. dikranmarsupial says:

    ATTP indeed “First people look at the argument and decide if they like it” hits the nail fairly squarely on the head. I don’t think it is a good thing, but it is human nature. Fortunately the thing that makes us human is that we can choose to override [evolutionary] human nature to a large extent if we are willing to put in the effort [for instance going to the gym every week day in order to be fit enough for cricket next season – it isn’t what my human nature would prioritise!].

  36. Magma says:

    – “First people look at the argument and decide if they like it” hits the nail fairly squarely on the head.

    The counterpoint to this has become a popular meme: https://clickhole.com/heartbreaking-the-worst-person-you-know-just-made-a-gr-1825121606/

  37. thanks to Dikran and Ken for weighing in. One alarmed, one not alarmed. I think the Alarmist term is essentially a way of neutralizing folks who express alarm. Certainly as DM suggests, the idea behind the label is that the danger is being exaggerated. IMO, Ken, you have not been an alarmist. I don’t think you have exaggerated the danger. You are pretty careful about that.

    From the wiki on alarmism, here is a quote: “The charge of alarmism can be used to discredit a legitimate warning, as when Churchill was widely dismissed as an alarmist in the 1930s.[4]”

    In the 1940s, fewer folks dismissed Churchill as an alarmist. I think that is how the term functions. I think it is an application of the Overton window through labeling in a dismissive manner. To get back to Ken’s point in this post, I am not sure I see where the inverse of “exaggerated danger” appears in this flow chart. I know that a thing that could be described as minimized danger exists and it looks like a means to promote misinformation. That sort of thing commonly gets expressed in a manner like this: increased CO2 will actually increase plant growth and that’s good for us. Or, well, if not for the CO2 we have emitted, we would be worrying about a new little ice age. That’s the kind of misinformation that seems to be the reverse of exaggerating a danger. I suppose that minimizing danger could be done in good faith, but it generally feels like a bad faith tactic. We all get to decide about the good faith/bad faith context somehow.

    I think folks that get called alarmists are like Churchill in the 1930s. These folks perceive a great danger and they are trying to mobilize people to avoid or mitigate bad outcomes. When they are right, they may get high schools named after them. When these folks are wrong, they just look like Chicken Little. I think maybe “minimizing the danger” fits under logical fallacies, as misrepresentation or over-simplification. Some folks like to use shorthand labels and say, oh, luke warmers, or luck warmers and denialist lite or some such. I prefer to just use a common English description, such as minimizing the danger.
    Cheers
    Mike

  38. Willard says:

    That’s why I aspire to become the worst person you know, Magma.

    A simple test to spot if an argument or a contribution in general is Relevant, has Evidence on its side, is Authentic, and is what I would now call Lucid, i.e. it is clear, logical, coherent, elegant, etc:

    – What does it have to do with the price of tea? Your point?
    – Y tho? Where’s the beef?
    – Srsly? U sure? Are you kidding me?
    – Huh? Does not compute.

    The number of memes to that effect should caution us against misunderestimating people’s inner skeptic.

    ***

    I’m tempted to reword the conditions to use FAIL instead of REAL as acronym. Only two letters differ. Factiveness could replace Evidence, but I hesitate with the *I* to replace Relevance. Interconnection is kinda cool for my inner cyberneticist, however it has a material meaning, as it is related to physical circuitry. Interactive might be correct, as a non-interactive message has no connection with the preceding ones, and a reactive message can fail interactivity, say when it leads to reactance: “this reminds me when, back in my days, we used to punch hippies that got on our lawns.” Still, not simple enough.

    Working on the instances of failure for now. Choices will have to be made. Perhaps I should turn this into a post.

  39. Joshua says:

    Maybe I’m being naive, but I don’t think I need a tool like this to spot misinformation. Imo, the signs are pretty clear, although you might have to dig a bit to figure it out. It’s like that old saying, about how the definition is unclear but “I know it when I see it.”

    And on those occasions where I might not be sure, I think a tool like this one would be too cumbersome and generic to really be of much use.

    So dikran asks about what “we” means here and of course he’s right. Many people are not particularly adept at spotting misinformation but (1) I really can’t see that someone like that would apply this kind of rubric to evaluate information and (2) this particular rubric is being put forth by people who are clearly identified within the polarized taxonomy. Those who identify with the source will pretty much all already agree on what is or isn’t misinformation and those who don’t identify with the source will see the rubric itself as misinformation (as Mark so kindly pointed out for us).

    I don’t mean to belabor the point, but I have a hard time imagining where in the real world this would be successfully utilized.

    At least in the US, we’re mostly just broken down into two camps with diametrically opposed views on what comprises misinformation. Of course there are some who are non-aligned (a diminishing number, perhaps?) and I guess in theory for some of them this rubric might be a useful tool, but like I said I have a hard time seeing where that would happen.

  40. Joshua,
    You’re probably right that those who are more consciously thinking about misinformation probably don’t need a tool, and those who aren’t probably won’t use it. However, that doesn’t mean that there isn’t value in developing these kinds of things so that the techniques do become more obvious. It may take time for people to become more aware of this, but that’s still better than not making this explicit.

    I may express this poorly, but I’ve often thought that science communicators (like myself) should spend more time talking about how science actually works, rather than just promoting exciting new results. This seems similar. Do people realise that just because someone is Professor X from some Ivy League university it doesn’t mean that what they say is correct? Do people realise that just because someone convincingly presents some numbers it doesn’t mean you should trust what they conclude? Do people realise that there will always be some level of uncertainty and that we regularly make important decisions even when our understanding isn’t perfect, which it never can be? etc.

    So, I kind of see this more as something worth highlighting in the hope that people will think about how to assess what they hear, rather than as some simple fix that you just apply and everything will be much better.

  41. dikranmarsupial says:

    ” I really can’t see that someone like that would apply this kind of rubric to evaluate information”

    I’m not sure they are supposed to use it as a rubric, just a way of teaching the stratagems so that they are memorable and people can apply them in the wild.

    “and (2) this particular rubric is being put forth by people who are clearly identified within the polarized taxonomy. “

    Not a lot they can do about that, but if they don’t post material on this, then who will? This is again the heart of the problem, the people that need to learn the critical thinking skills are themselves riddled with cognitive biases, which mean that they are too busy working out if information comes from the right side of the divide to work out whether the information is correct/useful.

    I *try* not to care when a scientific argument is posted on a skeptic blog, I try and stick to the scientific content of the argument, as in the end it is all that really matters (which is how I ended up writing the Essenhigh comment paper that was my real entry into the public discussion). I *try* to combat my own cognitive biases first. Of course we have to recognize that there is a partisan divide on these sort of issues, but we won’t make progress until we learn to put them to one side. Which sadly means we are navigating a tributary of the Feculance without a means of brachial driven propulsion.

  42. Joshua says:

    Anders –

    Again, in a theoretical realm I don’t disagree, but I think in the real world, at least in the US, it’s not likely to work that way with many people. I think this parallels a bit the discussion about the information deficit model. I think the problem isn’t so much that people need explanations or tools, as it is that people’s orientation determines how they see things.

    The whole issue of “disinformation” plays this out. Broadly speaking, we have two different information universes and that orients how people define disinformation. That’s why in the US even with something like vaccination we see two different constellations of views. We can go down the list in varying degrees: Jan 6, climate change, gun violence, critical race theory, transsexuality, and on and on.

    I think it may be hard for people living in other countries to understand how pervasive the dichotomy is. There was a report on the radio today comparing attitudes in the UK about Boris and attitudes in the US about Donald that pointed to significant differences in the sense that in the UK, there isn’t the same strength in the over-riding partisan signal. They highlighted a Tucker Carlson clip where he offered an expert who gave an explanation that Boris had to resign because he had become too woke and librul and thus became highly unpopular. Someone who peddles that level of blatant misinformation (from one orientation) is seen as a highly reliable speaker of truth (from another orientation). The very notion of “misinformation” in itself has become a rallying cry, where even mentioning it triggers a largely impenetrable cascade of mutually exclusive viewpoints.

    I keep looking for cracks in this process but it only seems to get deeper so far. Watching the reaction on the right to the European heat wave is a case in point. Climate change has become a bigger issue in the rightwing Twittersphere than I’ve ever seen – as a source of ridicule for how dopey people are for thinking climate change is a problem and a source of schadenfreude regarding how expensive energy is going to be this winter for those fools who are turning away from fossil fuels.

    I imagine at some point this trend will have to crack – for example if climate change in the US reaches the level where it severely affects people in their everyday lives. But with the pandemic I’ve seen this dynamic break through barriers I would have previously thought would restrict the trend, as the partisan signal in covid deaths demonstrates.

    /repetitive rant.

  43. Joshua says:

    dikran –

    > the people that need to learn the critical thinking skills are themselves riddled with cognitive biases,

    I see it differently. There are people who lack the skills to think critically. But there are also people who are capable but apply those skills directionally. I used to have this argument at Dan Kahan’s blog a lot where he’d talk about how “smarter” people (as identified by tests on something like conditional probability) are more prone to motivated reasoning because they make better arguments. But my perspective is that his analysis is confounded in that there are a lot of people who think critically about non-scientific issues but don’t do well on tests of conditional probability.

    I see plenty of critical thinking among people who have a completely different view than I do on what comprises misinformation in specific context.

    So, if I’m right, there are people who really do lack certain skills. And there are people who have the skills in a context-specific manner. And there are people who possess the skills within relevant contexts but apply them in a way that’s effectively predicted by their ideological orientation.

    So who among them is going to receive this teaching tool in a way that they’ll apply it in the wild – particularly in a manner that’s consistent with the authors’ constellation for what is or isn’t misinformation (say about climate change)?

  44. Bob Loblaw says:

    I’ve hesitated to join in, because this looks like another one of those ’round and ’round we go discussions, but Joshua’s statement “…but I think in the real world, at least in the US, it’s not likely to work that way with many people” strikes home.

    As your neighbour to the north, with more intimate connections to US culture than I like, I see U.S. politics and culture as a deeply abusive relationship. Certain people with power (political, or media) have succeeded in establishing an abusive power hold over large numbers of people. How is it abusive? It has the normal feature of an abusive relationship: the power-holder has convinced the abusee that the power-holder is the only person that can protect them, the only person that cares about them, the only person that will tell them The Truth, etc. Add to that the “I’ll beat the %#!^ out of you if you go against me” threats (and actions), and you have the perfect storm of abuse.

    Until many of those people realize that they are in an abusive relationship and need to get out – and have choices to get out – this will continue. And some of the abusers are well-versed in how dictators and megalomaniacs from the past have gained control, and are perfectly willing to use those methods to gain power for themselves.

    To get on-topic, the Conspiracy aspect of FLICC ties into the abuse technique of convincing people that you are the only one they can trust – everyone else is lying to you. Once that is firmly established, you have devoted followers.

    From time to time, I put links to “The Authoritarians” into blog comments. This is another one of those occasions. An e-book worth reading:

    https://theauthoritarians.org/

  45. dikranmarsupial says:

    Joshua – my point was that we are *all* riddled with cognitive biases. The question is not whether we have critical thinking skills, but whether we want to deal with our own cognitive biases or just criticise/exploit those of others. Critical thinking skills are a two-edged sword; like rhetoric, they can be used for good or for bad. Unfortunately it is our cognitive biases that are the log jam, and they won’t go away unless *we* do something about them. Having someone else expose them can prompt that, but it can also cause people to entrench. At the end of the day, it is all “bread and circuses”: the majority of people are focussed on getting bread (i.e. meeting their own needs) and entertaining “bants” (the circus). For most people, our own cognitive biases are very low on the list of priorities. I don’t think we have changed much since Juvenal’s time, and I don’t think we will.

    As for misinformation, there are things where misinformation can be identified objectively (e.g. whether the rise in CO2 is natural is not a subjective issue). What we should do about it is very subjective. In general it isn’t about the science, that is just a means of avoiding discussing our values, which is where the problems really lie.

    All IMHO, of course, I have no expertise on this issue.

  46. Willard says:

    Speaking of authoritarianism, this comment caught my attention yesterday:

    You are the one writing nonsense.

    I have a PhD in molecular genetics so, unlike you, I know what I am talking about.

    I am one of the few people today that has read “On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life” by Charles Darwin from cover to cover, and I enjoyed it immensely. I have read what Ernst Mayr and Theodosious Dobzhansky had to say about evolution. I have studied the experiments of Thomas Hunt Morgan and Hermann Muller on the mutagenesis in Drosophila. I have personally genetically modified thousands of organisms and conducted selection experiments on them. I have studied evolution for four decades.

    Everything you say speaks volumens about the size of your ignorance on the matter. You should be ashamed of expressing those opinions. They aren’t worth nickels, not even a cent.

    https://judithcurry.com/2022/07/16/week-in-review-climate-edition-4/#comment-977862

    That is from a contrarian who wrote a lot of essays at Judy’s and Tony’s. One was featured here:

    Only Connect

    I wonder how he would react to a comment in which a climate scientist would reveal their scientific and scholarly achievements and tell him that his opinion was not worth a cent. I bet he would pontificate about how science is only about evidence, nullius in verba, and all that jazz. Typical double standard.

    An item like Fake Expert cannot help protect against that kind of abuse. In fact it tells us nothing about authority. Not every appeal to authority is OK: if it comes from a majoritarian standpoint, that would be an ad populum. Context matters to determine if an appeal (the ad populum belonging to what we call the ad arguments, for they are characterized by what they appeal to) is justified or not.

    Whether justified or not, Javier’s double standard remains. Once he treats other contrarians like that, he cautions otters to do the same with him. And then we get a world where all PhDs turn one-eyed. I suppose this is the price for keeping each other in check…

    People learn by imitation. Javier does what otters did to him. Gentle reciprocation goes a long way to break that circle. How we react when reciprocation fails might go a longer way.

    There is a time to protect oneself, and having critical thinking aids should in principle help for that. They do not call it Intellectual Self-Defense for nothing. But just like self-defense techniques, they do not prepare for real showdowns. The two themes are connected by a philosopher who enjoys both in a project he calls Bullshido:

    https://www.bullshido.net/

    So I would agree with Joshua – there is nothing like a Mike Tyson experiment to see how our technique fares. Which is why I spend some time at Roy’s. No rules. Everything allowed. So far my own Climateball technique is doing alright. Not sure how well this FLICC thing would do.

  47. dikranmarsupial says:

    “particularly in a manner that’s consistent with the authors’ constellation for what is or isn’t misinformation (say about climate change)?”

    I’m not sure that is an issue. I suspect John would be happy if critical thinking skills were applied consistently in all directions.

  48. Joshua,
    Yes, there may well be a similarity with what is called deficit-model thinking. However, my view on this is similar to my view on that. Yes, filling some knowledge deficit isn’t suddenly going to change people’s views, but there is still value in communicating this information, even if it shouldn’t be the only communication strategy that people use. Similarly, I think it’s worth trying to identify techniques of science denial and providing these as heuristics that people can use. It may not have an immediate impact, but it seems preferable to not doing so.

  49. Willard says:

    That would call for a more symmetrical tool, Dikran, no?

    I think we need both kinds of tools, for there are contexts in which symmetry breaks, like a scientific debate with a Received View, and other friendlier contexts in which we all look in the same direction, as St-Exupéry would put it.

  50. Bob Loblaw says:

    “People learn by imitation.”

    Another characteristic of abuse. Abusers often were abused themselves. Especially when it is parent-child abuse. Like father, like son. [cough]Drumpf[cough]

  51. Willard says:

    Which is why I think Kurt Junior nails it when he says:

    There’s only one rule that I know of, babies—God damn it, you’ve got to be kind

    which Tim Doyle nailed splendidly:

    This is not about policing tone. This is about love. This is about being authentic. We can care for others and still tell them:

    Our purpose here is to make it abundantly clear that nobody is impressed that you are aware of logical fallacies, and whipping one out in an argument like you’re B. Rabbit in a rap battle having just discovered that Clarence has loving and supportive parents, makes you the Clarence in that situation.

    https://www.bullshido.net/dont-be-the-fallacy-guy/

    To recognize bad reasoning habits is all well and good. To act as a 2-yo who discovered pointing-and-naming does not a conversation make. Compare and contrast:

    (1) You are using a strawman argument. Please stop committing fallacies or else I will go home and pout.

    (2) This is not what I said. I said P, and supported it with X, Y, and Z. Do you have anything against that, or are you going to play Fallacy Man for a week?

    It should go without saying that I am more the second type. I find it more elegant, open, and egalitarian. No, that’s not polite. Who cares about politeness. Being kind has nothing to do with being polite.

  52. dikranmarsupial says:

    Willard, I think FLICC, with a change of caption, is probably equally applicable to “alarmist” misinformation (e.g. Wadhams). I am certainly in favour of applying tools symmetrically and not behaving as if we have a privileged position in the discussion (even if we do) and a more symmetric tool would be my personal preference.

    I’d agree that the appropriate tools are context dependent. For instance should we have “genuine expertise” in the framework? From a scientific perspective, the source of the argument is [ought to be] irrelevant and even genuine experts can be completely wrong from time to time, so you can’t be sure that an argument isn’t misinformation because it comes from a genuine expert. But at the same time for the lay-person, identifying genuine expertise is a rational step in forming an informed opinion.

  53. Willard says:

    Agreed, Dikran. As I see it, a neutral tool would present a heuristic to check if one understands what is being said, trusts who says it, judges how well it is supported, and recognizes its importance in the grand scheme of things. More importantly, it should help us accept our own limitations. Critical thinking tools are like moral dilemmas – they should keep ourselves in check.

    A neutral way of thinking about Climateball would not suffice. The opposing teams have different roles. They have different responsibilities. For starters, the onus is on contrarians to beat the Established View. They have to one-up the IPCC. If they can’t, they lose. Assuming they can lose, which is a stretch:

    Can Contrarians Lose?

    On the other hand, the consistency constraints are relaxed on their side. They can hold an infinity of conflicting views. They can even “racehorse” if they want, which is to say that they can argue the alternatives. We can’t. We can disagree on side issues, but we need to stick together for the critical parts.

    A symmetrical tool cannot help model the norm that racehorsing is cool for them and not for us.

  54. dikranmarsupial says:

    “A neutral way of thinking about Climateball would not suffice” indeed, which is why I try and avoid involvement these days – I don’t think I can make a useful contribution any more by sticking to the science. There was a time when there was reasonable uncertainty on substantive issues, but that time seems to have passed.

  55. Willard says:

    [DIKRAN LOOKS AT CLIMATEBALL]

  56. dikranmarsupial says:

    Guilty as charged!

    Gavin should concentrate on real life ;o)

  57. Tom Fuller says:

    willard, if Climateball is based on such faulty assumptions, no wonder it has yet to replace Scrabble among the Pantheon of board games.

    You write, “For starters, the onus is on contrarians to beat the Established View. They have to one-up the IPCC. If they can’t, they lose. Assuming they can lose, which is a stretch.” Both assertions are incorrect. And that is obvious. And that is science. Those opposing a hypothesis not only don’t have to beat it, they have no obligation to offer any alternative whatsoever. They just have to point out flaws in the Established View. If those flaws prove fatal, then contrarians have done the task appointed for them by our current model of the scientific method. That is our job. Our job is not to please you, make peace with you, reconcile whatever model of the universe we hold in our feeble little contrarian brains with your obviously gargantuan intellects. We are error checkers at play in the fields of the Lord.

  58. Tom,
    For starters, I’m not sure why you think you get to be the error checkers. Also, it seems that you’re working from the framework that finding some error in some established view provides validity to some alternative, but that doesn’t really make sense. The validity of some alternative should be subjected to the same level of checking; it shouldn’t really depend on tests you’ve made of the “established” view.

    Also, in this kind of context, everything has flaws. No understanding is perfect. So, your “method” runs the risk of falling into the “impossible expectations” trap. Finding a flaw is one thing, establishing the significance of a flaw is another.

  59. Tom Fuller says:

    ATTP, you are correct–but that doesn’t change either our mission or the dichotomy in roles. We don’t need to offer an alternative. Our expectations might be ridiculously impossible.

    It is obvious, looking over the past couple of decades, that consensus scientists have a roughly correct view of many of the forces changing our climate. It is obvious that their broad predictions of rising temperatures and sea levels are soundly based. Good job! Congratulations!

    But there is a wide range of possible outcomes inherent in your views of the future. Some amongst you are championing a narrow range at the most pessimistic end of that range. Most contrarians (well, the ones I like, anyways) focus on finding the errors (and there are many) in the calculations and prognostications made by those convinced that the most pessimistic view is correct.

  60. Tom,

    ATTP, you are correct–but that doesn’t change either our mission or the dichotomy in roles.

    Possibly, but that’s why it can be useful to develop taxonomies that describe the techniques that some might be using.

    But there is a wide range of possible outcomes inherent in your views of the future.

    Yes, of course, but it is still possible to influence the future and I tend to think we should actively try to do so.

  61. Tom,
    I should add that the goal of science, or research, is to try and understand things. The process is inherently complex and if we want to understand things that we don’t yet understand well, there will of course be results/conclusions that turn out to be wrong. This is a natural part of the process. Simply highlighting these “errors” generally neither really helps us to better understand things, nor makes a constructive contribution to the process.

  62. Tom Fuller says:

    ATTP, obviously I agree, far more than most contrarians. I have tried to do my part. But that doesn’t mean I agree with those who are out at the end of the consensus range, nor does it mean I won’t try and point out the flaws in their arguments. (I was going to write science instead of arguments, but that’s really beyond my competence. Decidedly lower math brought me to my lukewarm view, but the beauty of lower math is that it is usually rock solid and easily comprehensible.)

  63. Bob Loblaw says:

    “We don’t need to offer an alternative.”

    We don’t need to listen to you. We don’t need to have any respect for what you say.

  64. Tom,

    But that doesn’t mean I agree with those who are out at the end of the consensus range, nor does it mean I won’t try and point out the flaws in their arguments.

    Sure, I don’t agree with them either. I tend to think that those who cherry-pick the extreme outcomes are essentially doing the same as those who cherry-pick on the other side (i.e., assume that everything will be fine). I will add, though, that there is a difference between someone who claims that the worst-case outcome is now unavoidable and someone who highlights that maybe we should actively avoid worst-case outcomes because the impacts could be severe.

  65. dikranmarsupial says:

    Tom “We don’t need to offer an alternative.”

    You do if you want to argue for a different course of action, which requires support for a different distribution of likely outcomes of our previous actions than that given by the established view.

    ATTP “I should add that the goal of science, or research, is to try and understand things.”

    Indeed, science is a search for the best explanation of reality. Poke all the holes you want, if it is still the best explanation, it is still the best explanation.

    Ironically these error checkers would do better if they checked for errors in their own positions, or were willing to defend their own work (yes Tom, I am talking about your book, which is an excellent example of cherry picking and quote mining, and has little or no genuine validity) or acknowledge their own errors. But they don’t and continue to spout the same old canards thinking they have identified errors in the consensus position.

  66. dikranmarsupial says:

    “(I was going to write science instead of arguments, but that’s really beyond my competence. Decidedly lower math brought me to my lukewarm view, but the beauty of lower math is that it is usually rock solid and easily comprehensible.)”

    so special and general relativity are wrong and Newtonian physics is correct?

    It is monumental hubris to think that a back of the envelope calculation (“lower math”) is more reliable than detailed calculations by people that have actually studied the science. Or that the truth should be “easily comprehensible” to someone who openly admits that the science is beyond their competence. It is astonishing that someone in that position thinks they are able to perform useful error checking of the state of the art. Skepticism needs to start with self-skepticism.

  67. Willard says:

    > Both assertions are incorrect. And that is obvious.

    Arguing by pure contradiction is something contrarians always do. Team Science cannot afford to operate by provocation and empty assertion. Yet this is a perfect weapon for Team Luckwarm.

    Note that I am talking about teams here. Players do as they please.

    The first prototype of Climateball was not a board game, but American football. Ridicule is a weapon every Climateball player should learn early, including the assistant captain of Team Luckwarm.

  68. Willard says:

    > if these flaws prove fatal

    Big if.

    Even then, unless and until we have an alternative, the Established View stands, warts and all. That is known since Kuhn at least. In science as everywhere else, we replace a game plan with another game plan, not by moving around aimlessly.

    Many contrarians seem to be under the impression that because they play the Movie Critic their whole team is defined by that role. As if writing FUD in op-eds should be assigned a special worker status. Even auditors have to work a bit harder than that. The results of their constructive input get integrated into the state of the art. And when auditors do constructive work, they get to be part of Team Science.

    John was at least correct in emphasizing impossible demands. The whole auditing business is after all based on it.

  69. dikranmarsupial says:

    “They just have to point out flaws in the Established View. If those flaws prove fatal,”

    as Willard says, a big “if” indeed. Have contrarians pointed out any substantive flaws in the “Established View” (perhaps as expressed by the IPCC WG1 reports) that have withstood scrutiny of the scientific community?

  70. Willard says:

    Take Nic, the best player Team Luckwarm has. He spot checks papers, and sometimes finds stuff that needs to be corrected. Is that the special role contrarians ought to take? Not at all.

    Everybody is a critic.

    Take what I did in my first comments in that thread. I spotted glaring errors in John's taxonomy. What then? Nothing. Unless and until I produce an alternative, John's work stands. It will get promoted, warts and all. Which is why I started to work on my own thing. Same for Nic. Until he started to publish papers or contact scientists to show them his work, nobody really cared about what he had to say. So I am with Bob on this.

    Now, the analogy breaks at four points. First, John’s taxonomy is a model, not a theory. Second, I am the one who defends the established view in fallacy fluff, which is called the dialogical approach:

    https://plato.stanford.edu/entries/fallacies/#DiaAppFal

    Third, the established view on fallacies is on shakier grounds than radiative transfer theory. I believe fallacy fluff ought to be burned down to the ground. Even then, until I show a better way the Stanford Encyclopedia remains as is.

    Fourth, fallacy theory is less constrained by reality than physics. Disagreement is pervasive everywhere, but it has a special place in conceptual disputes. In empirical sciences we have means to settle facts that matter in choosing theories. Less so in interpretative fields.

  71. Joshua says:

    > He spot checks papers, and sometimes finds stuff that needs to be corrected.

    Nic’s spot checking is an interesting case in point. How has he, how has the “skeptic” community, responded to his conclusion that Sweden had reached “herd immunity” in May 2020 (followed quickly thereafter by New York, London, India, etc.)?

    I know I’m a broken record, but I think this speaks to the larger issue in play here: we can argue about “science” and “truth” and fallacies all we want, but imo it has limited utility in the real world unless contextualized by ideology, psychology, cognitive biases, etc.

    Hence, I think that to the extent the rubric might have value, it’s not as a starting point (not to say anyone’s argued that it is).

  72. Joshua,
    Yes, but isn't that an illustration of how other information can be useful when assessing what people are promoting? In the case of Nic Lewis, it's that he has a tendency to present work suggesting that the impact of something will probably be less severe than others suggest, and that he never seems willing to acknowledge when he was clearly wrong.

  73. dikranmarsupial says:

    The problem with this sort of thing is how can we talk about ideology, psychology, cognitive biases, etc. in specific cases without it being an ad-hominem (or reasonably interpretable as an ad-hominem)? For climate in the UK, I suspect it is mostly cognitive biases rather than ideology (living standards under threat), and it isn’t clear to me that there is value in including that in a discussion about the science. I suspect that is a feature rather than a bug for those that argue about the science as a means of avoiding discussion of values/ideologies etc.

  74. Tom Fuller says:

    Mr. Loblaw, I will not lose sleep from your lack of respect. As I’m sure you can intuit, it is mutual.

    dikranmarsupial, feel free to point out where I dispute or disparage special and/or general relativity.

    And dikranmarsupial again, back of the envelope lower math calculations have served me well. If you recall, I was one of the first to point out that cumulative emissions this century were of such a scale that they (in and of themselves) would not serve as support for calculations of high ECS. Simple addition.

    Nic Lewis has served science well in some respects. He has made errors in others and exhibits a strong ‘political’ bias in his arrangement of arguments to support his position. I believe the technical term for that is ‘being human.’

  75. Willard says:

    Have you tried to contact Nic, J?

    I think you are right – fallacy fluff is far from being a starting point. Parts of it are (we teach critical thinking early in the curriculum), but without getting acquainted with the social context of online food fights, its relevance gets lost. So we need a bit of sociology before understanding fallacy fluff.

    I doubt we would do any better starting with models of cognitive biases and motivated reasoning. Bias theory may be in worse shape than fallacy theory. The very concept of bias is fraught with connotations it should not have. Many fans convey a value judgment that seems to presume a kind of rationality that does not exist.

    I am saying this because I believe the two areas of research suffer from the same problem. Biases, like fallacies, are fairly robust heuristics. They serve us quite well. Everybody knows they have limits.

    Take the ad hominem. Asshats may not be wrong, but they're still asshats. Why listen to them? Life is short. To cut off asshats from one's life is a good thing.

    Take loss aversion. We hate to lose, therefore we fail to cut our losses. Still, it makes sense to hold on to what we value and stay the course. As Jack Bogle says, buy and hold may not be optimal, but it is still better than an infinity of sexier investment paths. Vanguard is too successful to argue that being risk averse is irrational.

    It all depends on what one should expect from such heuristics. There is no view from nowhere, no silver bullet, no free lunch anywhere, including in rhetoric, psychology, and finance.

  76. Tom,

    If you recall, I was one of the first to point out that cumulative emissions this century were of such a scale that they (in and of themselves) would not serve as support for calculations of high ECS. Simple addition.

    Firstly, this hasn't actually happened yet, despite what some might claim. Suggesting that you were right about a prediction for 2100 seems a bit premature. Secondly, the ECS is a model metric, so it is independent of cumulative emissions (it's simply the equilibrium warming after doubling atmospheric CO2).

    Also, if you do end up being right about cumulative emissions being lower than had previously been expected, there are a number of reasons why this might be the case. It could be that we actively did things to limit how much was emitted, or it could be that it wasn’t actually possible to emit as much as had initially been suggested.

    I’ll be very pleased if we do limit emissions so that the impacts are not severe. I will, however, almost certainly be irritated by those who suggested there was little to worry about arguing that they were therefore right, without acknowledging that the reason we probably limited emissions is because we were concerned about what might have happened if we didn’t.

  77. dikranmarsupial says:

    “dikranmarsupial, feel free to point out where I dispute or disparage special and/or general relativity.”

    I suspect you know perfectly well that it was an obviously false example used to demonstrate that your position on "lower maths" was a very weak one. If you try and apply Newtonian physics to a relativistic situation then you only need "lower maths" and it is "easily comprehensible", but the answer will not be "rock solid", it will be "wrong". If you used that to try and find flaws in astrophysics, you would make a fool of yourself. The same is true of radiative physics; sure, you can make a simple model of the climate with "lower maths" that is "easily comprehensible", but it will also be wrong. I've looked into it enough to know that you need a lot more than "lower maths" to make any useful contribution.

    “And dikranmarsupial again, back of the envelope lower math calculations have served me well. If you recall, I was one of the first to point out that cumulative emissions this century were of such a scale that they (in and of themselves) would not serve as support for calculations of high ECS. Simple addition.”

    You have just demonstrated my point. Nobody would argue that the direct effect of CO2 radiative forcing results in a high ECS; it is all about the feedbacks. If those feedbacks are not in your "Simple addition", then your back of the envelope calculation is meaningless nonsense and demonstrates substantial ignorance on your part. If you think you have pointed out a flaw in the mainstream position on the science, then you are making a fool of yourself as much as you would by applying Newtonian physics in a relativistic context in an attempt to point out a flaw in special/general relativity.
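
    To make the feedback point concrete with deliberately round, textbook-style numbers (an illustrative sketch with assumed values, not anyone's published estimate): the no-feedback (Planck-only) response to a CO2 doubling is only a little over 1K, and it is the feedback terms that take the estimate towards the canonical ~3K.

        # Illustrative zero-dimensional energy-balance sketch (all values assumed, round numbers)
        F_2x = 3.7            # W m^-2, approximate forcing from doubling CO2
        lambda_planck = 3.2   # W m^-2 K^-1, Planck (no-feedback) restoring response
        net_feedbacks = 2.0   # W m^-2 K^-1, assumed net positive feedbacks (water vapour, ice albedo, ...)

        ecs_no_feedback = F_2x / lambda_planck                      # ~1.2 K
        ecs_with_feedback = F_2x / (lambda_planck - net_feedbacks)  # ~3.1 K

        print(f"No-feedback ECS ~ {ecs_no_feedback:.1f} K")
        print(f"With feedbacks  ~ {ecs_with_feedback:.1f} K")

    A "simple addition" of forcings that leaves out the feedback terms cannot say anything about whether ECS is high or low.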

  78. Bob Loblaw says:

    Tom: “Mr. Loblaw, I will not lose sleep from your lack of respect. As I’m sure you can intuit, it is mutual.”

    Based on your general behaviour in this blog and others, I would expect no less.

    “I know you are, but what am I?” stopped being a convincing argument when I was about six years old.

  79. Tom Fuller says:

    Hi ATTP,

    I forgot that I didn’t make that argument here–I think you’ve gotten the gist of what I wrote wrong.

    My argument was that emissions for the first part of this century were very large–I took the numbers from CDIAC. That these emissions occurred during the notorious ‘Pause’ in the rise of GAT suggests strongly (to me, and to other contrarians) that, even understanding the normal lags between forcing and responses, the lack of an atmospheric response to a truly large pulse of CO2 in a short time, somewhat undercuts (but does not destroy) arguments for high ECS.

  80. Tom,

    That these emissions occurred during the notorious ‘Pause’ in the rise of GAT suggests strongly (to me, and to other contrarians) that, even understanding the normal lags between forcing and responses, the lack of an atmospheric response to a truly large pulse of CO2 in a short time, somewhat undercuts (but does not destroy) arguments for high ECS.

    Okay, that’s silly. The “pause” (which didn’t really happen) doesn’t somehow undercut arguments for high ECS (although this might depend on what you mean by high). The timescale for equilibrium is long. The “pause” wasn’t. The range for the ECS has narrowed a little, but that’s mostly because more studies have been done, than because warming was slow during a relatively short period when emissions were high.

  81. dikranmarsupial says:

    “Take the ad hominem. Asshats may not be wrong, but they’re still asshats. Why listen to them? Life is short. To cut off asshats from one’s life is a good thing, “

    Again, this may be a feature rather than a bug for the Asshats in question. By behaving as an Asshat towards scientists that were initially friendly/helpful, you protect yourself from their later criticism if you upset them so much by your behaviour that they can’t even bring themselves to say your name.

    It works the other way too with “argument from authority” (e.g. consensus messaging). Perfectly reasonable way to gain an informed opinion if you don’t have the background to judge the issue for yourself.

  82. Tom Fuller says:

    Obviously, the argument against TCR is even stronger.

    But if you recall, in the late 1980s and throughout the years that followed, the consensus argument was (simplified for brevity’s sake) that large increases in emissions since the end of WWII were a direct and strong contributor to the warming that we all saw during that time frame. But emissions since 2000 are much higher than during that time frame and we did not see a concomitant rise in GAT. Obviously temperatures have started rising again, so my argument loses a bit of its force. But it isn’t rising as quickly as it did during the previous phase of the Current Warming Period.

  83. dikranmarsupial says:

    Tom wrote:

    That these emissions occurred during the notorious ‘Pause’ in the rise of GAT suggests strongly (to me, and to other contrarians) that, even understanding the normal lags between forcing and responses, the lack of an atmospheric response to a truly large pulse of CO2 in a short time, somewhat undercuts (but does not destroy) arguments for high ECS.

    Logical fallacy (single cause)

  84. dikranmarsupial says:

    “large increases in emissions since the end of WWII were A direct and strong contributor to the warming that we all saw during that time frame.” [EMPHASIS mine]

    Note the multi-decadal timescale on which ENSO would largely average out.

    “But emissions since 2000 are much higher than during that time frame and we did not see a concomitant rise in GAT.”

    Well we’ll add “cherry picking” to the list. This is a much shorter timescale on which ENSO may not cancel out.

    “Obviously temperatures have started rising again, so my argument loses a bit of its force.”

    You can’t lose force from zero Newtons.

    “But it isn’t rising as quickly as it did during the previous phase of the Current Warming Period.”

    If there was never any statistically significant evidence for the existence of a pause, I don’t fancy your chances of providing statistically significant support for that assertion.

    I suspect that is “reinterpreting randomness”.

    So perhaps FLICC can be applied in the wild after all? ;o)
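
    As a purely synthetic illustration of the cherry-picking and reinterpreting-randomness point (made-up numbers, not real temperature data): generate a series with a constant underlying trend plus ENSO-like autocorrelated noise, and look at how much a fitted trend wanders over 15-year windows.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic series: constant 0.018 K/yr trend plus AR(1) "ENSO-like" noise (assumed values)
        years = np.arange(1970, 2023)
        true_trend = 0.018
        noise = np.zeros(len(years))
        for i in range(1, len(years)):
            noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
        temps = true_trend * (years - years[0]) + noise

        # Least-squares trend fitted over every 15-year window
        window = 15
        trends = [np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
                  for i in range(len(years) - window + 1)]
        print(f"true trend: {true_trend:.3f} K/yr")
        print(f"15-yr fitted trends range from {min(trends):.3f} to {max(trends):.3f} K/yr")

    Even with an unchanging underlying trend, some 15-year windows will look like a "pause" and others like an acceleration, which is why short windows carry so little statistical weight.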

  85. Tom Fuller says:

    ATTP, I put ‘Pause’ in quotes for a reason. I know that temps rose a little during that period. But if Jim Hansen and Gavin Schmidt could call it a ‘stall’ or a ‘pause,’ so can I.

    By ‘high ECS’ I am referring to those who argued for ECS of 6C and higher. We all know who they were.

    My argument is not about the ‘Pause.’ My argument is that scientists said emissions from WWII to 1988 (when Hansen made his famous presentation) caused most or all of the warming experienced during that time frame. A larger pulse of emissions in a shorter time frame did not cause an equivalent rise in temperatures. What I wrote five years ago is that the two together could not be used as an argument for high ECS.

    As ECS is an accounting fiction (just as GAT is), people can and do put different time frames on how best to estimate it. As most work on impacts focuses on the end of this century, that works for me. If we shifted the argument to TCR my argument would be even stronger.

    I’m not being silly, ATTP. Obviously I may be incorrect, but I’m not being silly.

  86. Willard says:

    > I believe the technical term for that is “being human.”

    Some might argue that the lack of a reasonable response to a truly large divergence in his estimate of herd immunity in a short time, somewhat undercuts (but does not destroy) arguments for Nic's INTEGRITY ™.

    * * *

    “But Da Paws” and “But RCPs” provide a perfect illustration as to why the dialectical model of fallacies rocks. It’s bait that serves the purpose of peddling the luckwarm message in the thread. Does the FLICC help recognize pure bait? There is a mention of Red Herring, but that’s not exactly the same thing.

    A red herring is an ignoratio elenchi – it ignores the question. What was the question again? It was in response to the difference between Newtonian and Einsteinian theories, which was connected to our assistant captain's Damascus moment. And why was this testimony invoked? Because Team Luckwarm cannot resist mentioning alarmists. Again, why mention alarmists? Because AT was right to point out that science was about understanding, and that in the end We Are Science, contrarians included.

    All in all, a big string of Buts after a very short Yes.

    *This* matters more than being able to identify fallacies.

  87. dikranmarsupial says:

    Having checked the definition, perhaps there ought to be a “pareidolia” category in there somewhere (logical fallacy or cherry picking?). Reinterpreting randomness is the opposite of “cock-up theory” (c.f. Bernard Ingham)

  88. dikranmarsupial says:

    “A larger pulse of emissions in a shorter time frame did not cause an equivalent rise in temperatures.” a cherry picked period.

  89. Tom,

    Obviously, the argument against TCR is even stronger.

    No, it's not. AR6 has a TCR likely range of 1.4K to 2.2K with a best estimate of 1.8K, while AR5 had a likely range of 1K to 2.5K. Yes, the range has narrowed, but the best estimate is probably about the same (and if you look in Table 9.5 of AR5 they give a TCR of 1.8K ± 0.6K). You're trying to take credit for having shown something that virtually no one who works on this would probably agree with.

  90. dikranmarsupial says:

    “I’m not being silly, ATTP. Obviously I may be incorrect, but I’m not being silly.”

    You are mainly being silly in your confidence in having found a flaw in the experts' position when your own position is riddled with flaws. It is difficult to make yourself look silly by being wrong if you do so with a bit of humility.

  91. Willard says:

    Reinterpreting randomness is the best explanation of conspiracy ideation. Misinterpretation works differently. Consider:

    [CLOV] That these emissions occurred during the notorious ‘Pause’ in the rise of GAT suggests strongly (to me, and to other contrarians) that, even understanding the normal lags between forcing and responses, the lack of an atmospheric response to a truly large pulse of CO2 in a short time, somewhat undercuts (but does not destroy) arguments for high ECS.

    [HAMM] Okay, that’s silly. The “pause” (which didn’t really happen) doesn’t somehow undercut arguments for high ECS (although this might depend on what you mean by high). The timescale for equilibrium is long. The “pause” wasn’t. The range for the ECS has narrowed a little, but that’s mostly because more studies have been done, than because warming was slow during a relatively short period when emissions were high.

    [CLOV] I’m not being silly.

    The victimization again distracts from the point being made.

    An argument can both be wrong and silly, BTW.

  92. dikranmarsupial says:

    Tom “My argument is that scientists said emissions from WWII to 1988 (when Hansen made his famous presentation) caused most or all of the warming experienced during that time frame.”

    So can you tell me what was the other large change in the atmosphere over that timespan that was not the case for 2000 onwards?

  93. dikranmarsupial says:

    "My argument is not about the 'Pause.' My argument is that scientists said emissions from WWII to 1988 (when Hansen made his famous presentation) caused most or all of the warming experienced during that time frame. A larger pulse of emissions in a shorter time frame did not cause an equivalent rise in temperatures. What I wrote five years ago is that the two together could not be used as an argument for high ECS."

    Was anybody actually using them as an argument for high ECS, or is that a straw man?

  94. Hi ATTP,

    As I said, maybe I’m incorrect. AR6 did not change its central ECS estimate of 3C, although it fiddled around yet again with the lower and higher bounds. I couldn’t find their estimate of TCR in the Summary for Policy Makers, but that’s probably because I’m time-constrained.

    Guess we’ll see.

  95. Tom,

    By ‘high ECS’ I am referring to those who argued for ECS of 6C and higher. We all know who they were.

    I don’t think anyone credible has ever suggested that the ECS is > 6C.

    My argument is that scientists said emissions from WWII to 1988 (when Hansen made his famous presentation) caused most or all of the warming experienced during that time frame. A larger pulse of emissions in a shorter time frame did not cause an equivalent rise in temperatures. What I wrote five years ago is that the two together could not be used as an argument for high ECS.

    This completely misunderstands these basic arguments. No one is suggesting a truly one-to-one relationship between emissions and warming on all timescales. It's well known that there can be variability of order 0.1K on timescales of a decade or so. So, anthropogenic emissions could have caused most of the warming between WWII and 1988, and most of the warming between 1998 and 2016, without there being some major inconsistency.

    Plus, IIRC, Hansen’s argument in 1988 was that a signal was emerging that could be attributed to anthropogenic emissions. It was explicitly related to the signal then just about becoming larger than the noise (natural variability). It wasn’t explicitly a claim that the warming was mostly anthropogenic, although I suspect that this follows from the signal having started to emerge.

    As ECS is an accounting fiction (just as GAT is), people can and do put different time frames on how best to estimate it.

    No, the ECS and TCR are well-defined. There are, of course, ways of estimating them that require assumptions, but this doesn’t mean that they don’t have a definition (how much global surface warming will there be if atmospheric CO2 concentrations are doubled – transient and equilibrium).

  96. Tom,
    I gave you the TCR estimate. My point is that the narrowing of the range is almost certainly because more work has been done than because a relatively short period of slower than expected warming fundamentally changed our estimates of climate sensitivity.

  97. dikranmarsupial says:

    “… but that’s probably because I’m time-constrained.”

    you are time constrained?

    ;o)

  98. Willard says:

    I point at:

    (1) Obviously, the argument against TCR is even stronger.

    and I point at:

    (2) I couldn’t find their estimate of TCR in the Summary for Policy Makers, but that’s probably because I’m time-constrained.

    But that’s not all. Here’s one estimate:

    [Footnote 41] In the literature, units of °C per 1000 PgC (petagrams of carbon) are used, and the AR6 reports the TCRE likely range as 1.0°C to 2.3°C per 1000 PgC in the underlying report, with a best estimate of 1.65°C.

    Source: https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf

    TCRE stands for Transient Climate Response to cumulative CO2 Emission.

    Perhaps I’m wrong in identifying TCR with TCRE, but I’m not silly.

    Every Climateball episode ought to provide a good reason to RTFR.
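
    For a sense of what a TCRE-style number implies (round numbers for illustration only, not an attempt to reproduce any assessed figure): warming from CO2 scales roughly linearly with cumulative carbon emitted, so the best estimate multiplied by a cumulative total gives a quick back-of-the-envelope warming figure.

        # Back-of-the-envelope TCRE arithmetic with illustrative round numbers
        tcre_best = 1.65            # K per 1000 PgC (the best estimate quoted above)
        cumulative_emissions = 600  # PgC, an assumed illustrative total, not an official figure

        co2_driven_warming = tcre_best * cumulative_emissions / 1000.0
        print(f"~{co2_driven_warming:.1f} K of CO2-driven warming for {cumulative_emissions} PgC emitted")

    Note that TCR (warming at the time CO2 concentrations double) and TCRE (warming per unit of cumulative carbon emitted) are different quantities, which is part of what the RTFR is for.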

  99. Joshua says:

    > If those feedbacks are not in your “Simple addition”, then your back of the envelope calculation is meaningless nonsense and demonstrates substantial ignorance on your part.

    This reminds me of the problem with Nic’s math about the pandemic (not to suggest others didn’t make similar errors).

    I will (in a not technically knowledgeable way) note that the "pause" [logic] has a problem related to the time-frame problem that Anders mentioned: it's based on measuring surface air temps, which is hardly comprehensive. Even technically renowned "skeptics" ignored that problem over and over (I think we all know who I'm most particularly speaking about) – reflecting the issues of bias and selective focus.

    I thought this was an interesting juxtaposition:

    > Yes, but isn't that an illustration of how other information can be useful when assessing what people are promoting? In the case of Nic Lewis, it's that he has a tendency to present work suggesting that the impact of something will probably be less severe than others suggest, and that he never seems willing to acknowledge when he was clearly wrong.

    And

    > The problem with this sort of thing is how can we talk about ideology, psychology, cognitive biases, etc. in specific cases without it being an ad-hominem (or reasonably interpretable as an ad-hominem)?

    And (duly noted)

    > He has made errors in others and exhibits a strong ‘political’ bias in his arrangement of arguments to support his position.

    Which goes back to my point. Those different components are true and critical (imo) – even though they might seem contradictory. Which goes back to my point that context is critical even if it is necessarily subjectively determined.

    I think that the biggest hitch is when we go from the specific (say Nic's work on climate and his work on covid) to the general (when we place him into the constellation of the climate discussion – most notably as a "skeptic", or into the covid discussion as a libertarian-type anti-lockdowner).

    That, I think, is where we lose sight of interests and focus on positions. "skeptics" aren't skeptics, they're people who have an ideologically-based position on climate, and who use reasoning, sometimes fallacious and sometimes not, to make arguments related to that orientation. And "realists" are mostly, fairly described in a similar manner (of course there are exceptions and nuance on both sides.)

    I think a related problem is when we personalize all of this – although personalization is key for understanding, it’s mostly meaningless in terms of the larger implications (the dilemma of the ad hom illustrated by Anders’ and dikran’s and Tom’s comments).

    In my view, this largely goes back to operational mode. Is our mode position- or interest-based?

    And that’s where I think that cognitive and psychological factors are key because they can often push us into the interest-based mode.

    There’s actually a world where Tom and I, as an example, might actually have a discussion where mutual benefit was openly embraced (or at least I like to think he might be open to a benefit) and explicitly agreed upon as a mutual goal.

    To bring it back to the rubric of fallacies – yes, it could be useful in some theoretical world. In that world Tom and I might sit down together as people with shared interests, who could, with humility, explore the rubric together in its application to climate change. But the horse has to be before the cart.

  100. Joshua says:

    Oops…

    > because they can often push us into the interest-based mode

    Should be that they often push us into the position-based mode.

    BTW, I wouldn't be surprised if a lot of that was too much in shorthand to be decipherable, but I hope the gist comes across at least.

  101. dikranmarsupial says:

    The trouble is, if you are arguing about something like ECS, the only benefit that can be mutual for me is whether we are both interested in reaching the truth, rather than whether it furthers a particular political/economic position or fits some social identity. This is because ECS doesn't depend on politics; if politics governs someone's acceptance of the science, that is their error and it would be dishonest for me to pretend to "share values" with them on that. If it were a discussion about what to do about climate (or climategate – which was never really about the science) it would be a different matter entirely, as "interests" are a legitimate consideration there.

    I've tried very hard to avoid ad-homs against Tom and tried to focus it on his arguments and his manner of presenting them. The reasons his arguments are wrong are technical and I (and ATTP) have pointed them out, but you do make yourself look silly by making overly confident criticisms that are riddled with technical errors and misunderstandings, and it is reasonable to point that out (Golden rule – I worry about "going emeritus" myself one day).

  102. dikranmarsupial says:

    "That, I think, is where we lose sight of interests and focus on positions. "skeptics" aren't skeptics, they're people who have an ideologically-based position on climate, and who use reasoning, sometimes fallacious and sometimes not, to make arguments related to that orientation."

    Yes, that is indeed mostly the case. I generally call them “skeptics” simply because we need a label and that one doesn’t cause them offence (and it avoids “persecuted victim” discussions about other labels that serve no useful purpose).

    “And “realists” are mostly, fairly described in a similar manner (of course there are exceptions and nuanced on both sides.)”

    I don’t agree with that. I don’t see why you can’t have a “realist” position based on an understanding/acceptance of the mainstream scientific position. I don’t see deferring to experts as an ideological-based position. Now I would say that “alarmists” may well have ideologically based positions, but I don’t think it is a fair categorisation of those who are merely following mainstream science (and bring in economics and politics afterwards when trying to decide what to do about it).

  103. thanks to Joshua for all of that, but this really struck me:
    ““skeptics” aren’t skeptics, they’re people who have an ideologically-based position on climate, and who use reasoning, sometimes fallacious and sometimes not, to make arguments related to that orientation. And “realists” are mostly, fairly described in a similar manner”

    I think I might prefer: “label” aren’t label, they are people who are identified by others as having an ideologically-based position, who use reasoning… etc.

    In that framework, I would be called an alarmist, but I am not an alarmist, I simply follow the science and the news and I am alarmed by what I see. My reasoning and analysis may be fallacious or not, but my take on the state of things is based in a good faith attempt to understand what is happening in and to the natural world and to engage in communication with others about those subjects.

    I am trying to figure out what it means to be interest-based or position-based. Sorry to be slow on getting that. A lot of this labeling and categorization just has no interest to me. I tend to stick to fundamental descriptive english because I think the labeling shorthand is as Joshua describes it, it seems reductionist and rhetorical to me. I don’t find that approach to be useful with understanding what others are trying to tell me, or at helping others understand what I am trying to say.

    I do commonly make one large categorization of others in the web-o-sphere and that is whether they post in good faith and a civil manner. I generally just ignore the folks who are not ticking those two boxes.

    I will re-read a bit to see if I can absorb the interest-based versus position-based distinction. I suspect this distinction might be deep and useful to me.

    Cheers
    Mike

  104. Joshua says:

    dikran –

    Maybe we aren’t working from a shared concept of positions versus interests. And I’ll acknowledge that sometimes, it’s very tricky to distinguish the two.

    But in this case, the shared interests would be something like not wanting people to unnecessarily suffer because of harsh conditions or a lack of access to energy. I think at the base, the vast majority of us share those interests. Of course, breaking those interests into smaller parts is very complicated. Does that mean we have a shared interest in mitigating warming? Prolly not. Or a shared interest in providing access to renewable-based energy instead of FF-based energy? Clearly not.

    So the main problem comes about when we stake out positions related to those interests. And I think that’s where a discussion of different interpretations of the science can take place – but only if the agreement on shared interests remains dominant.

  105. Joshua says:

    dikran –

    We have a similar view in the label of “skeptic.” I use “realist” (also in quotes) as the parallel label. They are merely labels and aren’t descriptive.

    So that said,

    > I don’t agree with that. I don’t see why you can’t have a “realist” position based on an understanding/acceptance of the mainstream scientific position..

    My use of “realist” isn’t descriptive. It’s hard to wrap your head around. But it’s just like how “skeptic” isn’t descriptive. We’ve all seen where some “skeptics” aren’t skeptical and some “realists” aren’t realist.

  106. Joshua says:

    mike –

    > I am trying to figure out what it means to be interest-based or position-based. Sorry to be slow on getting that.

    It's complicated and I'm not sure that it totally stands up to scrutiny (I'll guess Willard says it doesn't) but I think of it as one of those wrong-but-useful models.

    Gotta go water the gardens but I’ll try to get back later. A Google of “positions versus interests” would probably work.

  107. dikranmarsupial says:

    “Maybe we aren’t working from a shared concept of positions versus interests.”

    yes, quite likely, most probably because I may be a bit different (c.f. Willard’s cartoon up-thread ;o)

    “But in this case, the shared interests would be something like not wanting people to unnecessarily suffer because of harsh conditions or a lack of access to energy.”

    Well, I certainly share that interest, but it is totally irrelevant to discussions of the plausible values for ECS. I think it is likely that we would be acting against that interest for a discussion of ECS to be influenced by that – self-delusion is very rarely a good thing.

    “Does that mean we have a shared interest in mitigating warming?”

    This is very much my point, shared interests are very important for working out what to do about ECS, and should be discussed there, just not in discussions about the distribution of plausible values of ECS.

    “So the main problem comes about when we stake out positions related to those interests. “

    Our position on the science should not be related to those interests. The science and our interests should both be independent inputs into the discussion of what to do about climate change.

    "And I think that's where a discussion of different interpretations of the science can take place"

    Different interpretations of the science should be based on scientific considerations, not ones based on interests (which are irrelevant and it is a potential pitfall to be influenced by them).

    Unfortunately, while scientists can mostly have purely scientific discussions of interpretations of the science, we (apparently) can’t have discussions in the general public without interests. Unfortunately I don’t think there are any solutions to this, but I am pretty sure that not explaining why interests are irrelevant to the interpretation of the science (and tacitly encouraging their incorporation) is not a solution.

  108. dikranmarsupial says:

    “My use of “realist” isn’t descriptive. It’s hard to wrap your head around. But it’s just like how “skeptic” isn’t descriptive. We’ve all seen where some “skeptics” aren’t skeptical and some “realists” aren’t realist.”

    I think the problem with those labels is that it isn’t binary “skeptics” and “realists”, but more “skeptics”, “realists/mainstream” and “alarmists”, so it may cause less confusion if you used “alarmist” instead. I think that would probably be more intuitive?

  109. Willard says:

    > it is totally irrelevant to discussions of the plausible values for ECS

    It is totally relevant to the fact that you discuss them, however.

    We’re not truth machines. We discuss things because we care about them, and because we care about discussing them. We’re not here to recite decimals of Pi.

    You're perfectly within your rights to ask that those with whom you exchange care about truth. The point I am making is that truthfulness is as much a matter of authenticity as factuality. It requires both to make factive claims.

    When Galaxy Brain gurus stop being truthful, their line of work changes. Their fans might always have known they were indulging in truthiness, but when they realize that their guru does not even care whether what they say is true or not, then there's only theatrics. The fan base changes, and we get into tin foil hat territory.

    One good way to support one’s truthfulness is to take a principled stance on your values and your interests. One bad way is to shy away from your humanity. It’s bad because it disconnects you from the person you’re trying to convince.

    Even the Silver Surfer cares about humans.

  110. Tom Fuller says:

    [Playing the ref. -W]

    Bell curve stuff to some extent. The outliers at 2-plus standard deviations are commonly just wrong. It probably doesn't matter how or why folks stake out positions or fall in at 2-plus SD; the reality/truth is that the outliers are probably just pretty wrong at both ends of the curve.

    I do think that the rate and advance of AGW is not unlike the red shift that was noticed early in astronomy. If you review the predictions of rate and advance of AGW over time, I believe there is something akin to a red shift toward faster change/advance and impact. The bell curve still works, but it probably shifts in the direction of faster and more impact as time passes.

    Cheers
    Mike

  112. izen says:

    I think there needs to be a third category, conspiracists to the right of sceptics.

    FLICC may have been an effective weapon when climate science could still be denied or doubted due to the inherent uncertainties and sceptical scientific voices. But since AGW has become self-evident in the daily events of people's lives, the methods used to perpetuate the businesses that require continued emissions of CO2 have become simpler. Label climate science as part of the hoax sold by 'Them', along with vaccines and education. Unfortunately society has enough dissatisfied, disenfranchised people who find such claims credible, simple, and satisfying explanations for the inequalities they suffer. Certainly a simple story trumps a complex account every time….
    The percentage who prefer a conspiracy to reality will always exist when reality requires logical analysis and engagement with abstruse causal systems rather than an uncomplicated narrative.

  113. dikranmarsupial says:

    “it is totally irrelevant to discussions of the plausible values for ECS

    It is totally relevant to the fact that you discuss them, however.

    We’re not truth machines. We discuss things because we care about them, and because we care about discussing them. We’re not here to recite decimals of Pi. ”

    I would say that there is a spectrum of people that ranges from those very close to truth machines to people at the other end who have no interest in truths and are almost entirely emotionally driven. I discuss things like ECS because I care about truth, partly because I also care about rational policymaking on important issues. We are more likely to get rational policy if we can divorce the science from "issues" when we are discussing purely scientific questions. As an example, if your acceptance of the science is driven by the (entirely laudable) concern about depriving people of the benefits of fossil fuels, then that might encourage you to accept Tom's extremely poor ECS arguments. If you were to deploy them in a policy-making setting, where your opponents understand the science well enough to spot nonsense when they see it, you would be laughed at and you would have done a disservice to the issue you care about.

    Unfortunately we have multiple "interests" – not wanting people, largely in the developing world, to suffer from the worst effects of climate change is another, and it is antagonistic to the issue of people forgoing the benefits of fossil fuels (possibly also in the developing world). So we need to reach a compromise, and the optimal compromise is best reached by taking a view on the science that doesn't depend on our "interests" (where they are irrelevant and potentially misleading) so that we can use it as a good input to policy making, where our "interests" are of key importance.

    So in short, I am a “truth machine” on this particular topic, precisely because I care about people (like the silver surfer). I’m not shying away from my humanity. Indeed one of the things that makes us human is the ability to make the rational decision to override our cognitive biases where they are unhelpful.

    Like the silver surfer (apparently) I do get a bit frustrated by the way society behaves. It isn’t a good reason to abandon rationality myself.

    I’m not here to recite the digits of PI either (e perhaps, but not pi ;o)

    BTW there is a common “Autistic” caricature of science that I see quite a lot, but scientists are not actually like that and people with ASD generally are not much like that caricature either. You can be rational without being 1 dimensional.

  114. Willard says:

    Totally fair, Dikran.

    Your last comment was most persuasive to me. It shows passion and courage. You yourself are more interesting to me than the usual litany of facts we all know anyway.

    I have little Silver Surfers at home, so I might be biased.

  115. dikranmarsupial says:

    “One good way to support one’s truthfulness is to take a principled stance on your values and your interests. ”

    I completely agree with that. Rationality where rationality is required is one of my values. Honesty is also a value, which is why I can't take the "shared values" approach to scientific questions with someone who cannot accept my stance on the irrelevance of "interests" to scientific issues. I don't insist they share my view on this, and am happy to discuss the science with them anyway, but it would be dishonest to pretend I didn't take that position, and it is better to do so openly.

  116. “The trouble is, if you are arguing about something like ECS, the only benefit that can be mutual for me is whether we are both interested in reaching the truth, rather than whether it furthers a particular political/economic position or fits some social identity.”

    This is helpful to me with understanding your comments. Not that you would care, or that I care particularly either, but if there is an authentic attempt to exchange information, then these things may be helpful. In that sense, I read your comments because I think you make a pretty consistent authentic attempt to communicate and exchange information.

    I understand you somewhat better if/when I realize that you have a focus on something like ECS as a general rule. In that way, what I am interested in reaching a truth about is something quite different from ECS. The benefits accrue for me if we are both interested in reaching a truth about how we maintain a vibrant and generally healthy ecosystem on this planet.

    Each of these points of truth, the true ECS number or a healthy planetary ecosystem are embedded in very complicated biological/geological/climatological matrices. I think we can nail down all of the particulars of describing the truth of ECS or a healthy planetary ecosystem as soon as we finish calculating Pi. Until that time, we may need to accept that these “truths” are moving targets to a certain extent.

    Probably my last word on this. I have plenty to think about based on what has already been shared.

    Cheers
    Mike

  117. dikranmarsupial says:

    “I have little Silver Surfers at home, so I might be biased.”

    I think I need to research a bit on the silver surfer, I think I can vaguely remember the film…

  118. Windchaser says:

    A larger pulse of emissions in a shorter time frame did not cause an equivalent rise in temperatures.

    Ok, but.. so what?

    GHGs work with a time lag: they retain some extra heat in the atmosphere as soon as they're introduced, an effect that gradually decreases over time (i.e., as the atmosphere warms).

    From that fact alone, we wouldn’t expect a large pulse of emissions over a shorter time frame to have the same effect. We’d expect it to have a much smaller effect.

    TL;DR: short timeframes don’t tell you much about ECS

    And, I’m kinda frustrated with you here. You’ve been around the climate debate for, what, well over a decade? And you still haven’t learned the basics of how GHG are supposed to work, enough to critically analyze your own claims?
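
    A minimal one-box sketch of that lag, with assumed round numbers (the heat capacity, feedback parameter and forcing are illustrative, not tuned to any dataset): temperature relaxes towards F/lambda with a timescale of roughly C/lambda, so a forcing applied over a decade has only expressed a fraction of its eventual warming within that decade.

        import numpy as np

        # One-box energy balance: C dT/dt = F - lambda * T (all values assumed for illustration)
        C = 30.0   # W yr m^-2 K^-1, assumed effective heat capacity (ocean-dominated)
        lam = 1.2  # W m^-2 K^-1, assumed climate feedback parameter
        F = 1.0    # W m^-2, a step forcing switched on at t = 0

        t = np.linspace(0.0, 100.0, 1001)  # years, in 0.1-yr steps
        dt = t[1] - t[0]
        T = np.zeros_like(t)
        for i in range(1, len(t)):
            T[i] = T[i - 1] + dt * (F - lam * T[i - 1]) / C

        print(f"equilibrium warming F/lambda = {F / lam:.2f} K")
        print(f"warming after 10 years       = {T[100]:.2f} K")
        print(f"warming after 100 years      = {T[-1]:.2f} K")

    With ocean-scale heat capacity the response timescale is decades, so even a large pulse of forcing over ten or twenty years shows up only partially in surface temperature by the end of that period, which is one more reason short windows say little about ECS.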

  119. Joshua says:

    > if you used “alarmist” instead. I think that would probably be more intuitive

    Prolly would. But I'm stubborn. Sceptic and realist are both complimentary labels. Everyone wants to be appropriately skeptical and a realist. So that's why I use them, with quotation marks, to imply that the label isn't descriptive. I figure if I can't get to the point with someone where labels are clarified, and people don't take shit personally, then the discussion's mostly a waste of time anyway.

  120. dikranmarsupial says:

    Joshua “Sceptic and realist are both complimentary labels. ” very true, there is no positive spin on “alarmist”!

    “I figure if I can’t get to the point with someone where labels are clarified”

    I can’t guarantee to remember for next time though – I have a good memory for some things but not others.

  121. Joshua says:

    > but it is totally irrelevant to discussions of the plausible values for ECS.

    Not in my experience. It’s irrelevant to the science but not the discussions, because there are inherent uncertainties and if you’re arguing from positions you (almost inevitably) view the uncertainties differently than if you’re exploring shared interests. So then you need to explore interests as a frame and then you can discuss the uncertainties and the related positions.

    Just skimmed but I gather Willard said something similar. I’ll get back to the rest later.

  122. dikranmarsupial says:

    It’s irrelevant to the science but not the discussions, because there are inherent uncertainties and if you’re arguing from positions you (almost inevitably) view the uncertainties differently than if you’re exploring shared interests.

    I don’t agree. The uncertainties are epistemic and/or aleatory in nature, neither of which depend on our interests. How those uncertainties affect our decisions about what to do (how you view the uncertainties?) is where those interests are relevant, but not in the assessment of the uncertainty itself.

  123. jacksmith4tx says:

    The problem is nearly intractable because of our inability to think long term. Psychologist Daniel Kahneman, drawing on his work with Amos Tversky, wrote "Thinking, Fast and Slow" and pretty much closed the door on humanity rising to meet the challenge.
    “Behavioral economist and Nobel Memorial Prize winner Daniel Kahneman has described climate change as a “perfect storm” for the human brain. “It’s distant. It’s abstract,” he said. That’s why the overwhelming scientific case for human-caused climate change has failed to produce much action, especially in the United States. When asked what might be done about this roadblock, Kahneman said that paying attention to the powerful influence of religion and spirituality might move the needle. “That would change things,” he said. “It’s not going to happen by presenting more evidence.”

    Behavioral economics is what gave us negative real interest rates, global debt north of 300 trillion dollars and an estimated world debt-to-GDP ratio of 207%. Looks strikingly like our collective response to climate change.

  124. dikranmarsupial says:

    Reminds me of Bayesian decision theory. If we have a medical screening test, there are usually two components, a system to predict the probability the patient has some disease, given the symptoms they report, and a threshold probability that tells us whether it is worth sending the patient for more expensive or invasive tests. The setting of the threshold depends on various “interests” – additional tests may have financial costs (so we want to avoid false positives when we perform expensive tests on healthy patients), but also we have a problem with false negatives – sending patients away as “probably healthy” when they are not, and may end up far more ill (or even dead) before they finally receive treatment. The prediction of the probability of the disease does not depend in any way on these “interests”, and including them in the design of the predictor can only make the performance of the predictor worse (and thus our expected losses higher). Analogously, our understanding of the science of climate doesn’t benefit in any way by considering “interests”, and it cannot help us make better decisions later (where interests are relevant and should be considered) if we have a flawed assessment of the science that has been irrationally biased by our “interests”.
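
    A minimal sketch of that separation, with made-up numbers throughout (the toy probability model and the costs are both assumed): the probability estimate is produced without any reference to costs, and the "interests" enter only when the decision threshold is set from the relative losses of false positives and false negatives.

        import math

        # Toy probability-of-disease model; it knows nothing about costs or "interests"
        def predicted_probability(symptom_score):
            return 1.0 / (1.0 + math.exp(-(symptom_score - 2.0)))  # assumed logistic form

        # The costs encode the "interests": a missed case is assumed far worse than an unnecessary test
        cost_false_positive = 1.0   # cost of sending a healthy patient for further tests
        cost_false_negative = 20.0  # cost of sending an ill patient home untreated

        # Refer when the expected loss of referring, (1 - p) * cost_fp, is below
        # the expected loss of not referring, p * cost_fn, i.e. when p exceeds:
        threshold = cost_false_positive / (cost_false_positive + cost_false_negative)

        p = predicted_probability(symptom_score=1.0)
        decision = "refer for further tests" if p > threshold else "send home"
        print(f"p(disease) = {p:.2f}, threshold = {threshold:.3f} -> {decision}")

    Changing the costs moves the threshold but never touches the probability model, which is the sense in which the assessment of the science and the "interests" are independent inputs to the decision.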

  125. dikranmarsupial says:

    jacksmith4tx wrote "Psychologist Daniel Kahneman, drawing on his work with Amos Tversky, wrote "Thinking, Fast and Slow" and pretty much closed the door on humanity rising to meet the challenge."

    Sadly, I agree with that assessment, and it isn't the only challenge we will fail to meet. Personally I think our best chance would be for bullshit to become socially unacceptable – surely there must be a point where society's appetite for it becomes sated and it starts to cloy?

    Very good book though!

  126. Joshua says:

    > if we have a flawed assessment of the science that has been irrationally biased by our “interests”.

    Just to clarify, I’m not suggesting that the science itself changes based on interests.

    Our understanding of the science, in the real world, often does.

    Our discussion of the science often does as well.

    But more importantly, imo, is that our discussion of the science should be rooted in establishing what our interests are – ideally with establishing where our interests are shared.

    Then, I think, it’s important to explore where we tend to assume that our (own) positions and our (own) interests are completely congruent when in fact they aren’t.

    I think that mostly what takes place is that people argue about positions, where they seem incompatible when, ultimately, progress comes from finding out the pathways to realizing common interests.

    This is the foundation to a particular approach to public policy development and conflict resolution. It’s not particularly realistic in the current environment. But in my view, there isn’t any particular other approach towards progress that’s likely to bear much more fruit than what we’re already harvesting.

    I hope there is but I doubt that there is. My sense is that the rate of progress will accelerate only to the extent that people see an unambiguous signal of harmful climate change in their day-to-day lives.

    I disagree with izen that the signs of climate change have become evident in the daily events of people's lives. And even to the extent that that might become true over one time horizon, it's likely to shift over longer time frames. We've seen this pattern, with the sense of urgency shifting in association with short-term weather phenomena (and economic conditions).

    I agree with Jack that there’s an inherent problem with human perceptions of the scale and rate of change.

  127. Joshua says:

    > I don’t agree. The uncertainties are epistemic and/or aleatory in nature, neither of which depend on our interests. How those uncertainties affect our decisions about what to do (how you view the uncertainties?) is where those interests are relevant, but not in the assessment of the uncertainty itself.

    I don’t think the uncertainties depend on our interests.

    I think the discussion of the uncertainties, usually, interacts with our interests. And it interacts with our positions also, and the interaction between our interests and our positions.

    I’m not saying that science or the uncertainties depend on our interests (or our positions).

  128. dikranmarsupial says:

    “Just to clarify, I’m not suggesting that the science itself changes based on interests.

    Our understanding of the science, in the real world, often does. “

    I think this is one of those is/ought problems. Our understanding of the science ought not depend on our interests. In practice, often it does, we are all human (even those at the truth-machine end of the spectrum) but usually to the detriment of our understanding of the science. We should strive to separate them for the good of both the science and our interests.

    “But more importantly, imo, is that our discussion of the science should be rooted in establishing what our interests are – ideally with establishing where our interests are shared. “

    our discussion of what we ought to do, with the science as an input to that discussion, should be rooted in establishing what our interests are, but not discussions of the science itself. Whether a line of scientific reasoning is valid or whether it is supported by the data simply does not depend on our interests and there is nothing to be gained by introducing them at that point in the discussion AFAICS.

    “I think that mostly what takes place is that people argue about positions, ”

    Absolutely. I think a large part of the problem is that people are not willing to concede scientific points that don’t fit with their position and adopt obviously incorrect scientific arguments (such as Tom’s) as a means of avoiding that concession. This is simply irrational, and often at the expense of promoting your position (see the Stuart Agnew clip I gave earlier). It would be better if people could accept that the science doesn’t support your position, concede that ground, and make a better argument where your interests are a valid consideration.

    But most of us simply cannot do that. That doesn't make it right or valid, though.

    "But in my view, there isn't any particular other approach towards progress that's likely to bear much more fruit than what we're already harvesting."

    I agree, if the only approach is one based on common interests, we can only solve the problem very partially, at best. But at the same time, it is no reason to adopt an irrational approach to the scientific discussion. There is no point saying something that is not true just because it is more acceptable.

    “I disagee with izen that the signs of climate change have become evidence in the daily events of people’s lives. … We’ve seen this pattern with the sense of urgency shifts in association with short-term weather phenomena (and economic conditions).”

    Indeed, it would be interesting to see if the (very steep) petrol price rises in the U.K. have had any effect on the number of miles driven on the road. It might give an indication of whether carbon taxes would actually result in the right changes in people's activities. It hasn't changed my driving at all, but then again I didn't do any non-essential travelling anyway (if I was playing cricket this season it would be a matter of opinion as to whether that was essential travel, I would argue it was ;o)

  129. dikranmarsupial says:

    Joshua “I think the discussion of the uncertainties, usually, interacts with our interests. And it interacts with our positions also, and the interaction between our interests and our positions.”

    yes, but that is a discussion about what we should do, given the science, which I have already said is where the interests are very relevant. I'm not sure of the distinction that I seem to be missing.

  130. Willard says:

    > The uncertainties are epistemic and/or aleatory in nature, neither of which depend on our interests.

    Again, the reason why we focus on these specific uncertainties depends on our interests. Nobody cares about the 174th decimal of Pi, or even e. We care whether next year's inflation will wipe out a tenth of our wealth, or whether extreme events will make wheat prices skyrocket.

    Not only do we need to take our own interests into account, we must also take otters'. This works even when dealing with artificial agents:

    By means of a comparative analysis of the literature on robots and virtual agents, we defend the thesis that approaching these artificial agents ‘as if’ they had intentions and forms of social, goal-oriented rationality is the only way to deal with their complexity on a daily base. Specifically, we claim that this is the only viable strategy for non-expert users to understand, predict and perhaps learn from artificial agents’ behavior in everyday social contexts. Furthermore, we argue that as long as agents are transparent about their design principles and functionality, attributing intentions to their actions is not only essential, but also ethical. Additionally, we propose design guidelines inspired by the debate over the adoption of the intentional stance.

    Source: https://doi.org/10.1007/s11023-021-09567-6

    If the concept of interest is too mushy, replace it with goals and think about how bots succeed in coordinating their actions to play soccer during the RoboCup. To conceive of intention as planning is unobtrusive, more transparent than the alternatives, and quite useful.

  131. dikranmarsupial says:

    "Again, the reason why we focus on these specific uncertainties depends on our interests."

    Yes, but only at the point where we are deciding whether some course of (in)action is appropriate/justified given the uncertainties, not where we are trying to determine what the uncertainties are. As ATTP says, "the goal of science, or research, is to try and understand things", and quantifying the uncertainties is part of that; it is intrinsically interesting. At least for some of us – some years ago I organised a machine learning challenge on predictive uncertainty (uncertainty quantification) in environmental modelling. It would have been more successful a decade later ;o)

    For planning we want an unbiased estimate of the environment in which our plans are executed. The environment doesn’t depend on our goals, but our plans do.
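
    As a purely illustrative sketch (synthetic data, nothing from that challenge), separating the two kinds of uncertainty with a bootstrap ensemble of simple fits might look something like the following: the spread of the ensemble's predictions stands in for the epistemic part, and the residual scatter for the aleatory part.

        # Toy sketch: epistemic vs aleatory uncertainty from a bootstrap ensemble.
        # The data are synthetic placeholders, not any real environmental series.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 50)
        y = 0.5 * x + rng.normal(0, 1.0, size=x.size)      # signal plus noise

        slopes, intercepts = [], []
        for _ in range(200):                                # bootstrap resamples
            idx = rng.integers(0, x.size, x.size)
            b, a = np.polyfit(x[idx], y[idx], 1)
            slopes.append(b)
            intercepts.append(a)
        slopes, intercepts = np.array(slopes), np.array(intercepts)

        x_new = 5.0
        preds = slopes * x_new + intercepts                 # ensemble predictions
        epistemic_sd = preds.std()                          # model (epistemic) spread
        aleatory_sd = (y - (slopes.mean() * x + intercepts.mean())).std()  # noise
        print(f"epistemic ~ {epistemic_sd:.2f}, aleatory ~ {aleatory_sd:.2f}")

    The split is crude, but the point stands: neither number depends on what anyone wants to do with the forecast.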

  132. dikranmarsupial says:

    Ideally we want to estimate the uncertainties in such a way that anybody can use them to decide whether plans are in accordance with their goals. This is sort of what the IPCC has done with its subjective assessment of ECS. If your view that ECS is most likely to be below 3C/doubling (i.e. you are a lukewarmer) is a justification for inaction, then you can make that argument, based on your interests, using the IPCC distribution. Whether it was convincing would depend on the quality of that argument (if we both agree on the IPCC distribution).

  133. Willard says:

    > For planning we want an unbiased estimate of the environment in which our plans are executed. The environment doesn’t depend on our goals, but our plans do.

    Agreed. You’re also right to underline intrinsic motivation. There’s no other way to lead an examined life. Even a half-examined artificial life, for that matter:

    Intrinsic motivation is often studied in the framework of computational reinforcement learning (introduced by Sutton and Barto), where the rewards that drive agent behaviour are intrinsically derived rather than externally imposed and must be learnt from the environment. Reinforcement learning is agnostic to how the reward is generated – an agent will learn a policy (action strategy) from the distribution of rewards afforded by actions and the environment. Each approach to intrinsic motivation in this scheme is essentially a different way of generating the reward function for the agent.

    https://en.wikipedia.org/wiki/Intrinsic_motivation_(artificial_intelligence)#Computational_models

    If these concepts are useful for bots, they ought to be useful for Climateball, I dare say!
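
    For the curious, here is a minimal, hypothetical sketch of intrinsic motivation in that Sutton-and-Barto sense: the reward is the agent's own prediction error ("curiosity") rather than anything imposed from outside. The two-armed toy environment and the learning rates are made up purely for illustration.

        # Minimal sketch of intrinsic motivation: reward = the agent's own
        # prediction error, not an externally imposed signal.
        import numpy as np

        rng = np.random.default_rng(1)

        def env(action):
            # Arm 0 is perfectly predictable; arm 1 is noisy, hence "interesting".
            return 1.0 if action == 0 else rng.normal(0.0, 1.0)

        predictions = np.zeros(2)   # the agent's model of what each arm returns
        values = np.zeros(2)        # learned value of each arm, from intrinsic reward
        alpha = 0.1                 # learning rate

        for t in range(2000):
            # epsilon-greedy choice over the intrinsic values
            action = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(values))
            outcome = env(action)
            surprise = abs(outcome - predictions[action])   # intrinsic reward
            predictions[action] += alpha * (outcome - predictions[action])
            values[action] += alpha * (surprise - values[action])

        print("learned values (arm 0 predictable, arm 1 noisy):", values.round(2))
        # The agent ends up preferring the arm it cannot fully predict.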

  134. dikranmarsupial says:

    Just an aside: I think it would be risky to attach intentionality to systems such as GPT-3, which are basically pattern-recognition systems. For instance, they have been used to build systems that can write code from simple natural-language specifications, but they have no understanding whatsoever of the code they have written; it is just *very* powerful pattern matching, and there is no intentionality there. I've only read the abstract, but it looks an interesting paper.

    ” The use contexts of everyday life necessitate making such agents understandable by laypeople.”

    They are not understood by the experts (I watched a very good lecture the other day from the founder of DeepMind, on AlphaFold etc.), so they are not going to be understandable by laypeople.

  135. Willard says:

    It’s not that hard to make people understand Alpha:

    Up to a point, of course. Nobody really knows how Alpha works, researchers included. Which is a big problem where ethical decisions are involved.

    High-frequency trading is all well and good but, to paraphrase Bob Pardo, if you don't know what your system is doing, you're not doing your job as a financial custodian.

  136. dikranmarsupial says:

    Up to a point, of course. Nobody really knows how Alpha works, researchers included. Which is a big problem where ethical decisions are involved.

    Very true! The point about "design principles" in the abstract of the paper was interesting. For systems that are not reinforcement learners (i.e. that don't learn by, e.g., self-play), so much depends on the training data: if you train a system on data that accurately represents a society with racism, you are likely to get a racist system. It is difficult to describe your data as a "design principle".

    The “Up to a point” is very important. I am *very* aware that most of my understanding of climate science is just an understanding of a bunch of gross simplifications/analogies that are a very long way from the research coal face.

    The paper may be a good starting point, though, for a future AI system that has logical reasoning (GOFAI) implemented on connectionist (pattern-matching) hardware. I think it is partly that combination that makes us what we are. Current AI systems are not so much Artificial Intelligence as Artificial Intuition, but that may not always be the case, and we will one day have AI programmers that actually understand what they are doing.

    Thanks for posting the link to the movie; I'll add it to my playlist of things to watch while at the gym.

  137. russellseitz says:

    Up to a point, of course. Nobody really knows how Alpha works, researchers included.

    The same may be said of ClimateBall, for no two playing fields are alike. The Graun reports that a literary exploration of a parallel system of games is in progress:

    https://www.theguardian.com/books/2022/jul/18/super-mario-brothers-karamazov-literature-begins-to-take-gaming-seriously

    and notes another parallel: many computer games, such as Super Mario Brothers, are subdivided into levels with discrete landscapes and rules of their own, which may approximate the echo-chamber territoriality of the Denialosphere, where corporate satrapies like Climate Depot and the No Tricks Zone compete with Ruritanian personal fiefdoms like Wattsland and Skydragonia.

  138. angech says:

    The problem with fully effective AI is at least twofold.
    One: the Isaac Asimov principles.
    Two: game-strategy principles.
    If the aim is victory, then anything goes,
    including pulling the plug on the other AI first.
    No morals, no principles, wins.
    "The ends justify the means" beats the win-at-all-costs mentality, but are the ends then worth having?

  139. dikranmarsupial says:

    In related news, a Google engineer has been fired for claiming one of its AIs is sentient and has feelings:

    https://www.bbc.co.uk/news/technology-62275326

    The link to the Twitter discussion and the transcript of the conversation (where the engineer and the AI discuss whether it is sentient) are quite interesting, as is the admission

    which argues that it is just a statistical pattern matcher (but a very good one) with a post-hoc anthropomorphic explanation. If it has a goal, it is the appearance of having a meaningful conversation, rather than actually having one.

    It is interesting that emotional processing is often used as an indication of intelligence, but I suspect that is just because it is what humans do, rather than actually being a component of sentience/intelligence.

  140. izen says:

    @-D
    “which argues that it is just a statistical pattern matcher (but a very good one) with post-hoc anthropomorphic explanation. ”

    This would seem to be a very good description of a lot of humans. Unless you subscribe to the idea that there is some 'special sauce' that makes human sentience qualitatively different from that of an AI.

  141. dikranmarsupial says:

    “This would seem to be a very good description of a lot of humans. ”

    A lot of blog commenters are not even very good pattern matchers; a lot of discussions are between first-order Markov processes with no memory of what they or their interlocutor wrote before the message to which they are responding ;o)

    With the kinds of systems that are getting attention at the moment, there is no "special sauce"; it is more that there is little or no rational reasoning involved, they just "know" things; as I said, intuition rather than intelligence. Human beings are different because we have both fast and slow thinking (cf. Kahneman), whereas the current crop of AI are only fast thinking.

  142. izen says:

    @-D
    “Human beings are different because we have both fast and slow thinking (c.f. Kahneman), whereas the current crop of AI are only fast thinking.”

    I must admit to being less than convinced by Kahneman’s fast/slow thinking concept, other than as a very rough simple metaphor for how persons actually think. I do share with Joshua the suspicion that for much/most of the time people are seeking the path with the least cognitive dissonance between their world view and any external information. I admire your adherence to a scientific discussion of fact, but I often find that in most exchanges –

    “If it has a goal, it is the appearance of having meaningful conversation, rather than actually having a meaningful conversation.”

  143. Dave_Geologist says:

    Willard, to be fair to Javier’s response on Judy’s blog, he was triggered by the stupidest piece of stupid I’ve seen from a Creationist for years. The smart ones have learned how not to make fools of themselves or their cause.

    They too use FLICC, of course.

    To paraphrase Joshua's comment:

    So, if I'm right, there are people who really do lack certain skills. And there are people who have the skills in a context-specific manner. And there are people who possess the skills within relevant contexts but apply them in a way that's effectively predicted by their ideological orientation.

    We’re not all stupid. But religion or politics makes some smart people stupid some of the time. The Venn Diagram is probably interesting. Not just “But God promised Noah he wouldn’t send another Flood”, but the well-documented attraction of certain religious groups to a certain type of politics, where they get sufficient influence to make their religion their politics and their politics their religion, even if an objective Supreme Being might stand back and say “Hang on, how does the Sermon on the Mount, the meek inheriting the Earth, or the old camel-and-eye-of-needle saw, square with that politics?”.

  144. Dave_Geologist says:

    Like Willard, I’m frustrated with Tom (and appreciate he may have left this thread but I’m playing catch-up):

    You’ve been around the climate debate for, what, well over a decade? And you still haven’t learned the basics of how GHG are supposed to work, enough to critically analyze your own claims?

    Or perhaps enlightened would be a better word (not better for your argument or POV Tom, because that enlightenment tells me not to take you seriously, even when it appears you’re being serious).

    Your interplay with dikran and ATTP reveals a depth of scientific ignorance I had not previously attributed to you. Maybe I should revise my opinion of your book on the basis that you were too ignorant to know what you were doing? Nah, ignorance is no excuse. That’s why we have reckless endangerment laws (quaintly titled, in Scotland, “Culpable and reckless conduct”, with an even better subtitle, “Reckless endangerment of the lieges”). I presume the US equivalent uses something less medieval than “lieges” 😉 .

    The case of Robson v Spiers establishes the foreseeability of potential danger or injury to the public by the accused's course of action. Again there need not be any physical injury to a person, only the need to demonstrate possible endangerment to the public.

    IOW if you shout “Fire” in a crowded theatre, there damn well better be a fire.

    And your argument doesn’t “lose a little force” post-2000. It disappears down the plughole, has already passed through the sewage farm, and has almost reached the ocean.

    However, the crucial flaw around the paws (which I'm sure has been covered here before) is that to claim a pause, as opposed to merely claiming that there is maybe a 5% chance or so of a pause but that in reality there's probably no pause, you have to demonstrate that the P5 slope is less than or equal to zero. You can't even do that for the P50.

    At least if you hold your own theory to the same standards you hold the mainstream one (perhaps we need another F, for False Equivalence?). Demonstrating that the existing theory has a "fatal flaw", even if you had (which you hadn't), does not give you a free pass to treat your alternative as scientifically valid; not when the best you can say for it, applying the same rigour, is that there's maybe a 5% chance it explains reality, but a 95% chance that it doesn't. Now that's what I call a theory with a fatal flaw!
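
    For anyone who wants to see what that sort of slope test amounts to in practice, here is a rough sketch on a synthetic warming-plus-noise series (not any real dataset): fit an OLS trend to a short window and look at the percentiles of the estimated slope. With a window this short, the interval typically straddles zero, which is a wide uncertainty range, not a demonstrated pause.

        # Rough sketch: OLS trend over a short window and percentiles of the slope.
        # The series is synthetic warming plus noise, not real observations.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        years = np.arange(1999, 2014)               # a short, "pause"-length window
        temps = 0.02 * (years - years[0]) + rng.normal(0, 0.1, years.size)

        X = np.column_stack([np.ones(years.size), years - years.mean()])
        beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
        resid = temps - X @ beta
        dof = years.size - 2
        se_slope = np.sqrt(resid @ resid / dof / np.sum(X[:, 1] ** 2))

        p5, p50, p95 = (beta[1] + se_slope * stats.t.ppf(q, dof) for q in (0.05, 0.5, 0.95))
        print(f"slope (C/yr): P5={p5:+.4f}  P50={p50:+.4f}  P95={p95:+.4f}")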

  145. Brandon R. Gates says:

    I can’t help but beat teh Paws zombie to death a bit more. I have a multiple regression model which allows me to quantify the contributions of various climate indicators to observed temperature trends, namely CO2, solar irradiance, ENSO, AMO, aerosol optical depth (volcanoes) and length of day anomaly. Defining the pause interval as Jan. 1999 through Jan. 2013, and using HADCRUT5 for observations, I obtain the following trends (in C/century):

    HADCRUT5 trend: 1.2
    CO2 component: 1.7
    Non-CO2 component: -0.4
    Residual: 0.1

    About half of the non-CO2 component is ENSO going from a big El Nino to a moderate La Nina (with some wiggles in between), the other half due to a decline in solar irradiance. It should surprise nobody that the model is ridiculously sensitive to end-points; when I run the analysis forward to Jan. 2021 to include the El Nino events of 2017 and 2020, the discrepancy between observation and CO2-only prediction all but disappears.

    When I run my regression model over all the years for which I have data for all contributors (1880-2021) I get a value of 2.2 C per doubling of CO2 with an R^2 value of 0.97. Using CO2 only over the same interval gives 2.5 C per doubling and R^2 of 0.90.

    Clearly natural variability cannot be ignored even over century timescales. Nor need it be a mystery attributed solely to random chance, as just a handful of quantifiable climate factors go a long way toward explaining deviations from a CO2-only temperature prediction.
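
    For what it's worth, a bare-bones sketch of that kind of multiple regression is below. The predictor series are random placeholders standing in for ln(CO2), ENSO, solar irradiance and the rest; Brandon hasn't posted his code or data, so nothing here should be read as his actual model, only the general recipe.

        # Bare-bones multiple regression of temperature anomalies on climate indicators.
        # All series below are synthetic placeholders, not real data.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 140                                    # e.g. annual values, 1880 onwards

        ln_co2 = np.log(np.linspace(290, 410, n))  # stand-in for CO2 concentration
        enso = rng.normal(0, 1, n)                 # stand-in for an ENSO index
        tsi = rng.normal(0, 0.5, n)                # stand-in for solar irradiance
        temps = (3.0 / np.log(2)) * (ln_co2 - ln_co2[0]) \
                + 0.1 * enso + 0.05 * tsi + rng.normal(0, 0.1, n)

        X = np.column_stack([np.ones(n), ln_co2, enso, tsi])   # design matrix
        coefs, *_ = np.linalg.lstsq(X, temps, rcond=None)

        fitted = X @ coefs
        r2 = 1 - np.sum((temps - fitted) ** 2) / np.sum((temps - temps.mean()) ** 2)
        sensitivity = coefs[1] * np.log(2)         # C per doubling implied by the fit
        print(f"implied sensitivity ~ {sensitivity:.2f} C/doubling, R^2 = {r2:.2f}")

    A "CO2-only" comparison is just the same fit with fewer columns in the design matrix.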

  146. smallbluemike says:

    izen says "If it has a goal, it is the appearance of having meaningful conversation, rather than actually having a meaningful conversation."

    I think that is too cynical for my taste. I think most of us often have a goal when we engage in meaningful conversation. We engage over things that are of interest to us, and part of that interest is likely rooted in goals. I think it is entirely possible for many or most of us to be swayed to reconsider our positions and goals as a result of a meaningful conversation, even if we entered into the conversation with well-established goals.

    I think if you look at certain folks who keep presenting badly flawed arguments and never seem to absorb new information or learn where they are making mistakes, then you are likely dealing with a person who is engaging in bad faith, or with a person whose motivated reasoning overwhelms their ability to engage in critical analysis, or with a person who gets some peculiar thrill out of truly engaging in the appearance of having a meaningful conversation and tying up the time of others with endless back and forth where many of the FLICC items will be on display. Those folks are a complete waste of time imo. It is rare that I read their comments or engage with them. It seems impossible to ignore the sheer bulk of refutation that these folks can generate. That’s a special skill. Willard may have a label from climateball or somesuch that applies to these folks. Maybe they are just hardcore climateballers?

    No need to throw out the neonate with the cleaning solution. I think just ignore the usual suspects. To some extent, I think these folks poison the well for meaningful conversation because of the frustration that they produce in folks who are posting in good faith. I think we neutralize them best by simply dismissing their nonsense quickly as nonsense and moving on, but that’s just my approach. If you enjoy the endless back and forth, carry on. Whatever maintains the sea-worthiness of your marine vessel.

  147. Willard says:

    > F for False Equivalence

    False equivalence rests on an equivocation, and so goes under Ambiguity. There’s an ongoing exchange in the Decoding the Guru subreddit that illustrates how equivocation can work in various ways:

    [CLOV] Democratic socialism is an economic model that’s never worked anywhere. The vast majority of those who call themselves “democratic socialists” don’t even know what the word means and confuse it with “social democracy” by pointing to the Scandinavian countries or Europe. This is part of the problem with the left I’m talking about. It’s become this misinformed cult that insists on the right to misuse words.

    [HAMM] This is gibberish.

    [CLOV] In what specific way?

    [HAMM] Democratic socialism is not an economic model, some part of still works (think Sovereign funds), who cares about a “vast majority” that only exists figuratively speaking, and there’s no “part” of any “problem” that matters here. You’re just ranting. Focus.

    [CLOV] Democratic socialism is in fact an economic model as it literally means “socialism brought about through a democratic process”. This is a fact. Sovereign funds has nothing to do with democratic socialism. Again part of the problem on the left [*continues to punch hippies*].

    [HAMM] A political philosophy ain’t no economic model, bro.

    [CLOV] Democratic socialism ain’t no political philosophy. It literally means “socialism brought about through a democratic process.” Socialism is an economic model. I’ve done my homework, you haven’t

    [HAMM] Thy Wiki starts thus: “Democratic socialism is a political philosophy.”

    [CLOV] “that supports political democracy and some form of a socially owned economy”

    [HAMM] Yes. A political philosophy. And some form of socially owned economy. What does “some form” mean in that context?

    [CLOV] Some form means either the government, the community or workers directly control the means of production.

    [The exchange goes on and on, and then Hamm decides to claim that he never mentioned government intervention.]

    [HAMM] So no government intervention. At all.

    [CLOV] My original point had nothing to do with government intervention.

    [HAMM] You did mention government control over the means of production.

    [CLOV] You claimed my argument about democratic socialism meant government intervention.

    [HAMM] Funny you don’t quote me.

    [CLOV] You said “democratic socialism is defined as a political philosophy.”

    [HAMM] That quote shows I’m not saying that your argument about democratic socialism meant government intervention.

    [CLOV] I never claimed democratic socialism was [inaudible] government intervention.

    ***

    The example shows how equivocation can work as a strawman, a motte-and-bailey, a false equivalence, a way to affirm the consequent or deny the antecedent, and all kinds of implicature failures.

    When I first started, I thought that words written forever on the Internet meant people would check back on what they said. Climateball proved me wrong. People seldom read, even what they themselves wrote.

  148. izen says:

    @-sbm
    “No need to throw out the neonate with the cleaning solution. I think just ignore the usual suspects. ”

    Wise advice.
    My cynicism derives from spending too long, since 2001, arguing on right-wing/religious forums about everything from WMD and evolution to AGW.

    I also spent over forty years asking people what problems they had with particular aspects of their physical state, and trying to determine what unrecognised problems they had, at which point I would endeavour to correct those problems, or advise on improved self-care.
    Clear descriptions of problems, and acceptance of advice, were both infrequent.

    Most general conversations seem to me to amount to
    "I'm fine/not fine. How are you?"
    with occasional variations: can you help me / can I help you.
    Retirement has left me cynical, but happy with such exchanges.
    This forum is one of the few places where real content is exchanged.
    Sometimes inadvertently.

  149. dikranmarsupial says:

    @smallbluemike "I think that is too cynical for my taste." It isn't cynicism, it is just all that GPT-3 is technically capable of. "I think most of us often have a goal when we engage in meaningful conversation." To do that, you need an AI that actually understands the content of what it says, rather than just the statistics (a language model).
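
    A toy illustration of what "just the statistics" means, at a vastly smaller scale than GPT-3: a bigram model that generates text purely from word-pair counts, with no model of meaning anywhere. The training sentence is, of course, just a placeholder.

        # Toy bigram "language model": generates text purely from word-pair counts.
        import random
        from collections import defaultdict

        random.seed(0)
        text = ("the uncertainties are epistemic and the uncertainties are aleatory "
                "and the interests are relevant to what we do about the uncertainties")
        words = text.split()

        following = defaultdict(list)               # which word follows which
        for w, nxt in zip(words, words[1:]):
            following[w].append(nxt)

        word, output = "the", ["the"]
        for _ in range(15):
            nexts = following[word]
            word = random.choice(nexts) if nexts else random.choice(words)
            output.append(word)

        print(" ".join(output))
        # The output looks vaguely sentence-like, but nothing here understands anything.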

    @izen "I must admit to being less than convinced by Kahneman's fast/slow thinking concept". We do have cognitive skills that do not require conscious deliberation, such as walking, face recognition and speech recognition. We do those without thinking; that is Kahneman's fast thinking, and it undoubtedly exists. There are other tasks, such as working out square roots with pen and paper, that require us to actively think and to understand, rather than just recognise something familiar. That is "slow thinking". Both clearly exist, but I suspect there are processes where the distinction is not very clear.

  150. izen says:

    @-D
    “Both clearly exist, but I suspect there are processes where the distinction is not very clear. ”

    One example might be musical ability. It takes many hours, days and years of practice to gain the physical ability to play a keyboard or a guitar fretboard. But once that ability is learnt, it can be used to improvise, in real time, a line of harmony or melody that is the product of deep knowledge of music theory. The two interact: no amount of understanding of music theory will enable a person to play, but playing technique without a grasp of the musical structure leads to a content-free performance.
    A Shakespeare quote seems appropriate –

    Ham) I do not well understand that. Will you play upon this pipe?
    Guil) My lord, I cannot
    Ham) I pray you
    Guil) Believe me, I cannot.
    Ham) I do beseech you.
    Guil) I know no touch of it, my lord.
    Ham) Tis as easy as lying; govern these ventages with your fingers and thumb, give it breath with your mouth, and it will discourse most eloquent music. Look you, these are the stops.
    Guil) But these cannot I command to any utterance of harmony, I have not the skill.
    Ham) Why, look you now, how unworthy a thing you make of me! You would play upon me; you would seem to know my stops. you would pluck out the heart of my mystery; you would sound me from my lowest note to the top of my compass; and there is much music, excellent voice, in this little organ; yet cannot you make it speak. Sblood, do you think I am easier to be played on than a pipe? Call me what instrument you will, though you fret me, yet you cannot play upon me.

  151. dikranmarsupial says:

    Izen – nice example – I think I’m closer to the Guildenstern end of the spectrum; I can play, but I don’t have any musicality in my playing (can’t really improvise).

    Programming (and to a lesser extent typing) is another: some parts of it are "fast thinking" – we have the muscle memory used to touch-type, and we have our inner language model, which means we can write a for loop without really having to think about it. However, we are also working our "slow thinking" quite hard in thinking about the algorithm and expressing it well in the code. This is one of the reasons I don't use tools (like autocompletion) to code faster; it just means I have less time to really think about the code. Tools often just help you to produce bad code more quickly, so again "fast thinking" can be deeply sub-optimal ;o)

  152. Bob Loblaw says:

    Izen and Dikran – you’re touching on nerves here.

    Amateur musician in my high-school era. Excellent ear, OK technique (violin and euphonium), but I did not have a future as a musician. I remember a MASH episode of that era in which Captain Winchester was trying to help a musician who had lost a hand/arm in battle, and was encouraging him that he still had a future in music as a conductor. Winchester said "I can play the notes, but I can't make the music!" Different levels of talent.

    I'm with Dikran on coding/writing etc. on computers. Autocompletion is a pain in the @$$ – it just keeps getting in the way when my non-touch-typist fingers start down a wrong path, and the brains that coded the autocompletion make the wrong choices about where my brain was going. I'd spend more time telling autocompletion to ^&$# off than I would save by having it occasionally guess correctly.

    Gut feeling comes in handy at times. It is not a substitute where complex analysis is involved – and a complex analysis usually requires a level of understanding that was not developed as a sequence of gut instincts.

  153. Pingback: FAIL Better | …and Then There's Physics
