2022: A year in review

I normally try to write a summary of some of the blog posts I’ve written during the year, and would typically highlight a few posts from each month. However, I’ve written so little this year that I thought I would just highlight a few posts that I either thought were interesting, or that generated a fair amount of interest. I’ve also spent most of the day cooking, as we have guests coming for New Year’s Eve. I’m already tired and the night is still young.

In January I highlighted a paper on the tragedy of climate change science, which I thought was interesting but wrong. I didn’t write much of interest in February, but I did write a post in March about ignoring the economists, which generated some interesting discussion, with some unsurprising comments.

In April I tried to explain the Greenhouse Effect again. Pointless if you’re trying to convince those who think it doesn’t exist, but still an interesting thing to try and explain. In May I wrote about the science-society interface, and about a paper on World Atmospheric CO2 to which we’d written a response.

June had quite an active post about the hot-model problem, while July had a post about limits to growth. August had an active post on the importance of science communication and September had a post about the role of mathematical modelling.

October had a post about a cherry-picked analysis not demonstrating that we’re not in a climate crisis, which I had written for Skeptical Science and then cross-posted here. In November I discussed a podcast that had Andy Revkin and Bjorn Lomborg as guests, while December saw the unfortunate announcement of Victor Venema’s death.

Although I didn’t write many posts this year, the comment threads were more active than I had appreciated. Thanks to all of those who contributed. A happy New Year and a good 2023 to everyone. Now I need to have a short break before our guests arrive.


42 Responses to 2022: A year in review

  1. Thanks ATTP. Happy Holidays and Happy New Year. Your site is on my short list of sites to visit, perhaps because I get to go off the rails a little too often. 🙂

  2. russellseitz says:

    May your happy new year include a resolution to update your blog links, which alas still connect Rachel Squirrel readers to a poker game in Malaysia, and Stoat fanciers to the late lamented Science Blogs site.

  3. angech says:

    Happy New Year all especially ATTP.

  4. dikranmarsupial says:

    Happy new year! “I’ve written so little this year” quality is more important than quantity, and there has been no problem there!

  5. Chubbs says:

    Happy New Year. Second Dikran’s comment. The big story of 2022 was the Ukraine invasion, which gave a little kick to the move away from fossil fuels.

  6. Ben McMillan says:

    Thanks so much to ATTP and Willard, not just for the interesting posts, but the thankless task of curating the comments section. Long may the independent internet live on, in one form or another…

  7. russellseitz says:

    Hear, hear!

  8. Happy new year, and thanks for all you do!

  9. Tom,
    Yes, I had seen that. There are numerous issues with academic publishing, including the whole peer review process. However, I do tend to think of peer review as the worst possible process, apart from all others.

    I also don’t really see why we can’t have a mixed model. If people want to submit their papers as PDFs to a site that doesn’t do peer review, that’s fine. If you want to submit to a site that does, that’s also fine.

    We should, though, probably recognise that peer-review isn’t really an audit, it’s mostly a kind of sanity check. Are there any obvious issues with the paper? Does it explain things clearly and provide a suitable context? Do the conclusions follow from the analysis? etc.

  10. dikranmarsupial says:

    It’s funny, mostly I see complaints about the failure of peer review from those who fail to get their papers through peer review. ;o)

    With the profusion of papers about AI at the moment, we can’t do without some form of sanity check, the main problem with peer review is finding competent peers (usually you don’t want a peer – you want someone at the next tier above, but pyramids being pyramids, that isn’t what we often get).

    Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

    It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch.

    I’ll stop reading here, this is just silly. It isn’t there to prevent all errors. The main purpose of peer review is to improve the papers that are published. The reviewer is your friend, giving you their time and expertise for free – a service that you couldn’t afford if you had to pay for it (I borrowed that from a paper on peer review, but I can’t remember the citation). Weeding out the most obvious nonsense is not the primary goal. It amazes me that so many people think it is just about rejecting bad papers.

  11. dikranmarsupial says:

    rats messed up the quotes as usual – if only blog comments had pre-publication review ;o)

  12. From Tom’s link …

    “This was one of the first things I learned as a young psychologist, working under Jordan Peterson”

    Well I added the last part 😀 , but that right there explains quite a lot, not that STEM is immune from the very same issues.

    I will continue reading and summarize my own views, best known as … wait for it … any moment now … please be patient …

    Sturgeon’s law from teh wiki … somewhat rephrased as …

    “90% of everything is crap. That is true, whether you are talking about physics, chemistry, evolutionary psychology, sociology, medicine – you name it – rock music, country western. 90% of everything is crap.”

    I did see quite a few poorly peer-reviewed articles in my day though. Internal reviews were even sillier, much sillier.

  13. “You can’t land on the moon using Aristotle’s physics,”

    No, but we can, and did, land on the Moon with Newton’s Laws of physics. So there!

    But we can’t, and don’t, do GPS with Newton’s Laws! So there!

    But relativity isn’t, and never will be, the end of new discoveries! So there!

    Dum, Dum, Dum Dum, Dum, …

    https://en.wikipedia.org/wiki/All_About_Momons

    Oops, misspelled Mormons above! 😀

  14. Dum, dum, dum, dum, dum, that was supposed to be …

    https://en.wikipedia.org/wiki/All_About_Morons

    :/

  15. After you finish reading Tom’s link from our good friend at the Scientific Publishing Complaints Department you just must read the follow-up …

    https://experimentalhistory.substack.com/p/the-dance-of-the-naked-emperors

    In which such butt nuggets as this appears …

    “That’s also why I’m not worried about an onslaught of terrible papers—we’ve already got an onslaught of terrible papers.”

    But mostly non-peer reviewed papers/books from a new breed of deniers, these are so bad they even have their own websites, like Curry Fruitcakes, Etc. and WTFUWT?

    It is better to have absolutely no training whatsoever, that way you can read real science papers and textbooks even, and get things that are not even wrong! We even have fake predatory pay-to-publish PEER REVIEWED journals; it can’t possibly get any better than that now, could it? :/

    Someone here might want to write a more serious version of my trash talking? But I momentarily can’t possibly think of who I am thinking of!

  16. Curry’s Fruitcakes, Etc. indeed,

    I am entirely harmless, do not, nor will I ever own a … and I do not fear for my life by people with …. jeez, I really do not know where to begin.

  17. Joshua says:

    Dikran –

    Yes, I picked up on the exact same part of the article and wrote a comment although I can’t find it now.

    The question that the article fails to address is whether there would be more errors without peer review. It’s such an obvious question that it’s kind of remarkable that the article has received so much traction from heterodox types even though it wasn’t addressed.

    Although not the least bit surprising (nor surprising that Tom thinks the article merits attention).

    Peer review is obviously sub-optimal. But I think that although most people who have submitted many articles for peer review have run into requests for revision they think are inane, most also know that the process has sometimes led to improvements.

    It’s sad to see so much simplistic, binary thinking.

  18. Joshua says:

    It’s unfortunate that it’s so hard to find a better brand of contrarians.

  19. Joshua says:

    In stark contrast, I offer the following – a course description from one of the bloggers at Andrew Gelman’s blog, where the course focuses on the related topic of reproducibility. There’s an extensive reading list that looks really interesting… I hope to look at some of the less technical material.

    https://statmodeling.stat.columbia.edu/2023/01/03/explanation-and-reproducibility-in-data-driven-science-new-course/#comments

  20. dikranmarsupial says:

    Joshua “The question that the article fails to address is whether there would be more errors without peer review.”

    Exactly! While I have published papers with errors in them, none of the errors have been due to the reviewers – they have been mine.

    “It’s such an obvious question that it’s kind of remarkable that the article has received so much traction from heterodox types even though it wasn’t addressed.”

    I wonder if any of the “heterodox types” noticed that the author of the article has only published four papers?

  21. It’s clear that peer-review is meant to simply be a review, not an audit and not some kind of detailed re-working of what is in the paper. I once reviewed a paper that I ended up doing so much work on, and that the authors changed so much as a result of that, that I ended up feeling that I should probably have been an author, not a reviewer 🙂

  22. Joshua says:

    Since the article has an audio option, I tried listening for a bit….(couldn’t hang in very long)

    It’s interesting that the author talks about research that has been conducted where the researchers deliberately inserted obvious and critical errors into papers and then submitted the papers to review and only 25% of the errors were caught.

    I haven’t looked at that research and I’m dubious that in reality only 25% of critical and obvious errors are caught in the rap world. But I will own that is a bias on my part.

    But two funny things about that. First, even if the research is an accurate representation of reality (would the failures be concentrated in some fields as compared to others?), then contrary to the conclusion that he draws from that research – that it shows peer review being a total failure – the research shows that peer review has a significantly beneficial net effect.

    The second is that he’s referring to peer reviewed literature to provide evidence that peer review is a failure.

  23. Joshua says:

    In the rap world AND in the real world.

  24. dikranmarsupial says:

    “It’s interesting that the author talks about research that has been conducted where the researchers deliberately inserted obvious and critical errors into papers and then submitted the papers to review “

    It is also interesting that it passed the ethics panel. As a reviewer, I would not take kindly to my time being wasted in that manner.

  25. Joshua says:

    I actually looked at two of the papers he cited. Without going into detail, let’s just say his use of them as references for his point is highly questionable.

  26. Willard says:

    Thanks for the reminder, J:

    Unfortunately, while we agree that there are more false claims than many would suspect—based both on poor study design, misinterpretation of p-values, and perhaps analytic manipulation—the mathematical argument in the PLoS Medicine paper underlying the “proof” of the title’s claim has a degree of circularity. As we show in detail in a separately published paper [2], Dr. Ioannidis utilizes a mathematical model that severely diminishes the evidential value of studies—even meta-analyses—such that none can produce more than modest evidence against the null hypothesis, and most are far weaker. This is why, in the offered “proof,” the only study types that achieve a posterior probability of 50% or more (large RCTs [randomized controlled trials] and meta-analysis of RCTs) are those to which a prior probability of 50% or more are assigned. So the model employed cannot be considered a proof that most published claims are untrue, but is rather a claim that no study or combination of studies can ever provide convincing evidence.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1855693/

  27. An obvious issue is that most peer-reviewers review papers with the assumption that the authors have not intentionally inserted obvious, and critical, errors. In a sense, it’s a good faith exercise and intentionally inserting errors would seem to violate that norm.

  28. Willard says:

    An audit study would be possible. The biggest ethical concern might be deception:

    Audit experiments examining the responsiveness of public officials have become an increasingly popular tool used by political scientists. While these studies have brought significant insight into how public officials respond to different types of constituents, particularly those from minority and disadvantaged backgrounds, audit studies have also been controversial due to their frequent use of deception. Scholars have justified the use of deception by arguing that the benefits of audit studies ultimately outweigh the costs of deceptive practices. Do all audit experiments require the use of deception? This article reviews audit study designs differing in their amount of deception. It then discusses the organizational and logistical challenges of a UK study design where all letters were solicited from MPs’ actual constituents (so-called confederates) and reflected those constituents’ genuine opinions. We call on researchers to avoid deception, unless necessary, and engage in ethical design innovation of their audit experiments, on ethics review boards to raise the level of justification of needed studies involving fake identities and misrepresentation, and on journal editors and reviewers to require researchers to justify in detail which forms of deception were unavoidable.

    https://journals.sagepub.com/doi/10.1177/14789299211037865

    Quality Assurance is a hard problem.

  29. russellseitz says:

    Willard and Everett:

    AI guru Marvin Minsky was an old friend and fan of Sturgeon’s.
    When, in the course of a panel discussion on the future of science, Ted said 90% of published science papers were crap, Minsky responded with a corollary:

    “And so are 95% of the remainder.”

    Even if this is true, and only 1 paper in 200 has a real impact on its field, there is still plenty of great stuff to read, as tens of thousands of peer reviewed journals publish roughly a million papers a year.

    The bad news is that the torrential flow of science also means that not a week passes without tens of mind-bogglingly bad papers ending up as press releases and popular articles.

  30. Joshua says:

    Willard –

    Yes, I was revisiting that paper from Goodman and Greenland via the post at Andrew’s….which gets even more interesting given my recent interactions with Sander and ivermectin and his praise of Alexandros, and then there’s this….

    https://jech.bmj.com/content/75/11/1031

    Where Sander is mentioned in the dedication…and then there’s this:

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4125239

    Is all so weird.

  31. dikranmarsupial says:

    “The bad news is that the torrential flow of science also means that not a week passes without tens of mind-bogglingly bad papers ending up as press releases and popular articles.”

    Papers shouldn’t have press releases. Getting past peer review is the first step to acceptance by the research community, not the last. Its real value is not apparent in most cases until years later; unfortunately, by then it is no longer news.

  32. Joshua says:

    Is there a higher ratio of bad scientific papers to good scientific papers than there used to be?

    I have yet to see evidence that there is.

    In which case maybe more bad papers are paralleled by even more good papers.

    Maybe we see more papers because we’re better at identifying bad science.

    Even if there is a higher percentage of bad papers, maybe the diminishing % of good papers still provide a net benefit in the end.

    It’s funny that the set of people who see great harm from bad papers overlaps with the set of people who argue for a positive trajectory in metrics like better agriculture leading to a lower % of humans starving.

    So why is peer review a failure and why is there a replication “crisis”?

    Seems to me some folks tend to get stuck in binary thinking and tend to confuse sub-optimal with bad.

    And who think there’s a way to avoid unintended consequences.

  33. Joshua says:

    … Maybe we see more BAD papers because we’re better at identifying bad science.

  34. Remember the author seems to claim that all papers are bad, or some such, due to a so-called peer review experiment.

    And that it was so different from pre-WWII publications because history is totally static and never changes and post-WWII was exactly the same as pre-WWI or even the exact same thing going all the way back to the so-called invention of the so-called printing press. Sarcasm

    I will also add, did this so-called author do proper sampling of their online publication? Did they release all data, viz feedback? Was a sampling questionnaire also included to determine, say, likability with heterodox types and homodox 😀 types, for example? Were non-scientific types a higher percentage of readers than for normal peer reviewed publications?

    Release the DATA!!! A-I-E-E-E-E-E-E-E-E-E-E-E-E …

  35. Willard says:

    You want weird, J? Here’s weird:

    Eric B Rasmusen on September 16, 2020 9:10 AM at 9:10 am said:

    Asher is right. Samuelson and Nordhaus were making a political statement, in an effort to sound sagely moderate between the extremes of marxism and capitalism, which was unscholarly, since the truth often *is* one extreme or the other. In everyday life, if half the newspapers says 2+2=4 and half say 2+2=5, most people think saying 2+2=4.5 is the praiseworthy, moderate, position. But their “thriving” statement, while conveying falsehood and blameworthy, has a correct interpretation too. The Soviet Union was much richer in 1989 than it was in 1919. It had industrialized. It had radios and TVs and airliners. It showed an important fact that economists should emphasize: no matter how bad your economic policies, it’s really hard to keep technical change and regular investment from making your country richer over time. You may only get half the growth of your neighbors, but it’s still growth.

    https://statmodeling.stat.columbia.edu/2020/09/13/2-economics-nobel-prizes-1-error/#comment-1482254

    Yeah, that Eric B Rasmusen.

  36. Joshua says:

    Interesting thread. I’m really not sure what to make of it. I like that Matt Skaggs made an appearance.

  37. Yes, I found that post somewhat confusing. It’s not clear that the claim was as obviously wrong as Gelman seemed to imply.

  38. Willard says:

    I hate that “2+2=5” meme with all my heart:

    Facts-and-logic bros are mostly in it for the SpeedoScience.

  39. Joshua says:

    Anders –

    > Yes,I found that post somewhat confusing. It’s not clear that the claim was as obviously wrong as Gelman seemed to imply.

    Right. And a number of the commenters got into that. But Andrew’s comments further down in the thread helped to clarify his point – which wasn’t apparent to me from just reading his OP.

  40. Joshua,
    Okay, I’d missed that comment. That makes a bit more sense.

  41. russellseitz says:

    J: “Maybe we see more BAD papers because we’re better at identifying bad science.”

    It might correlate with the rise of paid athletes and advertising by professions in general.
