I normally try to write a summary of some of the blog posts I’ve written during the year, and would typically highlight a few posts from each month. However, I’ve written so little this year that I thought I would just highlight a few posts that I either thought were interesting, or that generated a fair amount of interest. I’ve also spent most of the day cooking, as we have guests coming for New Year’s Eve. I’m already tired and the night is still young.
In January I highlighted a paper on the tragedy of climate change science, which I thought was interesting but wrong. I didn’t write much of interest in February, but I did write a post in March about ignoring the economists, which generated some interesting discussion, with some unsurprising comments.
In April I tried to explain the Greenhouse Effect again. Pointless if you’re trying to convince those who insist it doesn’t exist, but still an interesting thing to try to explain. In May I wrote about the science-society interface, and about a paper on World Atmospheric CO2 to which we’d written a response.
June had quite an active post about the hot-model problem, while July had a post about limits to growth. August had an active post on the importance of science communication and September had a post about the role of mathematical modelling.
October had a post about a cherry-picked analysis not demonstrating that we’re not in a climate crisis, which I had written for Skeptical Science and then cross-posted here. In November I discussed a podcast that had Andy Revkin and Bjorn Lomborg as guests, while December saw the unfortunate announcement of Victor Venema’s death.
Although I didn’t write many posts this year, the comment threads were more active than I had appreciated. Thanks to all of those who contributed. A happy New Year and a good 2023 to everyone. Now I need to have a short break before our guests arrive.
Thanks ATTP. Happy Holidays and Happy New Year. Your site is on my short list of sites to visit, perhaps because I get to go off the rails a little too often. 🙂
May your happy new year include a resolution to update your blog links, which alas still connect Rachel Squirrel readers to a poker game in Malaysia, and Stoat fanciers to the late lamented Science Blogs site.
Happy New Year all especially ATTP.
Happy new year! “I’ve written so little this year” quality is more important than quantity, and there has been no problem there!
Happy New Year. Second Dikran’s comment. The big story of 2022 was the Ukraine invasion, which gave a little kick to the move away from fossil fuels.
Thanks so much to ATTP and Willard, not just for the interesting posts, but the thankless task of curating the comments section. Long may the independent internet live on, in one form or another…
Hear, hear!
Happy new year, and thanks for all you do!
May be worth a look for some…
https://experimentalhistory.substack.com/p/the-rise-and-fall-of-peer-review
Tom,
Yes, I had seen that. There are numerous issues with academic publishing, including the whole peer review process. However, I do tend to think of peer review as the worst possible process, apart from all others.
I also don’t really see why we can’t have a mixed model. If people want to submit their papers as PDFs to a site that doesn’t do peer review, that’s fine. If you want to submit to a site that does, that’s also fine.
We should, though, probably recognise that peer-review isn’t really an audit, it’s mostly a kind of sanity check. Are there any obvious issues with the paper? Does it explain things clearly and provide a suitable context? Do the conclusions follow from the analysis? etc.
It’s funny, mostly I see complaints about the failure of peer review from those who fail to get their papers through peer review. ;o)
With the profusion of papers about AI at the moment, we can’t do without some form of sanity check. The main problem with peer review is finding competent peers (usually you don’t want a peer – you want someone at the next tier above, but pyramids being pyramids, that isn’t often what we get).
Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?
It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch.
I’ll stop reading here, this is just silly. It isn’t there to prevent all errors. The main purpose of peer review is to improve the papers that are published. The reviewer is your friend, giving you their time and expertise for free – a service that you couldn’t afford if you had to pay for it (I borrowed that from a paper on peer review, but I can’t remember the citation). Weeding out the most obvious nonsense is not the primary goal. It amazes me that so many people think it is just about rejecting bad papers.
rats messed up the quotes as usual – if only blog comments had pre-publication review ;o)
From Tom’s link …
“This was one of the first things I learned as a young psychologist, working under Jordan Peterson”
Well, I added the last part 😀, but that right there explains quite a lot, not that STEM is immune from the very same issues.
I will continue reading and summarize my own views, best known as … wait for it … any moment now … please be patient …
Sturgeon’s law from the wiki … somewhat rephrased as …
“90% of everything is crap. That is true, whether you are talking about physics, chemistry, evolutionary psychology, sociology, medicine – you name it – rock music, country western. 90% of everything is crap.”
I did see quite a few poorly peer-reviewed articles in my day, though. Internal reviews were even sillier, much sillier.
“You can’t land on the moon using Aristotle’s physics,”
No, but we can, and did, land on the Moon with Newton’s Laws of physics. So there!
But we can’t, and don’t, do GPS with Newton’s Laws! So there!
But relativity isn’t, and never will be, the end of new discoveries! So there!
Dum, Dum, Dum Dum, Dum, …
https://en.wikipedia.org/wiki/All_About_Momons
Oops, misspelled Mormons above! 😀
Dum, dum, dum, dum, dum, that was supposed to be …
https://en.wikipedia.org/wiki/All_About_Morons
After you finish reading Tom’s link from our good friend at the Scientific Publishing Complaints Department, you just must read the follow-up …
https://experimentalhistory.substack.com/p/the-dance-of-the-naked-emperors
In which such butt nuggets as this appear …
“That’s also why I’m not worried about an onslaught of terrible papers—we’ve already got an onslaught of terrible papers.”
But mostly non-peer-reviewed papers/books from a new breed of deniers; these are so bad they even have their own websites, like Curry Fruitcakes, Etc. and WTFUWT?
It is better to have absolutely no training whatsoever; that way you can read real science papers, and textbooks even, and get things that are not even wrong! We even have fake predatory pay-to-publish PEER REVIEWED journals – it can’t possibly get any better than that now, could it?
Someone here might want to write a more serious version of my trash talking? But I momentarily can’t possibly think of who I am thinking of!
Curry’s Fruitcakes, Etc. indeed,
I am entirely harmless, do not, nor will I ever own a … and I do not fear for my life by people with …. jeez, I really do not know where to begin.
Dikran –
Yes, I picked up on the exact same part of the article and wrote a comment although I can’t find it now.
The question that the article fails to address is whether there would be more errors without peer review. It’s such an obvious question that it’s kind of remarkable that the article has received so much traction from heterodox types even though it wasn’t addressed.
Although not the least bit surprising (nor surprising that Tom thinks the article merits attention).
Peer review is obviously sub-optimal. But I think that most people who have submitted many articles for peer review have run into requests for revision they think are inane, and also know that the process has sometimes led to improvements.
It’s sad to see so much simplistic, binary thinking.
It’s unfortunate that it’s so hard to find a better brand of contrarians.
In stark contrast, I offer the following – a course description from one of the bloggers at Andrew Gelman’s blog, where the course focuses on the related topic of reproducibility. There’s an extensive reading list that looks really interesting… I hope to look at some of the less technical material.
https://statmodeling.stat.columbia.edu/2023/01/03/explanation-and-reproducibility-in-data-driven-science-new-course/#comments
Joshua “The question that the article fails to address is whether there would be more errors without peer review.”
Exactly! While I have published papers with errors in them, none of the errors have been due to the reviewers – they have been mine.
“It’s such an obvious question that it’s kind of remarkable that the article has received so much traction from heterodox types even though it wasn’t addressed.”
I wonder if any of the “heterodox types” noticed that the author of the article has only published four papers?
It’s clear that peer review is meant to simply be a review, not an audit and not some kind of detailed re-working of what is in the paper. I once reviewed a paper that I ended up doing so much work on, and that the authors changed so much as a result, that I felt I should probably have been an author, not a reviewer 🙂
Since the article has an audio option, I tried listening for a bit….(couldn’t hang in very long)
It’s interesting that the author talks about research that has been conducted where the researchers deliberately inserted obvious and critical errors into papers and then submitted the papers to review and only 25% of the errors were caught.
I haven’t looked at that research and I’m dubious that in reality only 25% of critical and obvious errors are caught in the rap world. But I will own that is a bias on my part.
But two funny things about that. First, even if the research is an accurate representation of reality (would the failures be concentrated in some fields as compared to others?), then contrary to the conclusion that he draws from that research – that it shows peer review being a total failure – the research shows that peer review has a significantly beneficial net effect.
The second is that he’s referring to peer reviewed literature to provide evidence that peer review is a failure.
In the rap world AND in the real world.
“It’s interesting that the author talks about research that has been conducted where the researchers deliberately inserted obvious and critical errors into papers and then submitted the papers to review “
It is also interesting that it passed the ethics panel. As a reviewer, I would not take kindly to my time being wasted in that manner.
I actually looked at two of the papers he cited. Without going into detail, let’s just say his use of them as references for his point is highly questionable.
Thanks for the reminder, J:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1855693/
An obvious issue is that most peer-reviewers review papers with the assumption that the authors have not intentionally inserted obvious, and critical, errors. In a sense, it’s a good faith exercise and intentionally inserting errors would seem to violate that norm.
An audit study would be possible. The biggest ethical concern might be deception:
https://journals.sagepub.com/doi/10.1177/14789299211037865
Quality Assurance is a hard problem.
Willard and Everett:
AI guru Marvin Minsky was an old friend and fan of Sturgeon’s.
When, in the course of a panel discussion on the future of science, Ted said 90% of published science papers were crap, Minsky responded with a corollary:
“And so are 95% of the remainder.”
Even if this is true, and only 1 paper in 200 has a real impact on its field, there is still plenty of great stuff to read, as tens of thousands of peer-reviewed journals publish roughly a million papers a year.
The bad news is that the torrential flow of science also means that not a week passes without tens of mind-bogglingly bad papers ending up as press releases and popular articles.
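For what it’s worth, the Sturgeon/Minsky arithmetic above does pencil out to the stated 1-in-200. A throwaway sketch, treating the 90% and 95% figures as the panelists’ quips rather than measured values, and using the rough million-papers-a-year figure cited above:

```python
# Back-of-the-envelope check of the Sturgeon/Minsky corollary.
crap_fraction = 0.90        # Sturgeon: 90% of everything is crap
remainder_crap = 0.95       # Minsky: and so are 95% of the remainder

impactful = (1 - crap_fraction) * (1 - remainder_crap)
print(round(impactful, 4))  # 0.005, i.e. roughly 1 paper in 200

papers_per_year = 1_000_000  # rough annual output, per the comment above
print(round(papers_per_year * impactful))  # ~5000 papers still worth reading
```

So even on the most cynical reading, the “plenty of great stuff to read” claim survives the arithmetic.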
Willard –
Yes, I was revisiting that paper from Goodman and Greenland via the post at Andrew’s….which gets even more interesting given my recent interactions with Sander and ivermectin and his praise of Alexandros, and then there’s this….
https://jech.bmj.com/content/75/11/1031
Where Sander is mentioned in the dedication…and then there’s this:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4125239
It’s all so weird.
Papers shouldn’t have press releases. Getting past peer review is the first step to acceptance by the research community, not the last. Its real value is not apparent in most cases until years later; unfortunately, by then it is no longer news.
Is there a higher ratio of bad scientific papers to good scientific papers than there used to be?
I have yet to see evidence that there is.
In which case maybe more bad papers are paralleled by even more good papers.
Maybe we see more papers because we’re better at identifying bad science.
Even if there is a higher percentage of bad papers, maybe the diminishing % of good papers still provide a net benefit in the end.
It’s funny that the set of people who see great harm from bad papers overlaps with the set of people who argue for a positive trajectory in metrics like better agriculture leading to a lower % of humans starving.
So why is peer review a failure and why is there a replication “crisis”?
Seems to me some folks tend to get stuck in binary thinking and tend to confuse sub-optimal with bad.
And who think there’s a way to avoid unintended consequences.
… Maybe we see more BAD papers because we’re better at identifying bad science.
Remember the author seems to claim that all papers are bad, or some such, due to a so-called peer review experiment.
And that it was so different from pre-WWII publications, because history is totally static and never changes, and post-WWII was exactly the same as pre-WWII, or even the exact same thing going all the way back to the so-called invention of the so-called printing press. Sarcasm.
I will also add: did this so-called author do proper sampling of their online publication? Did they release all data, viz. feedback? Was a sampling questionnaire also included to determine, say, likability with heterodox types and homodox 😀 types, for example? Were non-scientific types a higher percentage of readers than for normal peer-reviewed publications?
Release the DATA!!! A-I-E-E-E-E-E-E-E-E-E-E-E-E …
You want weird, J? Here’s weird:
https://statmodeling.stat.columbia.edu/2020/09/13/2-economics-nobel-prizes-1-error/#comment-1482254
Yeah, that Eric B Rasmusen.
Interesting thread. I’m really not sure what to make of it. I like that Matt Skaggs made an appearance.
Yes, I found that post somewhat confusing. It’s not clear that the claim was as obviously wrong as Gelman seemed to imply.
I hate that “2+2=5” meme with all my heart:
Facts-and-logic bros are mostly in it for the SpeedoScience.
Anders –
> Yes, I found that post somewhat confusing. It’s not clear that the claim was as obviously wrong as Gelman seemed to imply.
Right. And a number of the commenters got into that. But Andrew’s comments further down in the thread helped to clarify his point – which wasn’t apparent to me from just reading his OP.
Joshua,
Okay, I’d missed that comment. That makes a bit more sense.
J: “Maybe we see more BAD papers because we’re better at identifying bad science.”
It might correlate with the rise of paid athletes and advertising by professions in general.