I’ve been on holiday for a week or so. While I’ve been away there’s been quite a lot of media coverage of the paper that I discussed in this post and that we discussed extensively in this PubPeer thread.
It started with this New Scientist article, then Mother Jones, then the Independent, the Express, and Daily Kos. The main discussion point is that the journal (Scientific Reports) is investigating the publication of this paper and I’m quoted as saying that it should be withdrawn.
Philip Moriarty has a nice post called “Sloppy science, still someone else’s problem”, where he argues strongly for retraction. When a paper is clearly wrong and makes no contribution to the field, why should it remain in the literature, where it could still get cited and would require people to put effort into publishing formal critiques? Others, however, disagreed, partly because even a paper that is wrong can advance the field, and partly because forcing a retraction could play into the hands of those who claim there are scientific gatekeepers.
My norm would be to agree with the latter. Even if something is wrong, people can still learn from it. We would also need to be very careful that journals didn’t retract papers with inconvenient results simply because those who objected to what a paper suggested kicked up enough of a fuss. Furthermore, most research requires assumptions and judgements that may not be universally accepted. How do you decide whether an error is sufficient for retraction, and who gets to make this decision?
However, in the case of the Zharkova et al. paper, the error is completely elementary. It’s something we teach our first-year students. There is no value in debating in the literature something that has been accepted by virtually all physicists/astronomers for a very long time. The community shouldn’t have to commit time and effort to correcting a basic error made by people who really should know better. The ideal would be the authors recognising that they’d blundered and voluntarily withdrawing the paper. Since that seems unlikely, the journal deciding to do so would be the next best thing. I’m not planning to hold my breath, though.
It is interesting to compare the (lack of) response from the journal with the exemplary behaviour of Copernicus relating to the Pattern Recognition in Physics fake-journal debacle. Even if the latter did require a carefully-aimed email to set it off 🙂
“However, in the case of the Zharkova et al. paper, the error is completely elementary. … There is no value in debating in the literature something that has been accepted by virtually all physicists/astronomers for a very long time. ”
A bit like Essenhigh, Humlum et al, Harde, Berry etc? ;o)
I think in this situation the real problem is that I gather the journal doesn’t take comments papers. The usual way science deals with bad papers is by ignoring them. This doesn’t work, though, if they get media attention (for example), in which case comments papers are a handy way of explaining the lesson that we should learn from a bad paper making it through peer review. Unfortunately there are no academic rewards for publishing comments papers, even though they are useful to both the scientific community and society. If they were REF-returnable (or, for instance, every academic was expected/required to publish one every now and again), the post-publication peer review situation would be rather better (and some might know better than to submit their nonsense in the first place ;o).
I’m glad you pointed out the error, well worth the time spent, I hope.
I have exchanged emails with one of the editors of Scientific Reports. They are still – as I understand it – claiming to be looking into it. I may chase them up once I’m back at work.
Well, I’ve learned some things, so time well spent.
Well, as someone who only had 47 comments (I went kinda OCD) in your previous Zharkova post, I believe you have come to the correct position.
ATTP good, me too! Unfortunately one of the things I still haven’t learned is not to expect the authors of papers containing obviously incorrect work to be able to gracefully accept that they are wrong (or at all).
> I’ve been on holiday for a week or so.
It shows. Your titles are prfct.
Dr. Zharkova appears to have left the PubPeer building nine days ago (Comment #182, posted and accepted July 16th, 2019 8:03 PM) and has not commented since AFAIK, leaving with this as a final sentence …
“I apologise that I cannot reply immediately as I am busy doing my duties at the meeting.”
Well, it appears that Dr. Zharkova made a presentation at IUGG 2019 (on the 17th) titled …
Solar Double Dynamo Waves and Their Effects on a Long-Term Solar Activity and Terrestrial Irradiance
The conference started on July 9th and ended on July 17th. I can only speculate (e. g. stop digging) as to Dr. Zharkova’s absence at PubPeer since.
There are a bunch of these (nutbar) Grand Solar Minimum websites …
Valentina Zharkova’s Critics Should Be Embarrassed
Relevant portion of above begins at ~3:12
Hard to believe she’d make such an elementary error and that it got through peer review. Every physics undergraduate knows better.
For some reason Zharkova is arguing that the JPL DE431mx does NOT include several major planets (Uranus, Neptune and Saturn). AFAIK, the relevant document …
Click to access 196C.pdf
suggests otherwise. Uranus, Neptune and Saturn have to be included for any form of barycenter (e. g. dynamic centroid) calculation at all. Heck, there is even an absolute frame of reference defined and used throughout the JPL calculations.
Cherry picking a quote on some higher order interactive motions is kind of funny though. So, someone should stop digging.
This is REALLY BAD!!!
“This confirmed our suspicion that for the Earth orbit about Sun the JPL ephemeris are calculated considering the effects of only Moon, Mars, Venus, Jupiter, and 300 asteroids and did not include not of the other three large planets Saturn, Neptune and Uranus required to account for SIM (see Appendix 1). ”
So JPL included 300 asteroids because those are s-o-o-o-o-o-o-o-o-o-o important to the barycenter calculation but left out Saturn, Neptune and Uranus. I don’t think so.
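As a quick sanity check on that point, each body’s contribution to the Sun–barycentre offset can be estimated with the crude two-body scaling m·a/M☉. The masses and semi-major axes below are rounded textbook values, and the combined asteroid mass is an assumed ballpark figure, so this is an order-of-magnitude sketch, not the JPL integration:

```python
# Order-of-magnitude sketch: each body displaces the solar-system
# barycentre from the Sun's centre by roughly m * a / M_sun
# (two-body approximation; rounded textbook masses and semi-major axes).
M_SUN = 1.989e30  # kg

bodies = {  # name: (mass in kg, semi-major axis in m)
    "Jupiter": (1.898e27, 7.785e11),
    "Saturn": (5.683e26, 1.433e12),
    "Uranus": (8.681e25, 2.872e12),
    "Neptune": (1.024e26, 4.495e12),
    "asteroid belt (combined)": (2.4e21, 4.0e11),  # assumed ballpark mass
}

offset_km = {name: m * a / M_SUN / 1e3 for name, (m, a) in bodies.items()}
for name, km in sorted(offset_km.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} ~{km:11,.0f} km")
```

Jupiter alone shifts the barycentre by roughly a solar radius (~700,000 km), and Saturn, Uranus and Neptune each by hundreds of thousands of km, while the entire asteroid belt contributes well under a kilometre. Including 300 asteroids but omitting the three outer giants would make no physical sense.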
It is hard to believe, which is why I spent quite a bit of time asking Valentina Zharkova to clarify what was being suggested.
Where does that quote come from?
Starts on page 12 (labeled). Go ahead and read it. It is higher order motions for extended bodies; search for the 1st instance of “extended” on page 5 …
“Because the equations of motion include extended body effects not included in the n-body metric, the center of mass/energy as defined by Equation (4) is not an exact invariant. The position of the center of mass/energy moves with respect to the origin of the coordinate system by less than 1 mm/century, as shown in Figure 2. This motion is near the numerical noise of the stored ephemeris and is small compared with current measurement accuracy.”
I was meaning the quote in this comment.
There are two documents: (1) the nutbar GSM blog post and (2) JPL DE431 (e. g. Horizons) and DE430
Click to access 196C.pdf
(the JPL PubPeer thread would have my post of that link 1st timeline-wise)
Does that help any?
Just a warning. Because I was so OCD on this, I’ve found a plethora of Grand Solar Minimum websites (dozens and dozens and dozens). Same goes for YouTube (dozens and dozens and dozens).
Don’t even try to argue with those Flat Earth types, on their sites or on their terms.
Before the Zharkova paper showed up, I had n-o-o-o-o-o-o-o-o idea at all that this stuff existed.
Thanks, got it now.
Thanks for linking to the “Sloppy Science…” blog post. I guess my core misgiving, beyond absolute boll**ks being published — and I’ve just blogged about the failure of peer review for a paper that makes Zharkova et al.’s work look like the very highest form of scholarship — is the disconnect between experiment and theory.
If we acquire experimental data that are just noise, and then unknowingly misinterpret that noise — here’s an example of what I mean — should that paper remain as a valid contribution to the scientific literature? The experimental data are just noise. Even if a credible theory is postulated to explain the data…the measurements are noise.
If the argument is that we can publish a theory based on the misinterpretation of pure noise, then the feedback loop between experiment and theory is completely bust…
Yes, I agree that that is indeed an issue and I can’t really see the value in papers like that remaining in the literature (well, unless it’s made very clear what was done so that others are aware and, hopefully, avoid doing the same themselves). Ideally, the authors should acknowledge the error and withdraw. What I’m considering is a scenario where the authors refuse. What process do we use to decide that a paper should be retracted when the authors refuse to do so themselves? Maybe, as your most recent post suggests, there could be a post-publication peer-review which could then lead to the retraction of a paper.
If both the authors and the publishers refuse to retract there’s effectively nothing we can do except critique the paper. Post-publication peer review is then essential. Your critique of Zharkova et al. via PubPeer is a great example of the value of PPPR.
@-Everett F Sargent
“I’ve found a plethora of Grand Solar Minimum websites (dozens and dozens and dozens). Same goes for YouTube (dozens and dozens and dozens).”
My skim of the subject found the same, a multitude of sites pushing this same concept that some aspect of the solar inertial motion and the magnetic dynamo effect ‘explains’ climate change along with a smorgasbord of other effects from Earthquakes to Bigfoot sightings.
Zharkova is just one of the more ‘credible scientists’ pushing this notion.
The chances of retraction are zero, and any attempt to push for it just strengthens the belief in those who are adherents of this conspiracy theory.
It is, like the Flat Earth, or Moon hoax conspiracies, an extended narrative that maintains a certain internal consistency while being almost completely divorced from mainstream science, and reality.
As such it is completely impervious to refutation. Any evidence that does not refute it is taken as confirmation; any evidence that does refute it is taken as confirmation that there is a global conspiracy to suppress the ‘TRUTH’.
It is NOT just a mistake or D-K error by the occasional emeritus.
“But the JPL ephemeris…” is an ineffective and inappropriate response.
Don’t mistake the SIM and magnetic dynamo ideas as just bad science, it is part of a very different type of human belief system. Push against it and sooner or later you are likely to be linked to UN agenda 21, the Illuminati and drinking the blood of babies in the basements of pizza parlours.
For those who don’t know about extended bodies (including Zharkova) …
Gravitational Field Strength due to a Point Mass and Extended Bodies
Note to self: I’m seeing Jupiter, Saturn, Uranus and Neptune in my JPL 20K year Sun-Earth (center-to-center) distance time series at their correct orbital periods (for each body within the frequency limits imposed ~1/19996 years (often called the base frequency))
@-Everett F Sargent
The most obvious refutation of the Zharkova claim that the JPL data is wrong about the Earth’s orbital distance is that if it were, our ability to send probes to Mercury, Venus, Mars, Jupiter and its moons, Saturn, Uranus, Neptune and Pluto, and even to land on the Moon, would be compromised.
However, given the nature of the delusory construct, I am sure that some ad hoc explanation, ranging from unseen minor adjustments to a full-blown conspiracy to doctor the data, would be deployed.
Alternatively some within the clique will accept the accuracy of the JPL data, but still insist that the solar magnetic dynamo dependence on SIM is still valid.
The best counter to this nonsense is to split the factions into those claiming the known orbital movements are wrong, and those who accept that aspect of mainstream knowledge, but adhere to the ‘magnetic’ story.
All I know is that Scientific Reports better have read the PubPeer comments, this blog and said links (that GSM link should be the show stopper) that prove that the Zharkova paper is not even wrong.
Either that, or from here on out, it could be crank magnet central at Scientific Reports.
you have no idea how many sun nuts are out there.
say barycenter three times and they all come running…
A key detail for me is that Zharkova et al. didn’t just make erroneous calculations, in some circumstances they didn’t do any calculations at all. In some instances when they state “remarkable resemblance” or “likely” there are no supporting statistics or calculations.
That absence, in addition to the errors, suggests retraction may well be the correct path.
Steven Mosher said:
I’m more of a gravitational forcing nut, but can understand why that is not a prime climate skeptic angle — since the sun always holds promise for a virtually unlimited energy boost, while direct forcing from the moon is clearly a dead-weight from an inert body. The latter can only lead to a zero-sum cyclic variability of the longest period tidal cycles. So the skeptics try the “just-so” story of invoking the g-pull of the planets on the sun, which then somehow changes its internal solar dynamo character and thus transitively impacts the earth. Unfortunately Zharkova went rogue and exaggerated well beyond a defensible point.
One thing learned from this episode is that it is much easier to debunk exaggerated claims than to confirm attribution of the subtleties and zero-sum effects that exist in the climate system. Alas, that’s where the real value-added action is and Zharkova remains just a sideshow.
What I’m finding bizarre is the figure from Geoff Sharp that Zharkova seems to claim she generated herself, but that she’s using to support her position, when it very clearly contradicts what she’s saying.
Spot the errors …
Shepherd et al. (2014)
If all four show up, then Zharkova just takes the absolute value of the 1st graph to generate the 2nd graph. The last two graphs show TSI and/or SSN. The zero-point nodes in Zharkova usually hit their maximum slope at those zero points, while TSI approaches its minima at the flat (zero-slope) parts of the TSI time series. Maximum or steep slopes != minimum or zero slopes.
Soooo…. the peak-to-trough variation in solar forcing is about 0.25 W/m2. And yet (IPCC 2018)
And that value matches, within error bounds, observations over a century or so. And best matches in the last few decades, where data is less sparse and less prone to systematic offsets between measurement techniques. So even if it woz the Sun wot dun it, the magnitude of the forcing is an order of magnitude too small. Unless ECS is 25-30C. Time to panic right enough! As soon as whichever unicorns have cancelled out the effects of GHGs and aerosols depart for pastures new, we’re in for >10C of rapid warming. Even if we decarbonise.
That 0.25 W m-2 is for the top of the atmosphere. Divide by four to account for how Earth isn’t flat, then take away the 30% or so reflected sunlight and you get the ~0.04 W m-2 heating change Earth experiences.
So, not so much a mouse next to an elephant, more like a flea.
Mark: Peak-to-peak TSI is about 1 W/m2 at the TOA:
Mark, I think the TSI numbers are given for 1 AU, and there’s a variance of about 1 W/m2 from peak-to-peak.
Yes, I thought the peak-to-peak in solar insolation was maybe around 1 W/m^2. Take away 30% for albedo and divide by four to account for the Earth being a sphere and you get a variation in solar forcing of about 0.18 W/m^2. However, the point stays the same. It’s much smaller than the change in anthropogenic forcing.
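For anyone who wants to follow the arithmetic, here is a minimal sketch using the approximate values quoted in the discussion above (~1 W/m² peak-to-peak TSI, ~30% albedo); the inputs are rough round numbers, not precise measurements:

```python
# Back-of-envelope conversion from a TSI variation to a globally
# averaged forcing (approximate round-number inputs).
delta_tsi = 1.0   # W/m^2, peak-to-peak TSI at 1 AU (top of atmosphere)
albedo = 0.3      # fraction of incoming sunlight reflected
geometry = 0.25   # Earth intercepts pi*R^2 but has surface area 4*pi*R^2

delta_forcing = delta_tsi * geometry * (1 - albedo)
print(f"~{delta_forcing:.3f} W/m^2")  # about 0.18 W/m^2
```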
OK, so you all are going just a bit above my pay grade (seriously). 🙂
That last graph is from here …
What Path is the Real World Following?
Makiko Sato & James Hansen
The TSI values are good through 6/30/2019
They also show All (Climate) Forcings (last modified 2017/05/16) …
“The adjusted forcing, Fa is most commonly used, but here we will give the “effective forcing” which is defined as Fe = Ea x Fa ≈ ΔTs / 0.463 °C/(W/m2), where Ea is the efficacy and ΔTs is the global surface air temperature change in response to the forcing agent. (See the references below.) Fe provides a good prediction of the response to different forcing amounts. The time dependent effective forcings relative to 1850 for the agents used in our computations are shown individually and as the total in the graph below.”
Also note how the Makiko Sato & James Hansen graphs ALL use common x-axes (for direct comparison purposes) and the y-axes are NOT in ‘so called’ arbitrary units.
Compare to Zharkova et al. (2019). Sad, misleading, and with no ability to directly compare.
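To make the quoted Sato & Hansen relation concrete, here is a tiny numeric illustration; the 2 W/m² forcing below is a made-up round number for demonstration, not a value from their site:

```python
# The quoted relation Fe ~ dTs / 0.463 °C/(W/m^2), rearranged to
# estimate the temperature response for a hypothetical effective forcing.
sensitivity = 0.463  # °C per W/m^2 (value quoted from Sato & Hansen)
fe = 2.0             # W/m^2, hypothetical effective forcing

delta_ts = sensitivity * fe
print(f"dTs ~ {delta_ts:.2f} C")  # about 0.93 C
```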
OCD yeah, OCD yeah, OCD yeah, …
JPL Horizons – JPL Solar System Dynamics – NASA
JPL Horizons (Version 3.75) Apr 04, 2013
“The Horizons On-Line Ephemeris System provides access to key solar system data and dynamic production of highly accurate ephemerides for solar system objects. This includes 611,000+ asteroids, 3200 comets, 176 natural satellites, all planets, the Sun, more than 60 select spacecraft, and dynamical points such as Earth-Sun L1, L2, L4, L5, and system barycenters. Users may conduct parameter searches of the comet/asteroid database, finding objects matching combinations of up to 42 different parameters. Users may define and integrate their own objects. Rise, transit and set may be identified to the nearest minute. When used with Sun and Moon sky-brightness data, observing windows can be identified. Close-approaches by asteroids and comets to the planets, Ceres, Pallas, and Vesta, can be rapidly identified along with the encounter uncertainties and impact probabilities. Orbital uncertainties can be computed for asteroids and comets.
More than 100 different observational and physical aspect quantities can be requested as a function of time for both topocentric and geocentric observers, in one of 9 coordinate systems and 4 time scales (CT, TT, UT, Civil). 1500 Earth station locations are available, along with sites on other major bodies. Users may search for (or define) topocentric site coordinates on any planet or natural satellite with a known rotational model. Spacecraft-based observations are also supported. Output is suitable for observers, mission planners and other researchers, although this determination is ultimately the user’s responsibility. The underlying planet/satellite ephemerides and small-body osculating elements are the same ones used at JPL for radar astronomy, mission planning and …”
So, what part of “all planets” and “this determination is ultimately the user’s responsibility” doesn’t Zharkova et al. (2019) understand?
Judging from the PubPeer thread, where Zharkova avoids answering simple questions and tops it off by plagiarising a graph from the website of someone else in the thread (what are the chances!)… it looks like something other than a failure in understanding, but what do I know. In any case a very entertaining read, even if an absolutely amazing timewaster.
But is it a timewaster? Is countering Zharkova et al.’s basic errors a complete waste of time?
I asked myself that question a lot when I was embroiled in the saga associated with this PubPeer thread: https://pubpeer.com/publications/B02C5ED24DB280ABD0FCC59B872D04
In the end, I think we’re duty bound to critique papers where fundamental errors in understanding, calculations, analysis, and/or experimental methodology have been made. That’s how science is meant to work, after all?
But the problem is that spending time criticising Zharkova et al. (or the authors of the “stripy nanoparticle” papers we critiqued — over a year for that paper to get to publication) is not exactly going to give us a 4* REF “output”. Nor is it going to win a PhD researcher a postdoc position, or a postdoc a permanent lectureship/fellowship. There is absolutely no kudos in the system for critiquing others’ results, nor, remarkably, for ensuring that others’ work is reproducible.
If it’s not “ground-breaking”, “pioneering”, “step-changing”, “paradigm shifting”, front cover of glossy journal material, why bother? Just a waste of time…
Indeed, but the same could be said for writing this blog or engaging on social media. I’ve simply decided that I kind of enjoy it (not always), I’ve learned a lot doing so (and still do), and I think there is some value in doing so (I hope so, at least). That’s it, really. In some sense, though, engaging in all these types of things might have wasted a fair amount of my time, but it has made me think about science more deeply than if I’d simply stuck to trying to publish 4* REF papers and get funded so that I could recruit PhD students and postdocs. I would like to think that that may lead to me doing better research.
I agree entirely, ATTP. I similarly spend quite a bit of time engaging because…
(a) Like you, I think I learn a lot (and not just about science — the psychology is often as important as the physics! https://muircheartblog.wordpress.com/2018/07/08/the-truth-the-whole-truth-and-nothing-but/ )
(b) We’re publicly funded and we have an obligation to engage, as I see it.
(c) It is indeed fun. Some of the time. Quite how you kept going in the face of the worst of the online f**kwittery you had to put up with, I don’t know: https://andthentheresphysics.wordpress.com/2017/09/03/a-retrospective-about-engaging-online/
I killed my Twitter account a few years ago and have never looked back or regretted it even once…
But the issue is that competition for academic positions is so much more intense than when I got my lectureship. No matter how much lip service is paid by universities to the importance of teaching and public engagement — and, to be fair, sometimes it is more than lip service — what matters in terms of securing a fellowship (which is pretty much the route into a permanent position, at least in physics) is the research outputs.
Take two postdocs. One has spent two years attempting to reproduce the experiments of another group and has identified significant flaws and misinterpretations in their work. She publishes a critique of that group’s work in PLOS ONE. The other has carried out a novel experiment and got it published in Nature/Science/PRL/[prestigious journal of choice].
Who stands a much better chance of securing the fellowship?
Reproducibility should be the bedrock of science but there is absolutely no driver in the academic reward system to repeat experiments/analyses. In fact, writing a grant proposal to repeat an experiment wouldn’t even be seen as “incremental”. It would get very short shrift from reviewers.
Yes, Twitter is a bit of a time sink for me. I need to try and use it more strategically (as I’m discovering today 🙂 ).
I feel the same way; it does seem much more intense now than when I was looking for a permanent position. I also agree that someone’s chances of getting a position are going to depend more on the perceived impact of their work than on its quality (not that high-impact work can’t be of high quality, but someone who has done high-quality work that ends up, through no fault of their own, not being in a high-impact journal will have more trouble than someone who gets that high-impact publication).
Philip “Nor is it going to win a PhD researcher a postdoc position, or a postdoc a permanent lectureship/fellowship.”
but it could be a very useful part of a PhD student’s training. It requires critical thinking and analysis (which you can then better apply to your own work) and it instills the important idea that just because something is in a peer-reviewed journal, that doesn’t mean it is in any sense reliable. If it were a requirement of a PhD that you demonstrate your critical analysis skills by publishing a comments paper, there would be a disincentive to publish nonsense.
“There is absolutely no kudos in the system for critiquing others’ results, nor, remarkably, for ensuring that others’ work ins reproducible.”
Indeed, but there should be, and we are more likely to achieve that if we make an effort to show that it can be done and that it does have benefits. I’d argue that two of my comments papers, one on the carbon cycle and one about dinosaur body mass, have been among my more useful contributions. Not very glamorous or well-cited, but more useful than a lot of the other papers I have written.
Philip “In fact, writing a grant proposal to repeat an experiment wouldn’t even be seen as “incremental”. It would get very short shrift from reviewers.”
The fact that novel but incremental work gets short shrift is a problem – it is the way most scientific progress is actually made. Lots of “exciting” blue skies research ends up being wrong, or a dead end or a damp squib.
I agree entirely. But the issue, pragmatically, is as described in this: https://blogs.lse.ac.uk/impactofsocialsciences/2016/03/14/addicted-to-the-brand-the-hypocrisy-of-a-publishing-academic/
Agree with many points. I find no longer being interested in promotion helps ;o) My aim is to do as good work as I can and publish it in journals where other researchers will be able to see it and use it. The journal I want to publish in most is the Journal of Machine Learning Research, which is open access and free for both author and reader, and runs on very modest contributions from MIT for administration. It had “brand” from day 1 because the editorial board were (pretty much) all the top people in the field, and it is all run on volunteer effort (much like most commercial publishers). It is O.K. for science subjects where most authors can do the typesetting for themselves. The only reason we don’t have more journals like this (and price the commercial publishers out of the market) is that in most fields, I suspect, the top people are mostly concentrating on their own research, rather than on helping to publish other people’s. If we wanted public money to be well spent, spend it supporting journals like JMLR.
A real benefit of critiquing papers post-peer-review is that it’s always a possibility that someone is close to a breakthrough and your small corrective insight will get you a hyphenated-name-discovery. Many famous examples of this occurring over the years.
Philip “In fact, writing a grant proposal to repeat an experiment wouldn’t even be seen as “incremental”. It would get very short shrift from reviewers.”
Eli has fought wars on review panels to keep funding important but boring data sets rather than spicy new and speculative stuff.
As do many of us, Eli. But it’s an uphill battle, right?
And reviewers can’t agree on just what is especially spicy/pioneering/paradigm-shifting/world-leading/universe-changing in any case… https://muircheartblog.wordpress.com/2019/06/27/at-sixes-and-sevens-about-3-and-4/
“Others, however, disagreed partly because even a paper that is wrong could advance the field, and partly because forcing a retraction could play into the hands of those who claim there are scientific gatekeepers.”
Even if the term suggests it, there is not much gatekeeping involved. The article is still available, it only gets flagged as retracted to warn the innocent reader. Someone who thinks they have enough expertise can ignore the flag.
A retraction can be useful when the authors would like to write a new paper avoiding the mistake. I feel that retractions initiated by the journal should be rare: cases of fraud or scientific misconduct. A paper being bad may be a reason not to publish it, but it is not enough to retract it.
A better solution to this (and a lot more) is to have an open post-publication system for all papers.
Doing it only for bad papers, like PubPeer, is not enough, because you never know whether a paper is good or just too obscure to be on PubPeer. Plus, if all papers are reviewed, people are more likely to have a look at the reviews than when only a few percent are reviewed.
Pingback: 2019: A year in review | …and Then There's Physics
Pingback: Zharkova et al.: an update | …and Then There's Physics
Pingback: Zharkova et al. – retracted | …and Then There's Physics
Pingback: Sunblock Applied: Zharkova’s 2019 study claiming sun’s wobble causing warming finally retracted | Red, Green, and Blue