There are often claims that significant biases exist in science and that these strongly influence research results. Typically, such claims are based on known problems in certain fields: the replication crisis in psychology, or the failure to publish negative results in medicine. My problem with this is that how research is conducted varies greatly across disciplines, so using isolated examples to infer a major problem across all research areas may not be justified.
Joshua, however, has made me aware of a paper presenting a [m]eta-assessment of bias in science, by Fanelli, Costas, and Ioannidis (also discussed in this article). They examined a large sample of meta-analyses, considering a number of different bias-related patterns as well as various risk factors. The basic result was essentially that
The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them.
So, the biggest biases were associated with small studies that reported effects of large magnitude, studies published early because of an extreme result, studies that ended up being highly cited (although I’m not sure how this can be a bias, given that it can’t be known in advance), and studies published in the non-peer-reviewed literature. However, the effect was relatively small and varied widely across fields. In fact, when testing the various bias-related patterns, the paper explicitly says that
[t]he ratio of studies concluding in favor vs. against a tested hypothesis increases, moving from the physical, to the biological and to the social sciences, suggesting that research fields with higher noise-to-signal ratio and lower methodological consensus might be more exposed to positive-outcome bias.
So, the magnitude of the bias is lower in the physical sciences, compared to the social sciences.
The paper also considered various risk factors (such as pressure to publish, or career level) and mostly found no relationship between these and the presence of bias. The size of the team and the distance between collaborators did have some influence, but most of the other factors had little effect. I found this quite interesting, because my own view was that part of the problem lay in the system within which researchers operate, and this result suggests that the system plays little role. If anything, it seems as though most of the bias comes from researchers getting excited by what appear to be interesting results, rather than from them seeing a way to advance their careers by publishing results that might be biased.
Overall, the paper concludes that
Our results should reassure scientists that the scientific enterprise is not in jeopardy, that our understanding of bias in science is improving and that efforts to improve scientific reliability are addressing the right priorities.
When it comes to dealing with bias, the paper made – in my view – some interesting points:
However, our results also suggest that feasibility and costs of interventions to attenuate distortions in the literature might need to be discussed on a discipline and topic-specific basis and adapted to the specific conditions of individual fields. Besides a general recommendation to interpret with caution results of small, highly cited, and early studies, there may be no one-size-fits-all solution that can rid science efficiently of even the most common forms of bias.
I think the latter is an important point. Science is a human endeavour and so will always be influenced by human flaws. Although we should aim to reduce bias as much as possible, we can’t expect perfection. If the magnitude of bias is small (as this paper suggests), then we should be wary of introducing all sorts of possible solutions that might do little to actually solve the problem, are only relevant in certain circumstances, and might end up doing more harm than good. Being aware of where bias is most likely to exist (in small studies, for example) is probably a good place to start.