Scientists are used to vested interests disputing scientific claims. Tobacco corporations have tried to discredit the science linking smoking to lung cancer, creationists keep raising the same long-debunked objections to evolution, climate deniers claim the earth is cooling, anti-vaxxers believe the MMR vaccine causes autism and homeopaths delude themselves that a little drop of nothing has magical healing powers. No amount of evidence will convince such “unpersuadables”.
What receives less attention, though, may be a more severe problem for science: the dismissal of science by scientists themselves – unpersuadable scientists.
Documenting this phenomenon is difficult, because the dismissal of science is only rarely uttered in writing or in public. One would like to hope that this is an indication that such behavior is rare. Until recently, I was aware of only two instances: the now infamous blog post by Brian Wansink, and the public statements by decision-makers at a German conference, documented elsewhere. Now I have witnessed a third.
At a dinner table with about ten participants, all academics, the discussion turned to the topic of ‘quality’ academic journals and journal rank. When I referenced the data showing that higher journal rank tends to be associated with lower experimental reliability, several individuals said they found these results hard to believe. When I asked about data to the contrary that might be the basis for their hesitation, the participants only emphasized that they had no other data, just their “intuition” and “gut feeling”. When I asked what they do when their own experiments yield data that go against their intuition or gut feeling, one professor exclaimed: “I tell the postdoc to do the experiment again, and a third time if need be!” When I expressed my shock at such practices, the two most senior professors, one of whom was once a university president and both of whom are medical faculty, emphatically accused me of being dogmatic for giving primacy to scientific evidence rather than intuitions or gut feelings.
Recent evidence points towards published scientific results, at least in some fields, being far less reliable than one would expect. If the reliability of science commonly hinges on postdocs standing up for their data against the gut feeling of the person who will write their letters of recommendation and/or extend their contract, we may have to brace ourselves for more bad news from the reproducibility projects being carried out right now.
Wansink was trained in marketing and had no idea about science; his lack of training and incompetence in science may be an excuse for his behavior. These two individuals, however, graduated from medical school, have decades of research and teaching experience under their belts, and one of them even complained that “most of the authors of the manuscripts I review or edit have no clue about statistics”. Surely these individuals recognize questionable research practices when they see them? Nevertheless, like Wansink, they wouldn’t take a “failed” experiment for an answer, and like the decision-makers at the meeting in Germany in 2015, they would put their experience before peer-reviewed evidence.
If scientific training doesn’t immunize individuals against unscientific thinking and questionable research practices, how can we select a future cadre of decision-makers in science who do not put themselves before science and who will implement evidence-based policies, instead of power-brokering their way to new positions? There is a recent article on “intellectual humility” – maybe this is the way to go?
P.S.: There are more instances of scientists publicly dismissing evidence contrary to their own beliefs: Zimbardo and Bargh spring to mind, and I’ll look for others.
I think it is important to consider what the professors meant by “do the experiment again.” For example, imagine we enter an experiment with some (perhaps theoretical) reason to believe that proposition A is true. The experiment gives us reason to believe that A is not true. The proposition may be false, or the results of the experiment may be spurious. Repeating the experiment is a reasonable way to improve our understanding of the proposition. However, we must not discard the first set of results. If the results of the initial experiment reflect the true pattern, then the repeat should strengthen the conclusion. If the results of the initial experiment do not reflect the true pattern, they nonetheless tell us something about the effect size or variability of the pattern.
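To make that concrete, here is a minimal simulation sketch in Python; the noise level, sample size and number of simulated experiment pairs are all made-up numbers. It shows the statistical cost of discarding the first run: pooling both runs halves the variance of the estimate, whereas “just repeating” throws that information away.

```python
# A minimal sketch with assumed numbers: the true effect is zero, each
# run yields a noisy estimate, and we compare discarding the first run
# ("just repeat it") with pooling both runs.
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.0, 20          # per-observation noise and per-run sample size (assumed)
se = sigma / np.sqrt(n)     # standard error of a single run's estimate
pairs = 10_000              # number of simulated experiment pairs

first = rng.normal(0.0, se, pairs)    # estimate from run 1
second = rng.normal(0.0, se, pairs)   # estimate from run 2 (the repeat)

repeat_only = second                  # first run discarded
pooled = (first + second) / 2         # both runs treated as data

print("RMSE, repeat only:", np.sqrt(np.mean(repeat_only**2)))
print("RMSE, both pooled:", np.sqrt(np.mean(pooled**2)))
# The pooled estimate's error is smaller by a factor of about sqrt(2).
```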
One big challenge that remains is finding a way to make the results of all of our experiments accessible and findable.
I agree with you: there is nothing wrong in principle with just doing an experiment again, especially if all the data are accessible and it is transparent what was done and how.
I see two mainly practical issues, which I left implicit in the post, so I’m glad you brought this up:
1. All too often (for various reasons), only the “A is true” result is reported, especially if this is what the PI thought was the case.
2. All too often, the same scrutiny is not applied to all hypotheses, due to confirmation bias: what’s the point of re-testing an experiment that says that A is true, if one already thought A must be true to begin with? In the presence of sampling error, this bias in re-testing will inevitably lead to false results, especially in combination with point one above (a small simulation below makes this concrete).
If these two points could be eliminated (and there are ways to do this, or at least mitigate them), there would indeed not be that much of a problem left.
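Here is a minimal simulation sketch, again in Python, of how the two points compound. The sample size and significance threshold are assumptions, and the limit of three runs is taken from the professor’s “again, and a third time if need be”:

```python
# A minimal sketch of both points (all numbers are assumptions): no real
# effect exists, yet a lab that believes "A is true" re-runs only the
# experiments that fail to confirm A, and reports only confirming runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha, max_runs = 20, 0.05, 3   # "again, and a third time if need be"
labs = 10_000

published_positive = 0
for _ in range(labs):
    for _ in range(max_runs):
        sample = rng.normal(0.0, 1.0, n)            # data from a null effect
        if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
            published_positive += 1                 # "A is true" -- report it
            break
        # otherwise: "tell the postdoc to do the experiment again"
        # (and this non-confirming run is never reported)

print("labs eventually publishing a positive:", published_positive / labs)
# ~1 - 0.95**3 = 0.14, almost triple the nominal 5% false-positive rate
```

Even though no real effect exists anywhere in this toy world, about 14% of labs eventually obtain a publishable “A is true” result, and because the non-confirming runs go unreported (point 1), those false positives are all the literature ever sees.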
Thanks for raising this issue!
Oh, I forgot the link to the post with a solution to the challenge of making all experiments accessible and findable:
https://bjoern.brembs.net/2015/04/what-should-a-modern-scientific-infrastructure-look-like/
A modern infrastructure would do that for us.