Arguably, little could be more decisive for the career of a scientist than publishing a paper in one of the most high-profile journals, such as Nature or Science. After all, in these competitive and highly specialized days, where a scientist publishes is all too often more important than what they have published. Thus, the journal hierarchy that determines who will stay in science and who will have to leave is among the most critical pieces of infrastructure for the scientific community – after all, what could be more important than caring for the next generation of scientists? Consequently, it receives quite thorough scrutiny from scientists (bibliometricians, scientometricians), and a variety of journals specialize in such investigations and studies. Thanks to the work of these colleagues, we now have quite a large body of empirical work surrounding journal rank and its consequences for the scientific community. This evidence indicates that Nature, Science and other high-profile journals, rather than publishing the ‘best’ science, actually publish the methodologically most unreliable science. One of several unintended consequences of this flawed journal hierarchy is that the highest-ranking journals have a higher rate of retraction than lower-ranking journals. The data are also quite clear that this disproportionate rate of retractions is largely (but not exclusively) due to the flawed methodology of the papers published there, and only in small (but significant) part due to the increased attention high-profile journals attract.

Perhaps not surprisingly, both Nature and Science actively ignore and disregard the available evidence in favor of less damning speculations and anecdotes. Given that both journals were made aware of the evidence as early as 2012, one could be forgiven for now starting to speculate that the journals are attempting to safeguard their unjustified status by suppressing dissemination of the data. First, after rejecting publication of the self-incriminating data, Science Magazine published a flawed attempt to discredit lower-ranking journals, concluding, implicitly, that one had better rely more on the higher-ranking, established journals. Then, barely a fortnight later, Nature Magazine (which also rejected publication of actual data on the matter) followed suit and published an opinion piece on how scientists feel about journal rank. Today, completing the triad, Nature has published something like a Storify assembled from various tweets, citing the fact that higher-ranking journals have higher retraction rates and speculating that the higher rates may stem from increased attention.

Needless to say, we have empirical data (maybe not entirely conclusive, but pretty decent) showing that several factors contribute to this strong correlation, and that increased attention to higher-ranking journals is likely one of these factors, but probably a minor one, if not the least important. Instead, the data suggest that the methodological flaws of the papers in high-profile journals, in conjunction with the incentives to publish eye-catching results, are much stronger factors driving the correlation. This consistent disregard for the empirical data suggesting that the current status of high-profile journals is entirely unjustified could raise the suspicion that this latest news piece in Nature Magazine may be part of a fledgling publisher strategy to divert attention away from the data in order to protect the status quo.

However, not only because of Hanlon’s Razor, one has to consider the more likely alternative: that none of the various authors or editors behind the three articles is actually aware of the existing data. For one, none of these authors/editors was involved in handling the manuscript in which we reviewed the data on journal rank. Second, the first article, Bohannon’s sting, was so obviously flawed that one wonders whether the author is familiar with any empirical work at all, let alone the pertinent literature. Third, the evidence points to the editorial process at high-ranking journals selecting flawed studies for publication – it appears that only very few, if any, of the editors at these journals are any good at what they do. Given these three reasons, one can only conclude that this string of three entirely misleading articles is due to “stupidity” and not to “malice”, to use Hanlon’s words.


UPDATE: I had no idea how wrong I was until I saw this tweet from Jon Tennant:

@brembs @CorieLok I see your comment. I did push your article when asked for comments, but this didn’t make it into the article

In other words, the author of the article (which I assume must be Corie Lok) did know about the data we have available and nevertheless pushed the anecdote. Given this new information, might it be time to reject Hanlon’s razor and exclude “stupidity” in favor of “malice”?
