Today, finally, our manuscript on journal rank was accepted for publication at Frontiers in Human Neuroscience. One may wonder how a paper that reviews the empirical findings around journal rank ends up in a journal about human neuroscience. After all, our main conclusions from the literature survey can be summarized like this:

  • Journal rank (as measured by impact factor, IF) is so weakly correlated with the available metrics for utility/quality/impact that it is practically useless as an evaluation signal (even where some of these correlations reach statistical significance).
  • Less practically, but statistically, journal rank (as measured by IF) is slightly better than chance when filtering for novelty/importance.
  • Less practically, but statistically, journal rank (as measured by IF) is worse than chance when filtering for scientific quality (i.e., inverse journal rank is better than chance, and throwing dice is better than IF-based journal rank).

Colloquially speaking: if you prefer hip but shoddy science, read GlamMagz, but if you value substance over style, read the regular journals.

The two most notable exceptions to these general conclusions are retractions and subjective journal rank. As detailed before, among the few really strong correlations with IF-based journal rank is the rate at which papers are retracted: the higher up in the rank, the more likely your paper is to be retracted. Worse still, most of these retractions are due to suspected or demonstrated scientific misconduct or outright fraud. The data say that the reasons for this strong correlation are twofold:

  1. The methodological quality of publications in high-ranking journals is either no better than or worse than that of publications in lower-ranking journals
  2. Prestige correlates with IF, and hence the incentives for submitting sloppy/fraudulent work increase with journal rank, as do the incentives for error detection

The second strong correlation with impact factor is subjective journal rank, i.e., how well IF captures the perceived, subjective prestige/quality/impact of a particular journal. Given the human potential for confirmation bias, paired with the circularity of self-selection (sending only one’s ‘best’ work to the high-ranking journals), this result is not hard to explain. Moreover, given the impact factor’s history of being gamed and its flagrant violations of transparency and basic scientific methodology, it is not inconceivable that the numbers published by Thomson Reuters each year match public perception so well because Thomson Reuters, too, knows the subjective journal ranking in the heads of its customers, and knows that violating these expectations could harm a very lucrative business.

Now, why is all this published in a journal on human neuroscience? Well, certainly not for the psychology of confirmation bias and self-selection. We did, of course, first submit our manuscript to the journals with a general readership, especially since the data in the literature were new to us and to virtually every one of our colleagues we asked. So here is what the editors of these journals had to say about the conclusions mentioned above and in our article. Nature‘s Joanne Baker wrote:

we will decline to pursue [your manuscript] further as we feel we have aired many of these issues already in our pages recently

and this is what Science’s Brooks Hanson replied:

we feel that the scope and focus of your paper make it more appropriate for a more specialized journal

While Nature felt they had already written enough about how high-ranking journals publish unreliable research, Science had the impression that the topic of journal rank, and how it threatens the entire scientific enterprise, was not general enough for their readership. Since there are not that many general science journals with sections fitting a review like ours, we next went to PLoS Biology. There, at least, the responsible editor, Catriona MacCallum (whom I respect very much and who is exceedingly likable), sent our manuscript out for review. To our surprise, the reviewers essentially agreed with Nature that there wasn’t anything new in our conclusions: everybody already knows that high-ranking journals publish unreliable science, e.g.:

“While I am in agreement with the insidious and detrimental influences on scientific publishing identified and discussed in this manuscript, most of what is presented has been covered thoroughly elsewhere.”

[…]

“The authors make sound points, and for doing so can rely on years of solid research that has investigated the pernicious role of journal rank and the impact factor in scholarly publishing.

Overall, I deem this a worthy and valid “perspective” that merits publication, but do want to make the following reservations.

The particular arguments that the authors make with respect to the deficiencies of the journal impact factor (irreproducible, negotiated, and unsound) have already been made extensively in the literature, in online forums, in bibliometric meetings, etc to the point that very little value is gained by the authors restating them in this perspective.

Most of the points dedicated to the retractions and decline effect, and the relation between journal rank and scientific unreliability are also extensively made in the literature that the authors cite.

In other words, very few new or novel insights are made in this particular perspective, other than to restate that which has already been debated extensively in the relevant literature.”

For the full reviews and the editor’s letter, scroll down in our Google Doc.

Thus, essentially, Nature and PLoS Biology officially agree with our assessment that one should read high-ranking journals with more than the regular dose of caution. Therefore, I hope it is now clear that in order to convince readers that the conclusions we draw from the literature are reliable, we had to publish in a journal with an impact factor of 2.339 – and don’t you skimp on any of those decimals!
