Retractions of scholarly articles are a rare event, affecting only about 0.02-0.04% of all articles (although yearly rates are rising dramatically). This means that data about retractions are not even close to being representative of the scholarly literature at large. In particular, when the non-retracted literature contains anywhere from 40% to over 80% unreliable work, even today’s retraction rates of around 0.2% seem negligible in the grand scheme of things. After all, which is worse: a literature where 0.04% of articles are clearly marked as unreliable, or one where 40-80% of articles are equally unreliable but completely unmarked?
Then again, on the one hand, some retractions can be high-profile and draw public attention to a science scandal. On the other hand, despite their tiny numbers, retraction data have been analyzed, and such analyses found, for example, that articles in higher-ranking journals and articles with male corresponding authors tend to be marred by higher-than-average retraction rates.
It is understandable why such findings catch attention, despite the statistically negligible retraction rates. Beyond the tiny numbers, another aspect that makes retraction data so useless is that it is exceedingly difficult to tease apart whether high retraction rates are due to increased scrutiny (i.e., better detection) or to overall lower quality. It is no surprise, therefore, that both high-ranking journals and male authors have defended themselves by claiming (without evidence) that their publications were simply scrutinized more, hence the increased retractions.
Mirroring previous discussions, where the evidence suggested that increased retraction rates at more prestigious journals were likely due not to more scrutiny but to lower-quality work being published there, new evidence also points to the science published by women being held to a higher standard, and hence receiving fewer retractions because it is of higher quality.
Two arguments supporting the notion that female authors publish higher-quality work than male authors have recently been published. The first is more of an anecdote from a single journal: Nature magazine reports that it rejects more work from female authors, suggesting it holds them to a higher standard than men. The second, published this year, is a proper study of the biomedical literature, which found that:
By analyzing all articles indexed in the PubMed database (>36.5 million articles published in >36,000 biomedical and life sciences journals), we show that the median amount of time spent under review is 7.4%–14.6% longer for female-authored articles than for male-authored articles.
So now we have two areas where increased retraction rates seem likely to stem not from increased scrutiny of published articles but from lower-quality publications. I would still not bet that these data will sway any of the “increased scrutiny” proponents.