During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from April 26, 2010:

About 1.5 million scholarly articles are published every year across all the sciences, spread over about 24,000 journals. Even if there were a single database or entry point providing access to all this literature, nobody would be able to keep up with everything being published in their field of work any more. Desperately looking for some clue as to which publications to select for in-depth reading and which to ignore, scientists began to rank journals according to how often the articles in them were cited. This ranking got started around the 1960s, when the number of journals started to proliferate. Fast-forward to today: what began as a last-ditch effort to handle an overwhelming flood of scientific information is now a full-blown business. Journal ranking by citations is done commercially by a multi-billion-dollar media corporation, Thomson Reuters. The journal rankings are sold to research institutions on a subscription basis ranging anywhere between approx. €30,000–300,000 (US$40,000–400,000) annually.

With increased visibility for the high-ranking journals came an increase in submitted contributions. The higher a journal’s rank, the more readers and contributors, and thus the more income for the publisher. And so the vicious cycle of scientific publishing evolved: more and more scientists want to publish in and read the high-ranking journals. Due to the high volume of submissions, the publishers of these journals are in a position to pick about 2-5% of the submitted articles for publication and reject the rest, increasing the prestige of these journals even further. Sometimes these rejections are accompanied by a recommendation to submit the work to one of the lower-ranking journals of the same publisher. Clearly, something has to be exceptionally ‘good’ to make it into a high-ranking journal (or, as some claim, has to have the potential to increase the journal’s rank). After a few cycles, it became difficult to tell whether a scientific finding was so ‘good’ that it made it into the high-ranking journals, or whether it had to be good because it was published there. Indeed, for some aspects of scientific life such as promotions, hiring, grant proposals or other sorts of evaluations, this question was no longer even asked. Publication quality became synonymous with journal rank. Today, where a scientist has published is often more important than what was published.

In all human life, scarcity and branding are two powerful factors in determining value, as I’m sure any economist can attest. Scientists are human beings, and journal rank is but one example of just how prevalent the human factor is in the scientific enterprise. Today, the future of a professional scientist is all too often dominated by the economics of scarcity and branding, rather than by science.

What does all that have to do with potatoes in France?

After a discussion about potatoes over lunch the other day, I stumbled across this beautiful tale, published in 1956 in the American Potato Journal, about how the potato arrived in France in the 18th century:

This endorsement of the potato and that of the various potato dishes served at the King’s table were enhanced by placing a uniformed guard on Parmentier’s potato plot. Parmentier’s considerate removal of the guard at night during the harvest season is reported to have furthered the success of the potato with the King’s subjects.

This story so reminded me of scientific publishing. Wikipedia puts the story a little more bluntly:

Parmentier therefore began a series of publicity stunts for which he remains notable today, hosting dinners at which potato dishes featured prominently and guests included luminaries such as Benjamin Franklin and Antoine Lavoisier, giving bouquets of potato blossoms to the King and Queen, and surrounding his potato patch at Sablons with armed guards to suggest valuable goods — then instructed them to accept any and all bribes from civilians and withdrawing them at night so the greedy crowd could “steal” the potatoes.

Now, I wouldn’t know anything about bribes, but the part about creating artificial scarcity and a brand name to increase the value of an ordinary object rang familiar.

In a recent ‘Opinion’ article in one of the journals at the very top of the rank, Nature, the author correctly points out that this system of journal rank has many flaws and should be replaced by a more scientific system for the metric evaluation of science. She specifically calls for social scientists and economists to be involved in developing this new system, underscoring the points above. Indeed, it is remarkable that our current journal-rank system is still in place. After all, not only do the author and many scientists agree; the originators of the journal rank system, the high-ranking journals themselves, and even some evaluators have all long realized that using journal rank to evaluate individual researchers is both “unfair and unscholarly”. I have lamented this absurd state of affairs plenty of times right here and elsewhere.

However, artificial scarcity and branding have by now developed such a powerful dynamic, fueled by billions in taxpayer money and a rich history of great scientific traditions, that the system seems unstoppable, even if all participating parties agree that putting an end to it would be better for science.

It is with these powerful dynamics (and some analogous evolutionary dynamics) in mind that I posted an off-hand comment on the ‘Opinion’ article mentioned above. The comment stated that any system, even the most complex and scientifically tested one, will eventually succumb to social dynamics, as the scientific community adapts to the system, maximizing each participant’s benefit while minimizing their costs. The only system immune to such dynamics would be one whose rules change more quickly than the social dynamics can follow:

Wouldn’t it be nice if metrics weren’t needed? However, despite all the justified objections to bibliometrics, unless we do something drastic to reduce research output to an amount manageable in the traditional way, we will not have any other choice than to use them. However, as the commenters before already mentioned, no matter how complex and sophisticated, any system is liable to gaming. Therefore, even in an ideal world where we had the most comprehensive and advanced system for reputation building and automated assessment of the huge scientific enterprise in all its diversity, wouldn’t the evolutionary dynamics engaged by the selection pressures within such systems demand that we keep randomly shuffling the weights and rules of these future metrics faster than the population can adapt?

This comment was published as a ‘Correspondence’ piece in the printed version of Nature. Coincidentally, the current issue of LaborJournal contains a letter from me which states pretty much the same thing, with some additional information.
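For the technically inclined, the ‘shuffling’ idea in that comment can be made concrete with a toy simulation. The sketch below is purely my own illustration (none of it appeared in the comment, and all parameters are arbitrary assumptions): agents split a fixed effort budget across a few features that a metric weighs, selection copies the highest-scoring half each generation, and the metric’s weights are either held fixed or periodically reshuffled.

```python
import random

# Toy illustration only (my own, not from the Nature correspondence):
# each "scientist" splits a fixed effort budget across DIM features,
# and the metric scores them by a weighted sum of those features.
# Selection keeps the top half each generation; offspring are mutated
# copies of survivors. All parameters are arbitrary assumptions.

DIM = 5     # number of features the metric weighs
POP = 100   # population size
GENS = 200  # generations to simulate

def normalize(v):
    total = sum(v)
    return [x / total for x in v]

def score(strategy, weights):
    return sum(s * w for s, w in zip(strategy, weights))

def evolve(shuffle_every=None, seed=0):
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(DIM)]
    pop = [normalize([rng.random() for _ in range(DIM)]) for _ in range(POP)]
    for gen in range(GENS):
        if shuffle_every and gen % shuffle_every == 0:
            rng.shuffle(weights)  # change the rules under the players' feet
        pop.sort(key=lambda s: score(s, weights), reverse=True)
        survivors = pop[: POP // 2]
        offspring = []
        for _ in range(POP - len(survivors)):
            parent = rng.choice(survivors)
            child = [max(1e-9, x + rng.gauss(0, 0.05)) for x in parent]
            offspring.append(normalize(child))
        pop = survivors + offspring
    mean = [sum(s[i] for s in pop) / POP for i in range(DIM)]
    return max(mean) - min(mean)  # high spread = effort piled onto one feature

print("static metric   spread: %.2f" % evolve())
print("shuffled metric spread: %.2f" % evolve(shuffle_every=10))
```

With a static metric one would expect the population’s effort to pile up on the single highest-weighted feature, i.e. the metric gets gamed; with the weights reshuffled faster than the population can adapt, specialists are repeatedly punished and more generalist strategies should survive.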


Hougas, R. (1956). Foreign potatoes, their introduction and importance. American Potato Journal, 33(6), 190–198. DOI: 10.1007/BF02879217
Lane, J. (2010). Let’s make science metrics more scientific. Nature, 464(7288), 488–489. DOI: 10.1038/464488a
