“Standing on the shoulders of giants” is what scientists say to acknowledge the work they are building on. It is a statement of humility, usually accompanied by citations to the primary literature preceding the current work. In today’s competitive scientific enterprise, however, such humility appears completely misplaced. Instead, what many assume to be required is to convince everyone that you are the giant, the genius, the prodigy who deserves the research funds, the next position, tenure.

Facilitating this trend are journals that actively add to the existing institutional incentives for such hype by claiming to publish “the world’s best science” or “the very best in scientific research” while simultaneously allowing only very few citations in their articles.

Thus, it should come as no surprise to anybody that we find more and more articles in such journals claiming that the authors have found something unique, novel and counterintuitive, something nobody has ever thought of before and that will require us to rewrite the textbooks.

Case in point is the combination of the article entitled “Temporal structure of motor variability is dynamically regulated and predicts motor learning ability” and its accompanying news-type article (written by scientists). Both articles claim that the researchers have made the game-changing discovery that something long thought to be a bug in our movement system is actually a spectacular feature. It is argued that this is such a huge surprise because nobody in their right mind would ever have thought it possible. Their discovery will revolutionize the way we think about the most basic, fundamental properties of our very existence. Or something along those lines.

Except that most people in the field probably thought it was obvious.

Skinner is largely credited with the analogy between operant conditioning and evolution. The analogy holds that reward and punishment act on behaviors much as selection acts on mutations in evolution: an animal behaves variably and encounters a reward after initiating a particular action. The reward makes that action more likely to occur in the future, just as selection makes certain alleles more frequent in a population. Already in 1981, Skinner called this “Selection by Consequences”. Skinner’s analogy sparked wide interest, e.g., an entire journal issue, which later appeared in book form. Clearly, the idea that reinforcement selects from a variation of different behaviors is not a new concept at all, but more than three decades old and rather prominent. This analogy cannot have escaped anybody working on any kind of operant learning, unless they are seriously neglecting the most relevant literature.

Elementary population genetics shows that the rate of evolution is proportional to the rate of mutation: the more variants a population has to offer, the higher the rate of evolution will be. This, too, is very basic and has been known for decades.
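To make that textbook point concrete, here is a minimal toy simulation in Python (my own sketch for this post; the `evolve` function, its parameters and the truncation-selection rule are invented for illustration and are not taken from any of the papers discussed): a trait under selection approaches its optimum faster when the mutation rate is higher.

```python
import random

def evolve(mutation_rate, target=10.0, pop_size=100, generations=200, seed=0):
    """Toy selection model: a real-valued trait evolves toward a target.

    Each generation, the half of the population closest to the target
    reproduces; offspring inherit the parent's trait and mutate with
    probability `mutation_rate`.  Returns the mean distance to the
    target over generations, so a faster-shrinking distance means a
    higher rate of evolution.
    """
    rng = random.Random(seed)
    population = [0.0] * pop_size
    history = []
    for _ in range(generations):
        # Selection: keep the half of the population closest to the target.
        population.sort(key=lambda trait: abs(trait - target))
        parents = population[: pop_size // 2]
        # Reproduction with occasional mutation.
        population = []
        for _ in range(pop_size):
            trait = rng.choice(parents)
            if rng.random() < mutation_rate:
                trait += rng.gauss(0.0, 0.5)
            population.append(trait)
        history.append(sum(abs(t - target) for t in population) / pop_size)
    return history

# Higher mutation rate -> the population closes in on the target faster.
for mu in (0.01, 0.1, 0.5):
    print(mu, round(evolve(mu)[50], 2))
```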

It is thus no surprise that, for instance, Allen Neuringer has been studying the role of variability in operant conditioning for decades, and that our own lab is studying the neurobiological mechanisms underlying behavioral variability. It’s a well-known and not overly complicated concept, so of course people have been studying various aspects of it for a long time. What was always assumed but, to my knowledge, never explicitly tested (but see the update below!) is the relation between behavioral variability and learning rate. Does the analogy hold, such that increased behavioral variability leads to increased operant learning rates, just as increased mutation rates lead to increased rates of evolutionary change?
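To spell out what such a test asks, here is an equally minimal sketch of the operant side of the analogy (again my own toy model, not the reaching task, data or analysis of Wu et al.; `exploration_sd`, the reward rule and the learning rate are arbitrary illustrative choices):

```python
import random

def operant_learning(exploration_sd, target=10.0, trials=300, lr=0.1, seed=0):
    """Toy reward-based learner: each trial, a motor command is drawn
    around the current `policy`; commands that land closer to the target
    than the policy itself are "rewarded" and pull the policy toward
    them.  `exploration_sd` plays the role of behavioral variability.
    Returns the remaining distance to the target after all trials.
    """
    rng = random.Random(seed)
    policy = 0.0
    for _ in range(trials):
        command = policy + rng.gauss(0.0, exploration_sd)  # variable behavior
        rewarded = abs(command - target) < abs(policy - target)
        if rewarded:
            policy += lr * (command - policy)              # reinforce what worked
    return abs(policy - target)

# More behavioral variability -> faster learning in this toy model.
for sd in (0.1, 0.5, 2.0):
    print(sd, round(operant_learning(sd), 2))
```

In this kind of sketch the answer is almost forced by construction: without variability there is nothing for reinforcement to select from, which is exactly why the empirical question felt settled in principle, if untested in humans.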

Now, the authors of the research paper find that indeed, as assumed for so many decades, the rate of learning in operant conditioning is higher in subjects whose initial behavioral variability is higher. This is, at least to me, a very exciting finding: finally someone puts this old assumption to the test and demonstrates that yes, Skinner got something right with his analogy. To me, this alone is worth attention and media promotion. Great work, standing on the shoulders of a giant (Skinner). This is how science should work. Great job, Wu et al.! However, this was apparently not good enough for the authors of these two articles.

Instead of citing the wealth of earlier work (or at least Skinner’s original 1981 article), the authors claim that their results were surprising to them: “Surprisingly, we found that higher levels of task-relevant motor variability predicted faster learning”. The authors of the news-type article were similarly stunned: “These results provide intriguing evidence that some of the motor variability commonly attributed to unwanted noise is in fact exploration in motor command space.”

The question, of course, is whether this is ignorance on the part of the (in total seven) authors involved, or a publication strategy perceived to be superior to the “standing on the shoulders of giants” approach (and what a giant Skinner is!). It is, of course, idle to speculate about motives or competence without asking the authors directly. Perhaps there is even a third explanation besides incompetence or hype that I’m not aware of.

I don’t know any of the authors personally, so I decided to ask a mutual friend, John Krakauer, one of the leading experts in this field, whose papers the authors cite, what he thought about these articles. Specifically, I asked him what he thought about the citation of his article as a reference for the surprising nature of their finding:

Until now, motor variability has been viewed as an unwanted feature of movements, a noise that the brain is able to reduce only with practice8.

In his reply, he corrects the authors:

It is true that in our paper we were focused on variability as something that needs to be reduced when best performance is required. That said, in the discussion we explicitly mention that variability can also be used for exploration. As an example of this distinction, we mention the difference in variability between when songbirds are rehearsing their song versus when they must perform perfectly for their mate.

Apparently, at least one of the cited authors does not consider this citation to be in order. With regard to the original article, John wrote: “Given that we posited that there is an operant component in error-based adaptation in 2011, I’m glad to see that their results are consistent with this view.”

It appears to me that the authors may know the relevant literature and have cited it selectively in order to make their results appear more earth-shattering and novel. If that is the case, anyone is free to speculate about the motivations behind this strategy. In the best of all worlds, the authors do know their way around the more modern literature of their specific subfields, but are unaware of the historical work in their own field and of the relevant work in related fields. In that case, the two papers are prime examples of the insularity of some fields of inquiry and demonstrate how deeper and more interdisciplinary reading and training could reduce the isolation of highly specialized fields in science. That being said, the authors being unaware of such a prominent concept at the heart of their method would constitute an indictment in its own right, at least in my book. Then again, one can never be sure of having read all the relevant literature, and perhaps this can happen even to the best of us?

The news-type article doubling down on the hype reveals another aspect that has been worrying me for some time now. Given that the most important factor for a manuscript to be published in the highest-ranking journals is to get past the professional editor, the ensuing peer review is likely to be biased in favor of publishing the paper. For one, the reviewers, being experts in the field, can cite the resulting paper and make their own research look hotter. Moreover, if the manuscript is published, it offers them the chance to pad their résumés with such fluffy news-type articles and to get (or keep) their own names out there, associated with the big journals. Obviously, any publication in such high-ranking journals benefits not only the authors themselves, but also the field at large, creating a whole new set of incentives for peer reviewers and authors of news-type articles.

UPDATE, February 10, 2014:

Allen Neuringer just sent me one of his papers in which he showed that training rats to be highly variable enabled them to learn a very complicated operant task, whereas animals trained to be only moderately variable failed to learn the task or did so only very slowly: Psychonomic Bulletin & Review, 2002, 9(2), 250-258. In contrast, Doolan and Bizo (2013) tried to replicate these findings in humans and failed. Thus, the principle behind the experiments of Wu et al. had already been tested and established in a mammalian model, just not in humans. It is still good to see that humans are no exception to these processes, no doubt about that, but surely there is nothing revolutionary or even surprising about this work. On the contrary: based on the work in rats, and on predictions made many decades ago, the results presented by Wu et al. are precisely what we would have expected. Needless to say, the authors cite neither Skinner nor Neuringer.
