bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Neither gold, nor green, nor hybrid are sustainable open access models

In: blogarchives • Tags: journal rank, open access, publishing

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I've migrated the blog to WordPress, I can actually schedule posts to appear when in fact I'm not even at the computer. I'm using this functionality to re-blog a few posts from the archives during the month of August while I'm away. This post is from December 28, 2012:

If you trust empirical evidence, science is currently heading for a cliff that makes dropping off the fiscal cliff look like a small step in comparison. As we detail in our review article currently under revision, retractions of scientific articles are increasing at an exponential rate, with the majority of retractions being caused by misconduct and fraud (though the error rate is increasing as well). The evidence suggests that journal rank (the hierarchy among the 31,000 scientific journals) contributes a pernicious incentive: because funds are tight and science is increasingly under pressure to justify its expenditure, people are rewarded for publishing in high-ranking journals. However, there is no empirical evidence that science published in these journals is any different from scientific discoveries published in other journals. If anything, high-ranking journals publish a much larger fraction of the fraudulent work than lower-ranking journals, and also a larger fraction of the unintentionally erroneous work. In other words, journal rank is like homeopathy, astrology or dowsing: one may have the subjective impression that there is something to it, but any such effects disappear under scientific scrutiny.

As journal rank has been used as an instrument for hire-and-fire policies at institutions world-wide for only a few decades, the data also project some potentially catastrophic consequences of journal rank: science has been hiring those candidates who are especially good at marketing their science to top journals, but maybe not equally good at the science itself. Conversely, excellent scientists who did not reach institutional requirements for marketing their research were fired. If this is really what has been taking place, it has now been going on just long enough to replace an entire generation of scientists with researchers who are particularly good at marketing, providing one potential explanation for why the fraud and retraction rates are exploding at just this particular point in time. Until a few years ago, however, this was a trend observed only by a few bibliometricians.

At the same time, a much more obvious trend has been receiving a lot of attention: the rising costs of access to the scholarly literature. To counter this trend, three different publishing models have emerged, which only address the access problem, but not the parallel, and potentially underlying, problem of journal rank. These models aim to provide unrestricted, open access to publicly funded research results either by charging the authors once for each article (gold), or by mandating them to place a copy not of the final PDF, but of the version approved by the referees (i.e., the version before the publisher formats it) in institutional repositories (green), or by providing an option for authors to make their article accessible in a subscription journal for an additional article fee, i.e., if the authors pay the fee, their article becomes openly accessible; if not, it stays behind a paywall (hybrid). Importantly, the three models currently aimed at publishing reform are not sustainable in the long term:

  1. Gold Open Access publishing without abolishment of journal rank (or heavy regulation) will lead to a luxury segment in the market, as evidenced not only by suggested article processing charges nearing €40,000 (~US$50,000) for the highest-ranking journals, but also by the correlation of existing article processing charges with journal rank. Such a luxury segment would entail that only the most affluent institutions or authors would be able to afford publishing their work in high-ranking journals, anathema to the meritocracy science ought to be. Hence, universal, unregulated Gold Open Access is one of the few situations I can imagine that would potentially be even worse than the status quo.
  2. Green Open Access publishing entails twice the work on the part of the authors and needs to be mandated and enforced to be effective, thus necessitating an additional layer of bureaucracy, on top of the already unsustainable status quo.
  3. Hybrid Open Access publishing inflates pricing and allows publishers to not only double-dip into the public purse, but to triple-dip. Thus, Hybrid Open Access publishing is probably the most expensive version overall for the public purse.

Thus, what we have now is a status quo that is a potential threat to the entire scientific endeavor both from an access perspective and from a content perspective, and the three models being pushed as potential solutions are not sustainable, either. The need for drastic reform has never been more pressing.

Posted on August 13, 2013 at 19:49 2 Comments

Flashback: The neurobiology of operant conditioning

In: blogarchives • Tags: Drosophila, FoxP, neurogenetics, operant, PKC, rutabaga, self-learning

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I've migrated the blog to WordPress, I can actually schedule posts to appear when in fact I'm not even at the computer. I'm using this functionality to re-blog a few posts from the archives during the month of August while I'm away. This post is from April 28, 2011:

It turns out, operant conditioning is very different from other forms of learning, all the way from the genes up. When I started my research on operant conditioning in 1995, I did so with the opposite hypothesis, namely that the underlying mechanism of all learning processes was always synaptic plasticity with the well-known molecular pathway: Ca++, cAMP, PKA, CaMK, CREB and so on. After all, wasn't that pathway conserved all the way from flies, snails and mice to humans? By the time I finished my PhD in 2000, Eric Kandel had received the Nobel prize for exactly these learning mechanisms – he wouldn't have gotten the prize if the pathways had not been so conserved. In principle, changing the weight of the synapses is all you need to do to store whatever information you want. There is no a priori need to have several different mechanisms by which neural networks are modified.
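To illustrate that last point with a toy model (my construction, nothing to do with the actual biology): a single artificial neuron can store an arbitrary stimulus-response association purely by changing its synaptic weights, with the wiring and the firing threshold left untouched.

```python
import numpy as np

# Toy neuron with three input "synapses". Learning consists of nothing
# but weight changes (delta rule); wiring and threshold stay fixed.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=3)

# Hypothetical task: respond (1) to input pattern A, stay silent (0) for B.
patterns = np.array([[1.0, 0.0, 1.0],   # pattern A
                     [0.0, 1.0, 1.0]])  # pattern B
targets = np.array([1.0, 0.0])

def respond(x, w):
    return 1.0 if x @ w > 0.5 else 0.0  # fixed threshold

for _ in range(100):                     # repeated training trials
    for x, t in zip(patterns, targets):
        error = t - respond(x, weights)
        weights += 0.1 * error * x       # synaptic weight change only

print([respond(x, weights) for x in patterns])  # -> [1.0, 0.0]
```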

A few years ago, I started getting data from fruit flies (Drosophila) that were exactly the opposite of what my initial hypothesis predicted: the genes required for standard synaptic plasticity (such as the rutabaga adenylyl cyclase) were not required in our form of operant conditioning. In contrast, a gene which had previously been shown not to be involved in classical conditioning, protein kinase C (PKC), turned out to be crucial for operant conditioning. What made the whole story even more intriguing was that the same evidence started to show up in the lab where I did my postdoc, using the marine snail Aplysia as a model system: PKC was required, but the rut-cyclase was not.

Why had nobody discovered this dichotomy between the learning mechanisms before us? It turned out that the crucial experimental advance was to prevent the animals from learning about anything else besides their own behavior. As soon as we let the animals learn about any external cues in addition to their behavior, the results revert to the expected pattern: the canonical pathway is required and PKC is not. Obviously, nobody had previously been able to isolate operant conditioning to the extent that was required. Because all our experiments were operant in nature and differed only in whether the animals were able to learn about environmental cues, we called the PKC-dependent learning mechanism operant self-learning and the other, well-described form operant world-learning.

How far is this new form of plasticity (in Aplysia it is a form of ‘intrinsic plasticity’ modifying the entire neuron and not just the synapse; in Drosophila we don’t know) conserved? We are currently in the process of writing up our experiments on the ‘language gene’ FoxP2. Drosophila has an orthologue of this gene and if we mutate it (or knock it down with RNAi), we find that it is required for operant self-learning, but not for operant world-learning, paralleling the results we had for PKC. This means we now have a new learning mechanism at hand that is clearly distinct from the well-known synaptic plasticity pathway, but is equally conserved among invertebrates and vertebrates. These results suggest an ancient evolutionary origin for operant self-learning, possibly at the root of the bilaterian branch, and a complementary role to world-learning.

I have summarized these results in an invited review on the occasion of the 2010 conference of SQAB in the journal "Behavioural Processes". Unfortunately, there are a few mistakes in the copy available from the publisher. Some spaces are missing between words, and the references Brembs 2009a and Brembs 2009b are mixed up. I've notified the publisher, but they said it was too late to fix. I've now fixed the HTML version of my local copy, but I can't fix my PDF copy, as they use a font that is not freely available. So if anybody knows how I can fix my own PDF copy, please let me know!
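One avenue I might try (a sketch only, untested on this particular file, and the filenames are made up): PyMuPDF can white-out the two swapped citation labels and re-insert the corrected text. The replacement would be set in a stock font such as Helvetica rather than the publisher's proprietary one, but for two short labels that may be tolerable.

```python
import fitz  # PyMuPDF

doc = fitz.open("brembs_2011_local_copy.pdf")   # hypothetical filename
swaps = {"Brembs 2009a": "Brembs 2009b",
         "Brembs 2009b": "Brembs 2009a"}

for page in doc:
    # Both searches run against the original page content, so a true
    # swap works: the redactions are only applied afterwards.
    for old, new in swaps.items():
        for rect in page.search_for(old):
            page.add_redact_annot(rect, text=new, fontname="helv", fontsize=9)
    page.apply_redactions()

doc.save("brembs_2011_fixed.pdf")
```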


Brembs, B. (2011). Spontaneous decisions and operant conditioning in fruit flies. Behavioural Processes. DOI: 10.1016/j.beproc.2011.02.005

Posted on August 12, 2013 at 21:09 Comments Off on Flashback: The neurobiology of operant conditioning

SummerScienceVideo: science is real

In: blogarchives • Tags: fun, science, video

As part of my scheduled re-posts during the summer break, I'll also post some of the science videos from the archives. I originally posted this one on August 31, 2011:

A nice song with a really cool little video:
[Embedded video: They Might Be Giants - Science is Real (official TMBG video)]

Posted on August 11, 2013 at 13:24 Comments Off on SummerScienceVideo: science is real

12 year anniversary of angry letter to scientific journal editor

In: science politics • Tags: publishing

This year marks the 12th anniversary of the publication of this legendary letter to the editor in the Journal of Systems and Software:

A letter from the frustrated author of a journal paper

R. L. Glass

Computing Trends, 1416 Sare Road, Bloomington, IN 47401, USA

Available online 28 September 2000.

Editor's Note: It seems appropriate, in this issue of JSS containing the findings of our annual Top Scholars/Institutions study, to pay tribute to the persistent authors who make a journal like this, and a study like that, possible. In their honor, we dedicate the following humorous, anonymously-authored letter!

—

Dear Sir, Madame, or Other:

Enclosed is our latest version of Ms. #1996-02-22-RRRRR, that is the re-re-re-revised revision of our paper. Choke on it. We have again rewritten the entire manuscript from start to finish. We even changed the g-d-running head! Hopefully, we have suffered enough now to satisfy even you and the bloodthirsty reviewers.

I shall skip the usual point-by-point description of every single change we made in response to the critiques. After all, it is fairly clear that your anonymous reviewers are less interested in the details of scientific procedure than in working out their personality problems and sexual frustrations by seeking some kind of demented glee in the sadistic and arbitrary exercise of tyrannical power over hapless authors like ourselves who happen to fall into their clutches. We do understand that, in view of the misanthropic psychopaths you have on your editorial board, you need to keep sending them papers, for if they were not reviewing manuscripts they would probably be out mugging little old ladies or clubbing baby seals to death. Still, from this batch of reviewers, C was clearly the most hostile, and we request that you not ask him to review this revision. Indeed, we have mailed letter bombs to four or five people we suspected of being reviewer C, so if you send the manuscript back to them, the review process could be unduly delayed.

Some of the reviewers’ comments we could not do anything about. For example, if (as C suggested) several of my recent ancestors were indeed drawn from other species, it is too late to change that. Other suggestions were implemented, however, and the paper has been improved and benefited. Plus, you suggested that we shorten the manuscript by five pages, and we were able to accomplish this very effectively by altering the margins and printing the paper in a different font with a smaller typeface. We agree with you that the paper is much better this way.

One perplexing problem was dealing with suggestions 13–28 by reviewer B. As you may recall (that is, if you even bother reading the reviews before sending your decision letter), that reviewer listed 16 works that he/she felt we should cite in this paper. These were on a variety of different topics, none of which had any relevance to our work that we could see. Indeed, one was an essay on the Spanish–American war from a high school literary magazine. The only common thread was that all 16 were by the same author, presumably someone whom reviewer B greatly admires and feels should be more widely cited. To handle this, we have modified the Introduction and added, after the review of the relevant literature, a subsection entitled “Review of Irrelevant Literature” that discusses these articles and also duly addresses some of the more asinine suggestions from other reviewers.

We hope you will be pleased with this revision and will finally recognize how urgently deserving of publication this work is. If not, then you are an unscrupulous, depraved monster with no shred of human decency. You ought to be in a cage. May whatever heritage you come from be the butt of the next round of ethnic jokes. If you do accept it, however, we wish to thank you for your patience and wisdom throughout this process, and to express our appreciation for your scholarly insights. To repay you, we would be happy to review some manuscripts for you; please send us the next manuscript that any of these reviewers submits to this journal.

Assuming you accept this paper, we would also like to add a footnote acknowledging your help with this manuscript and to point out that we liked the paper much better the way we originally submitted it, but you held the editorial shotgun to our heads and forced us to chop, reshuffle, hedge, expand, shorten, and in general convert a meaty paper into stir-fried vegetables. We could not – or would not – have done it without your input.

—

Journal of Systems and Software, Volume 54, Issue 1, 30 September 2000, Page 1

Nothing really has changed in a dozen years, it seems.

Posted on August 8, 2013 at 16:09 Comments Off on 12 year anniversary of angry letter to scientific journal editor

Flashback: What can the spine teach us about learning?

In: blogarchives • Tags: conditioning, operant, response, spine, spontaneity, wolpaw

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from April 29, 2011:

Only very few laboratories in the world perform operant conditioning of spinal reflexes. In fact, a quick PubMed search reveals there is only a single lab which has published in this field in the last decade: the lab of Jonathan Wolpaw. Jonathan's review "What Can the Spinal Cord Teach Us about Learning and Memory?" in The Neuroscientist shows what neuroscience is missing out on by not investing more in this fascinating field.

Operant conditioning of spinal reflexes is probably the most controlled operant conditioning situation imaginable: reward the animal whenever it responds with a reflex magnitude above (for up-conditioning) or below (for down-conditioning) a certain threshold. This is done by triggering the reflex with a cuff electrode around the nerve and then measuring the amplitude of the reflex with electromyography (EMG):

[Figure: schematic of the H-reflex and the EMG recording]
The electrical stimulation via the cuff excites the muscle directly (the M signal in the EMG in the upper left corner) and, with a delay, indirectly via the H-reflex.

Below is an image of what that setup looks like when it’s implanted in a rat:
[Figure: the H-reflex setup implanted in a rat]
The rat is running around in its cage and receives a food reward whenever the H-reflex reaches the required amplitude.
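For the computationally inclined, the logic of the reward rule is simple enough to sketch in a few lines. This is a toy simulation of up-conditioning; all numbers are invented for illustration and are not the parameters used in the Wolpaw lab.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy up-conditioning: stimulate, measure the H-wave amplitude,
# reward whenever it exceeds the criterion. Invented numbers throughout.
CRITERION = 1.2      # required H-reflex amplitude (arbitrary units)
BASELINE = 1.0       # mean amplitude before conditioning

def trial(gain):
    """One stimulation trial: noisy H-wave amplitude at the current gain."""
    return BASELINE * gain + rng.normal(0.0, 0.1)

gain, rewards = 1.0, 0
for _ in range(1000):
    if trial(gain) >= CRITERION:   # reward criterion met
        rewards += 1
        gain += 0.001              # crude stand-in for slow synaptic change

print(f"rewarded {rewards}/1000 trials, final gain {gain:.2f}")
```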

Given that the textbook reflex (i.e., the spinal stretch reflex shown in the first image above) is monosynaptic, one would expect just this synapse to be modified after operant conditioning. However, this synaptic plasticity only contributes to one form of this learning, namely up-conditioning. Up-conditioning refers to the experiment where increased reflex amplitude is rewarded. In these experiments, the synaptic input from the primary Ia afferents (blue, in the first figure above) is increased, making the reflex amplitude larger. In down-conditioning, however, the synaptic input to the motor neuron is not altered; instead, the motor neuron itself (green) shows an increased firing threshold and reduced postsynaptic potentials, making the motor neuron less likely to fire (and hence the reflex amplitude smaller). In addition to these different forms of plasticity, correlates of the memory can be found throughout the spinal cord and even in the cortex. Some of these correlates appear to be compensatory modifications to other reflexes, preventing the increased amplitude of the conditioned reflex from making the animal limp. Still others are required for the maintenance of the memory, but do not seem to directly contribute to the memory trace itself. There are many more examples in the paper.

Taken together, the results presented in this review open up more questions than they answer and demonstrate that this is a promising research field, with plenty of low-hanging fruit still left and a large variety of basic neuroscientific lessons which are hard, if not impossible, to learn from other models.

What are you waiting for? Go and study operant conditioning of these reflexes already!


Wolpaw, J. (2010). What Can the Spinal Cord Teach Us about Learning and Memory? The Neuroscientist, 16(5), 532-549. DOI: 10.1177/1073858410368314

Posted on August 6, 2013 at 18:01 Comments Off on Flashback: What can the spine teach us about learning?

SummerScienceVideo: fruit fly research

In: blogarchives • Tags: behavior, dickinson, Drosophila, neuroscience, TED, video

As part of my scheduled re-posts during the summer break, I’ll also post some of the science videos from the archives. I originally posted these two on February 24, 2013:

The first one is a TED talk by Michael Dickinson on how flies fly:

[Embedded video: Michael Dickinson: How a fly flies]

and the second one is on recording from fly visual neurons during flight and non-flight. This one was done at Caltech, where Michael Dickinson used to work:

[Embedded video: Brain recording from flying fruit fly]

Posted on August 5, 2013 at 09:17 Comments Off on SummerScienceVideo: fruit fly research

In which potatoes in France are like high-ranking journals in science

In: blogarchives • Tags: impact factor, journal rank, open access, publishing

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I've migrated the blog to WordPress, I can actually schedule posts to appear when in fact I'm not even at the computer. I'm using this functionality to re-blog a few posts from the archives during the month of August while I'm away. This post is from April 26, 2010:

There are about 1.5 million scholarly articles published every year across all the sciences, spread over about 24,000 journals. Even if there were a single database or entry point providing access to all the literature, nobody would be able to keep up with everything being published in their field of work any more. Desperately looking for some clue as to which publications to select for in-depth reading and which to ignore, scientists began to rank the journals according to how often the articles in these journals were cited. This ranking got started around the 1960s, when the number of journals started to proliferate. Fast-forward to today: what began as a last-ditch effort to handle an overwhelming flood of scientific information is now a full-blown business. Journal ranking by citations is now done commercially by a multi-billion-dollar media corporation, Thomson Reuters. The journal rankings are sold to research institutions on a subscription basis ranging anywhere between approximately €30,000 and €300,000 (US$40,000-400,000) annually.

With increased visibility for the high-ranking journals came an increase in submitted contributions. The higher-ranking the journal, the more readers and contributors, and thus the more income for the publisher. And so the vicious cycle of scientific publishing evolved: more and more scientists want to publish in and read the high-ranking journals. Due to the high volume of submissions, the publishers of these journals are in a position to pick about 2-5% of the submitted articles for publication and reject the rest, increasing the prestige of these journals even more. Sometimes these rejections are accompanied by a recommendation to submit the work to one of the lower-ranked journals of the same publisher. Clearly, something has to be exceptionally 'good' to make it into a high-ranking journal (or, as some claim, have the potential to increase the journal's rank). After a few cycles, it became difficult to distinguish whether a scientific finding was so 'good' that it made it into the high-ranking journals, or whether it had to be good because it was published there. Indeed, for some aspects of scientific life such as promotions, hiring, grant proposals or other sorts of evaluations, this question wasn't even asked any more. Publication quality became synonymous with journal rank. Today, where a scientist has published is often more important than what was published. In all human life, scarcity and branding are two powerful factors in determining value, as any economist can surely attest. Scientists are human beings, and journal rank is but one example of just how prevalent the human factor is in the scientific enterprise. Today, the future of a professional scientist is all too often dominated by the economics of scarcity and branding, rather than by science.

What does all that have to do with potatoes in France?

After a discussion about potatoes over lunch the other day, I stumbled across this beautiful tale, published in 1956 in the American Potato Journal on how the potato arrived in France in the 18th century:

This endorsement of the potato and that of the various potato dishes served at the King’s table were enhanced by placing a uniformed guard on Parmentier’s potato plot. Parmentier’s considerate removal of the guard at night during the harvest season is reported to have furthered the success of the potato with the King’s subjects.

This story so reminded me of scientific publishing. Wikipedia puts the story a little more bluntly:

Parmentier therefore began a series of publicity stunts for which he remains notable today, hosting dinners at which potato dishes featured prominently and guests included luminaries such as Benjamin Franklin and Antoine Lavoisier, giving bouquets of potato blossoms to the King and Queen, and surrounding his potato patch at Sablons with armed guards to suggest valuable goods — then instructed them to accept any and all bribes from civilians and withdrawing them at night so the greedy crowd could “steal” the potatoes.

Now I wouldn’t know anything about bribes, but the part about creating artificial scarcity and a brand name to increase value for an ordinary object rang familiar.

In a recent 'Opinion' article in one of the journals at the very top of the rank, Nature, the author correctly points out that this system of journal rank has many flaws and should be replaced by a more scientific system for the metric evaluation of science. She specifically calls for social scientists and economists to be involved in developing this new system, underscoring the points above. Indeed, it is remarkable that our current journal rank system is still in place. After all, not only do the author and many scientists agree, but the originators of the journal rank system, the high-ranking journals themselves and even some evaluators have all long realized that using journal rank to evaluate individual researchers is both "unfair and unscholarly". I have lamented this absurd state of affairs plenty of times right here and elsewhere.

However, artificial scarcity and brand name have, by now, developed such a powerful dynamic, fueled by billions in taxpayer money and a rich history of great scientific traditions, that it seems unstoppable, even if all participating parties agree that putting an end to it would be better for science.

It is with these powerful dynamics (and some analogous evolutionary dynamics) in mind that I posted an off-hand comment on the 'Opinion' article mentioned above. The comment stated that any system, even the most complex and scientifically tested one, will eventually succumb to social dynamics, as the scientific community adapts to the system and each participant maximizes their benefit while minimizing their costs. The only system that would be immune to such dynamics is one where the rules change more quickly than the social dynamics can follow:

Wouldn't it be nice if metrics weren't needed? However, despite all the justified objections to bibliometrics, unless we do something drastic to reduce research output to an amount manageable in the traditional way, we will not have any other choice than to use them. However, as the commenters before already mentioned, no matter how complex and sophisticated, any system is liable to gaming. Therefore, even in an ideal world where we had the most comprehensive and advanced system for reputation building and automated assessment of the huge scientific enterprise in all its diversity, wouldn't the evolutionary dynamics engaged by the selection pressures within such systems demand that we keep randomly shuffling the weights and rules of these future metrics faster than the population can adapt?

This comment was published as a ‘Correspondence’ piece in the printed version of Nature. Coincidentally, the current LaborJournal contains a letter from me, which states pretty much the same thing, with some additional information.
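To make the shuffling argument concrete, here is a toy simulation (my own construction, not part of the published correspondence): agents gradually shift their effort toward whichever metric currently carries the most weight, and periodic reshuffling of the weights keeps the gamed scores pinned near chance.

```python
import numpy as np

rng = np.random.default_rng(42)
N_METRICS, N_AGENTS, N_ROUNDS = 5, 100, 200

weights = rng.dirichlet(np.ones(N_METRICS))   # current evaluation weights
# Each agent starts with effort spread evenly across all metrics.
effort = np.full((N_AGENTS, N_METRICS), 1.0 / N_METRICS)

for rnd in range(N_ROUNDS):
    # Gaming: agents slowly shift effort toward the best-weighted metric.
    best = np.argmax(weights)
    effort *= 0.99
    effort[:, best] += 0.01

    # Rule change: reshuffle the weights faster than agents can adapt.
    if rnd % 10 == 0:
        weights = rng.dirichlet(np.ones(N_METRICS))

# With reshuffling, the mean score stays near chance (1/N_METRICS);
# delete the reshuffle block and it climbs toward the single best weight.
print("mean gamed score:", (effort @ weights).mean())
```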


Hougas, R. (1956). Foreign potatoes, their introduction and importance. American Potato Journal, 33(6), 190-198. DOI: 10.1007/BF02879217
Lane, J. (2010). Let's make science metrics more scientific. Nature, 464(7288), 488-489. DOI: 10.1038/464488a

Posted on August 2, 2013 at 18:21 Comments Off on In which potatoes in France are like high-ranking journals in science

Flashback: ‘stimulus-response’ concept based on artifacts?

In: blogarchives • Tags: action, brain, response, spontaneity

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I've migrated the blog to WordPress, I can actually schedule posts to appear when in fact I'm not even at the computer. I'm using this functionality to re-blog a few posts from the archives during the month of August while I'm away. This post is from June 13, 2011:

Most neuroscientists would subscribe to the sensorimotor hypothesis, according to which brains mainly evaluate sensory input to compute motor output. For instance, Mike Mauk wrote, now over ten years ago: "brain function is ultimately best understood in terms of input/output transformations and how they are produced" [1]. Tony Dickinson recognized as early as 1985 that "Indeed, so pervasive is the basic assumption of this model that it is common to refer to any behaviour as a 'response' and thus by implication […] assume that there must be an eliciting stimulus." [2]. Textbooks to this day mostly begin with a graph showing sensory input entering the brain (usually via the eyes) and then motor output leaving it.

However, more and more evidence is now accumulating that, to the extent that these stimulus-response relationships actually exist, they may be the exception rather than the rule of what brains are doing when they are not in a laboratory experiment. Most recently, this change in perception has also begun in the area of human brain research. Marcus Raichle's "Two views of brain function" [3] provides plenty of evidence against the sensorimotor hypothesis. There are many more examples of this kind of evidence. For me personally, the most eye-opening one was this famous video by Ken Catania:
[Embedded video: Tentacled snake in action]

If all behavior were always organized according to stimulus-response schemes such as the C-start response in fish, animals would be extremely vulnerable not only to predators (or prey), but of course also to competitors. Evolution is a competitive business: if you’re too predictable, you lose.

In our labs, reproducibility is key to success. This is precisely the reason why escape responses are so well studied: these are the exceptions, where animals have specialized in speed and sacrificed unpredictability in an evolutionary trade-off. I would hypothesize that no species would survive for long if all its other behaviors sacrificed unpredictability in this way.
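A back-of-the-envelope simulation (again my construction, not data) makes the point: against a predator that simply attacks the prey's historically most frequent escape side, a stereotyped prey is caught every time, while a coin-flipping prey holds the predator to chance.

```python
import random

random.seed(0)
TRIALS = 10_000

def survival(prey_strategy):
    """Fraction of encounters survived against a predator that always
    attacks the escape side the prey has used most often so far."""
    counts = {"L": 0, "R": 0}
    survived = 0
    for _ in range(TRIALS):
        predator = "L" if counts["L"] >= counts["R"] else "R"
        prey = prey_strategy()
        counts[prey] += 1
        if prey != predator:
            survived += 1
    return survived / TRIALS

print("stereotyped prey:", survival(lambda: "L"))                    # -> 0.0
print("unpredictable prey:", survival(lambda: random.choice("LR")))  # -> ~0.5
```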

More likely, brains need to balance input-output processing with output-input processing, with the latter probably being both the more prevalent and the ancestral form of behavioral control. It is this delicate balance that brains must constantly strike to survive, procreate and be successful. If we want to understand what the brains we study are really doing when they are not in the lab, we need to take a step back and design more experiments that don’t require a response, but an action. This process has already started, but the realization that we have been heading down the stimulus-response direction for too long has not widely set in yet, IMHO.

The stimulus-response approach has been hugely successful for the most derived and simplified forms of behavior – and we’re still far from done with the task. Now comes the vastly more complex task of understanding how brains decide which action to take next, when there is no simple stimulus providing unambiguous information. Many labs have already started to embark on this task. In our lab, we study animals in the complete absence of discrete sensory stimulation, in order to find out how brains create “something out of nothing”. Which actions are you studying?

This post was originally written for the launch of the new social network for neuroscientists, NeurOnline (@SfN).


[1] Mauk, M. (2000). The potential effectiveness of simulations versus phenomenological models Nature Neuroscience, 3 (7), 649-651 DOI: 10.1038/76606
[2] Dickinson, A. (1985). Actions and Habits: The Development of Behavioural Autonomy Philosophical Transactions of the Royal Society B: Biological Sciences, 308 (1135), 67-78 DOI: 10.1098/rstb.1985.0010
[3] Raichle, M. (2010). Two views of brain function Trends in Cognitive Sciences, 14 (4), 180-190 DOI: 10.1016/j.tics.2010.01.008

Posted on August 1, 2013 at 15:50 Comments Off on Flashback: 'stimulus-response' concept based on artifacts?

Solutions to the looming crisis in science

In: science politics • Tags: journal rank, libraries, open access, publishing, SciELO

This post was originally published on the London School of Economics “Impact of Social Sciences” blog, on July 30, 2013:

In various fields of scholarship, scholars accrue reputation via the proxy of the containers they publish their articles in. In most, if not all, fields, scholarly journals are ranked in a hierarchy of prestige. This prestige is reflected in the rejection rates of the individual journals: the more prestigious a journal, the more authors desire to publish in it, and the higher the rejection rate.

However, much like the potential patrons lining up in front of a new – but perhaps empty – club, or the peasants stealing potatoes from Parmentier's fields at Sablons, only a few scholars are asking whether there are any other indicators of quality that correlate with journal rank, besides perceived or manufactured exclusivity. A recent overview of the work of these few scholars revealed that very few measures show any correlation at all: the methodology is no more sound in higher-ranking journals, nor do their papers fare better in replication tests. A few measures, such as crystallographic quality, effect-size accuracy or sample size, have been reported to correlate negatively with journal rank, while the literature does not contain a single quality-related measure that correlates positively with journal rank.

In the light of such data, it is perhaps not surprising that one of the strongest correlates of journal rank is retractions: high-ranking journals publish research that is a lot less reliable than that in other journals. The combination of prestige attracting surprising and counterintuitive discoveries, merely average (perhaps even sub-par) quality, and increased readership and hence scrutiny is a recipe for disaster. The data in plain words: scientific top journals are like tabloids, widely read but not necessarily trustworthy.

These data provide empirical evidence to support the hypothesis that the incentive structure in science can be blamed for many of the alarming trends of the past decades, such as the exponentially increasing retraction rates, ever larger and more frequent cases of fraud, and the replicability crisis. Apparently, the current system of journal prestige favors scientists who are effective at marketing their research to prestigious journals, but not those whose research is reliable. In addition, the prestige bestowed by journal rank is what prevents authors from seeking alternative publishing venues (they would risk their careers if they did), contributing to the serials crisis of legacy publishers charging exorbitantly for subscriptions.

Figure 1: Current Trends in the Reliability of Science

(A) Exponential fit for PubMed retraction notices (data from pmretract.heroku.com). (B) Relationship between year of publication and individual study effect size. Data are taken from Munafò et al. (2007), and represent candidate gene studies of the association between DRD2 genotype and alcoholism. The effect size (y-axis) represents the individual study effect size (odds ratio; OR), on a log-scale. This is plotted against the year of publication of the study (x-axis). The size of the circle is proportional to the IF of the journal the individual study was published in. Effect size is significantly negatively correlated with year of publication. (C) Relationship between IF and the extent to which an individual study overestimates the likely true effect. Data are taken from Munafò et al. (2009), and represent candidate gene studies of a number of gene-phenotype associations of psychiatric phenotypes. The bias score (y-axis) represents the effect size of the individual study divided by the pooled effect size estimated by meta-analysis, on a log-scale. Therefore, a value greater than zero indicates that the study provided an over-estimate of the likely true effect size. This is plotted against the IF of the journal the study was published in (x-axis), on a log-scale. The size of the circle is proportional to the sample size of the individual study. Bias score is significantly positively correlated with IF, sample size significantly negatively. (D) Linear regression with confidence intervals between IF and Fang and Casadevall's Retraction Index (data provided by Fang and Casadevall, 2011).
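For anyone who wants to reproduce the kind of exponential fit shown in panel (A), here is a minimal sketch; the yearly retraction counts below are invented placeholders standing in for the actual pmretract.heroku.com data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented yearly PubMed retraction counts (placeholders for the
# pmretract.heroku.com data used in panel A).
years = np.arange(1990, 2012)
counts = np.array([  8,  10,   9,  14,  13,  18,  22,  21,  30,  35,  41,
                    52,  60,  75,  88, 110, 135, 170, 210, 260, 330, 400])

def exponential(t, a, r):
    """Exponential growth from the first year in the series."""
    return a * np.exp(r * (t - years[0]))

(a, r), _ = curve_fit(exponential, years, counts, p0=(10.0, 0.1))
print(f"growth rate: {r:.3f}/year, doubling time: {np.log(2) / r:.1f} years")
```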


Thus, the picture emerges that by reforming the scholarly publishing structure, we can solve several current issues at once:

  1. We can reward reproducible discoveries, rather than publications, increasing scientific reliability.
  2. We can save lives and boost the world's economies by providing universal open access to all of the scholarly literature.
  3. We can save billions every year in subscription costs.

What do we need to do to accomplish this? One solution is to use the already existing infrastructure and know-how in our scholarly institutions: collectively, our libraries and computing centers have all it takes to serve the world not only the two million new publications every year, but also all the literature from the past. There is no reason why publishers should be required for this task any more: libraries around the world are already publishing the works of their faculty. SciELO (the Scientific Electronic Library Online), originally from Brazil but now used in most Latin American countries and in South Africa, is a cooperative electronic publishing system which now publishes over 900 open access journals at a cost of just US$90 per article. Compare these US$90 to the US$4,000 per article we currently pay on average for legacy publishing, or to the US$30,000-40,000 for articles in the most prestigious journals. As mentioned above, the evidence suggests that this gigantic surplus is not buying any added value. With the total yearly revenue of legacy publishing at around US$10 billion, scholars are wasting approximately US$9.8 billion every year compared to an institution-based publishing system like SciELO, and even more if we used preprint systems such as arXiv.
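The arithmetic behind that US$9.8 billion figure is simple enough to spell out, using the same numbers as in the paragraph above:

```python
# Back-of-the-envelope check of the savings estimate above.
articles_per_year = 2_000_000    # new scholarly publications per year
legacy_revenue = 10e9            # ~US$10 billion legacy publishing revenue
scielo_cost_per_article = 90     # US$ per article in SciELO

scielo_total = articles_per_year * scielo_cost_per_article  # US$180 million
savings = legacy_revenue - scielo_total
print(f"annual savings: US${savings / 1e9:.1f} billion")    # -> US$9.8 billion
```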

The success of SciELO demonstrates that such an alternative is technically feasible, scalable, significantly cheaper than our current model, and would provide universal open access to the whole planet if the rest of the world adopted it. Thus, universal adoption of SciELO alone would cover points 2 and 3. Obviously, this is only one of several possible solutions, but it is also an example of how technically easy it would be to reform scholarly publishing effectively.

The remaining first point can be addressed by identifying alternative metrics by which to accrue reputation. Once we have full access to all the literature, we can use the scientific method not only for the identification of such metrics, but also to test how to ideally design a reputation system that aligns the incentives of each individual researcher with those of science and the public. Once we have taken back control over the fruits of our labor (literature, data and software), we can reward scientists who excel at designing software that evaluates data, who are unsurpassed at collecting and curating large amounts of data, who shine with their talent for experimental design and consequently generate reproducible research, but who are not necessarily the best salesmen. We can reward the crucial personnel without whom the scientific enterprise could not exist, but who currently fall through the cracks.

These ideas demonstrate that by overcoming journal rank and replacing it with a scientific reputation system as part of an institution-based publishing service for scholarly literature, software and data, we could collectively free up more than US$9 billion every year for science and innovation. By further delaying publishing reform, we not only keep wasting taxpayer money, we also continue to reward salesmen who may possibly also be great scientists (if we are lucky) and to punish excellent scientists who are not extraordinary marketers. It does not take an evolutionary biologist to predict what this sort of selection will do to the scientific community within only a few generations.

Posted on July 31, 2013 at 10:48 Comments Off on Solutions to the looming crisis in science

Publisher selects the best open access science – authors complain

In: science politics • Tags: impact factor, journal rank, libraries, open access, publishing

You'd be forgiven if, after reading the title of this post, you thought scholars had started to revolt against journal rank. Unfortunately, while there is DORA, and of course the evidence that journal rank is like homeopathy, most researchers are still fine with ex-scientists rejecting 92% of all submitted articles and charging a grand sum of more than US$30,000 per article for those they select on a whim. Never mind the unnecessary delay to science and the agony not only of the rejected authors, but also of the reviewers who need to re-review the papers as they trickle down the hierarchy. No, hardly anybody thinks 30k per article is too much just because some ex-scientist is doing the selecting. Regular journals publish at costs of "a few hundred dollars", so the only justification for the 30k price tag is the selection.

Essentially, almost everybody’s fine with paying 30k per article for a selection process devoid of any evidence that it actually improves anything.

What some authors and colleagues are upset about is that a new publisher, Apple Academic Press, is selling selected open access papers as books – for around US$100 a pop. Granted, there is a lot to criticize about this particular publisher's stunt: they repackaged a selection of open access (CC-BY licensed) articles without the knowledge of the authors, changed some titles, hid the original publisher in the acknowledgment sections of the papers, and obviously didn't send the authors complimentary copies. There is no question that this is some really horrible style. However, compared to $30k an article, or several hundred each year for a subscription, $100 for a whole book seems like a fair deal. Given that these books seem to cover highly specialized areas, featuring articles that are available for free, one wouldn't expect them to sell much, so the actual cost per article in each book is minuscule. So essentially, the $100 is only for someone doing the selecting, which is precisely what the ex-scientists at the GlamMagz do, to the detriment of science as well as researchers, and at much, much higher financial cost to boot.

Balancing what 'traditional' publishers do to science and scientists on a daily basis against the listed shortcomings of this new strategy by Apple AP, I can't help but wonder why people get so upset. Of course, authors ought to be notified, but if we got upset about everything publishers ought to do, nobody would ever calm down again. Of course, the publisher should make clear that the papers have been published before, but that info is in the acknowledgments. Moreover, the price is a real bargain compared to other 'selection services', and you don't even have to pay it as long as you only want the list of titles. Anybody who gets upset about such bad style ought to be ragingly mad at what GlamMag publishers do.

Clearly, to emphasize it again, this new procedure leaves much room for improvement. However, in principle, this is exactly what we want: someone serving as a filter, selecting the most relevant discoveries – but after publication, not before, when it slows everything down and clogs the system. With more and more of these services, they can compete with each other and develop track records that can be compared. There will be some that cater to the public, others to clinicians or other professional sections of the public, and again others that cater to scientists. This is precisely the kind of re-use CC-BY had in mind. Apple AP obviously won't win a prize for the most innovative post-publication review system, but it is the beginning of what open access advocates have been proposing for almost two decades now: open access and wide commercial re-use of publicly funded research to the benefit of the public. Imagine if some whiz kid picked up one of these books in a public library or some other place where it was lying around and developed a new cancer diagnostic!

For sure, nobody needs to be enthusiastic about this particular instance, but every open access proponent should embrace, not condemn, these developments, if only for the potential they carry.

UPDATE: after some Twitter discussion, here's a suggestion for how the execution of this service might be improved: the publisher could just offer the list of titles for sale, with a print-on-demand option. If their selection is worth anything, they should find customers willing to pay for it.

Posted on July 24, 2013 at 21:50 6 Comments