bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


A study justifying waste of tax-funds?

In: science politics • Tags: open access, publishing

The Max Planck Society, an Open Access (OA) pioneer and a founding member and sponsor of the OA journal eLife, has just released a white paper (PDF) analyzing open access costs in various countries and institutions and comparing them to subscription costs. Such studies are fundamental prerequisites for evidence-based policies and informed decisions on how to proceed with badly needed reforms. The authors confirm the currently most often cited ballpark figures: a world-wide annual academic publishing volume of around US$10b, averaging around US$5,000 for each of the approximately 2 million papers published every year. Confirmation from sources other than those usually cited is very valuable and solidifies our knowledge of the funds available to the system.

The authors detail that various institutions in various countries spend significantly less on article processing charges (APCs) for publishing in open access journals, around US$2,000-3,000 per article, than they currently spend on subscriptions. They conclude from these data that a conversion from the subscription model to an author-pays model would be at least cost-neutral (if not a significant cost-saver) and would keep the publishing industry alive.

I find these statements quite startling for a number of reasons:

  1. Over 15 years ago, the US government (via the NIH) helped Brazil develop an incredibly successful publishing model, SciELO. It has since spread, with many other countries all over the globe joining. In its now roughly 900 journals, SciELO publishes peer-reviewed papers, fully open access, at an average cost of US$90 per article. Recently, these figures have been corroborated by numbers from the NIH’s open access repository PubMedCentral, where such costs come to around US$50 per article. Thus, publishing fully open access with all the features known from commercial publishers clocks in at below US$100 per article. This we already knew before this study. Why was a study needed to show that we can also get such universal open access for up to 100 times the price of PMC/SciELO? Is the survival of the publishing industry really worth up to US$9.9b in subsidies every year? What value do publishers add that could possibly be worth an annual bill of 9.9 billion in virtually any currency?
  2. The authors emphasize that “Whether calculated as mean or median, however, the average APC index will never be dictated by the high-end values.” This may be financially relevant for the tax-payer in the short term, but in the long term the tax-payer will also be interested in whether the science they fund is reliable: is publicly funded science good bang for the buck? If we converted to this ‘gold’ OA model and left everything else virtually unchanged, the situation for the reliability, and hence credibility, of publicly funded science would be even worse than it is today. As outlined in detail elsewhere, high-ranking journals argue that their APCs will come to around US$50,000 per article. While this may indeed not change the average cost to the taxpayer, given the more than 30,000 journals currently in existence, it does mean that in addition to knowing the professional editor and, if needed, faking your data, you would also have to be rich (or work at a rich institution) in order to publish in a venue that helps secure a job in science. Given that these journals publish the least reliable science, this is the one scenario I can imagine that would be even worse for science than the status quo.
  3. The authors also do not mention that the large majority of open access journals (including the Max Planck Society’s very own eLife) do not charge any APCs at all (an issue already raised by Peter Suber). It is not clear from the study whether articles published in these journals have been counted at all. If not, the study’s figures overestimate the actual costs by a significant factor.
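The arithmetic behind these figures is simple enough to verify; a minimal back-of-the-envelope sketch using only the round numbers quoted above:

```python
# Back-of-the-envelope check of the cost figures quoted in the post.
total_spend = 10e9       # world-wide annual academic publishing volume, US$
papers_per_year = 2e6    # roughly 2 million papers published every year

avg_cost = total_spend / papers_per_year
print(avg_cost)          # 5000.0 -> the ~US$5,000 per-article average

pmc_cost = 50            # approximate per-article cost at PubMedCentral, US$

# Implied annual subsidy if the whole volume were published at PMC-like cost:
subsidy = total_spend - papers_per_year * pmc_cost
print(subsidy / 1e9)     # 9.9 -> US$9.9b per year
```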

Thus, as I see it, this is a study that at best serves no real purpose and at worst constitutes a disservice to science by suggesting that such a transition would even be desirable, when it clearly is not. I have asked one of the co-authors of the study, Kai Geschuhn, to comment on my criticisms. You can find her reply below; I’ll leave it uncommented:

Like it or not, offsetting subscription costs against publication fees still isn’t the common understanding of how to finance open access. With this study, we didn’t want to raise the question whether scientific publishing should cost US$50, US$100 or US$5,000 per article. The aim rather was to show that the transition to open access is feasible already. The figures presented in the paper relate current subscription costs to scientific article outputs on different levels (global, national, and institutional) in order to show that there is enough money in the system to finance all of these articles. While this is obvious to you, it is often not to libraries, which usually expect the open access transition to become even more expensive. This misconception is mostly due to the assumption that the total number of publications from an institution or a country would have to be financed. We suggest calculating with articles from corresponding authors only, which usually leads to a reduction of up to 50% of the total amount.
After ten years of debate, we finally need to agree upon a realizable first step. We believe that offsetting budgets actually is key to this so we have to start the calculation.

Posted on April 29, 2015 at 17:14 17 Comments

What should a modern scientific infrastructure look like?

In: science politics • Tags: infrastructure, open science, peer-review, publishing

For ages I have been planning to collect some of the main aspects I would like to see improved in an upgrade to the disaster we so euphemistically call an academic publishing system. In this post I’ll try to briefly sketch some of the main issues, from several different perspectives.

As a reader:

I’d like to get a newspaper each morning that tells me about the latest developments, both in terms of general science news (a.k.a. gossip) and inside my scientific fields of interest. For the past 5+ years, my paper.li has been doing a pretty decent job at collecting the gossip, but for the primary literature relevant to my field, such a technology is sorely missing. I’d like to know which papers my colleagues are reading, citing and recommending the most. Such a service would also learn from what I click on, what I recommend and what I cite, to assist me in my choices. Some of these aspects are starting to be addressed by companies such as F1000 or Google Scholar, but there is no comprehensive service that covers all the literature with all the bells and whistles in a single place. We have started to address this by developing an open source RSS reader (a feedly clone) with plug-in functionality to allow for all the different features, but development has been stalled for a while now. So far, the alpha version can sort and filter feeds according to certain keywords and display a page with the most tweeted links, so it’s already better than feedly in that respect, but it is still alpha software. All of the functionalities I want have already been developed somewhere, so we’d only need to leverage them for the scientific literature.
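The core of such keyword filtering and tweet-count ranking is not complicated. A hypothetical sketch of the idea, with illustrative data structures and names that are not taken from the actual alpha software:

```python
# Hypothetical sketch of the feed filtering/sorting described above;
# entries and keywords are illustrative, not the actual software's data.
feed = [
    {"title": "Operant learning in Drosophila", "tweets": 42},
    {"title": "New CRISPR variant", "tweets": 17},
    {"title": "Motor variability predicts learning", "tweets": 99},
]

keywords = {"drosophila", "operant", "variability"}

def relevant(entry):
    """Keep an entry if any keyword occurs in its title."""
    title = entry["title"].lower()
    return any(kw in title for kw in keywords)

# Filter by keyword, then rank by how often a link was tweeted.
reading_list = sorted(filter(relevant, feed),
                      key=lambda e: e["tweets"], reverse=True)
for entry in reading_list:
    print(entry["title"])
```

A real service would additionally learn the keyword weights from what the reader clicks, recommends and cites.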

In such a learning service, it would also be of lesser importance whether work was traditionally peer-reviewed or not: I could simply adjust for which areas I’d like to see only peer-reviewed research and which publications are close enough that I want to see them before peer-review – I might want to review them myself. In this case, peer-review is as important as I, as a reader, want to make it. Further diminishing the role of traditional peer-review are additional layers of selection and filtering I could implement. For instance, I would be able to select fields where I only want recommended literature to be shown, or cited literature, or only reviews, not primary research. And so forth: there would be many layers of filtering/sorting which I could use flexibly to see only relevant research for breakfast.

I admit it, I’m a fan of Lens. It is an excellent example of how scientific content should be displayed on a screen. With a modern infrastructure, we would get to choose how we would like to read; Lens would not be the only option besides emulating paper. Especially when thorough reading and critical thinking are required, such as during the review of manuscripts or grant proposals, ease of reading and navigating the document is key to an efficient review process. In the environment we should already have today, reviewers would be able to pick whichever way of thoroughly fine-combing a document is most efficient for them.

We would also be able to click on “experiments were performed as previously described” and then directly read the exact descriptions of how these experiments were done, because we would finally have implemented a technology from 1968: hyperlinks. Fully implementing hyperlinks would also provide the possibility of annotating the literature: such annotations, placed while reading, could later be used as anchors for citations. Obviously, we’d be using a citation typology in order to make the kind of citation we intended (e.g. affirmative or dismissive, etc.) machine-readable.
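Such a machine-readable citation typology could be as simple as a typed record. The class and field names below are made up for illustration (existing vocabularies such as the Citation Typing Ontology, CiTO, define richer type sets), and the DOIs are dummies:

```python
from dataclasses import dataclass

@dataclass
class TypedCitation:
    # All names here are illustrative, not an actual standard.
    citing_doi: str
    cited_doi: str
    anchor: str         # annotation in the cited text the citation points at
    citation_type: str  # e.g. "affirms", "disputes", "usesMethodIn"

c = TypedCitation(
    citing_doi="10.1234/dummy.2015.001",
    cited_doi="10.1234/dummy.2010.042",
    anchor="experiments were performed as previously described",
    citation_type="usesMethodIn",
)
print(c.citation_type)  # usesMethodIn
```

With a typed record like this, a reader's filter could, for instance, show only affirmative citations of a method section.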

Of course, I would also be able to double-click on any figure to have a look at other aspects of the data, e.g. different intervals, different intersections, different sub-plots. I’d be able to use the raw data associated with the publication to plot virtually any graph from the data, not just those the authors offer me as a static image, as is the case today. How can this be done? This brings me to the next aspect:

As an author:

As an author, I want my data to be taken care of by my institution: I want to install their client to make sure every piece of data I put on my ‘data’ drive is automatically placed in a data repository with unique identifiers. The default setting for my repository may be open with a CC0 license, or set manually to any level of secrecy I am allowed, or intend, to keep. The same ought to be a matter of course for the software we write. In this day and age, institutions should provide an infrastructure that makes version-controlled software development and publishing seamless and effortless. And yet we, the scientists, have to ask our funders for money to implement such technology. Likewise for authoring: we need online authoring tools that can handle and version-control documents edited simultaneously by multiple authors, including drag-and-drop reference managing. GDocs have been around for a decade if not more, and FidusWriter or Authorea are pioneering this field for scientific writing, but we should already have this at our institutions by default today (with local copies, obviously).
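A minimal sketch of what such an institutional ‘data drive’ client might do behind the scenes; every name, path and identifier scheme below is hypothetical:

```python
import hashlib
from pathlib import Path

DATA_DIR = Path("data_demo")     # stands in for the watched 'data' drive
DEFAULT_LICENSE = "CC0-1.0"      # default repository setting: open, CC0

# Create a sample file so the sketch is self-contained.
DATA_DIR.mkdir(exist_ok=True)
(DATA_DIR / "trial1.csv").write_text("fly,choice\n1,left\n")

def deposit(path: Path) -> dict:
    """Register one file: a content-derived identifier plus metadata."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "file": str(path),
        "identifier": f"id:demo/{digest[:12]}",  # placeholder ID scheme
        "license": DEFAULT_LICENSE,
    }

records = [deposit(p) for p in sorted(DATA_DIR.rglob("*")) if p.is_file()]
print(records[0]["license"])  # CC0-1.0
```

A real client would of course register identifiers with a proper resolver and honor per-dataset secrecy settings instead of a single default.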

If we had such a GitHub-like infrastructure, a figshare/DropBox combo that took care of our data and an Authorea/FidusWriter authoring environment, we could routinely do what we have done as a proof of principle in our latest paper: when you write the paper, you no longer have to design any actual figures. The authors just insert the code that calls the software to evaluate the linked, open data. This not only allows readers to generate their own figures from different perspectives on our data (as in Fig. 3 of our paper), they can also download all the code and data without asking us and without us having to jump through any extra hoops to make our code/data available – it all happens on the invisible back-end. Had we been able to use Authorea/FidusWriter, submission would even have been just a single click. I get furious every time I estimate the amount of time and effort I could save if this infrastructure were in place today, as it should be.

Another thing one could do with such an infrastructure would be to open up certain datasets (and hence figures) to contributions from other labs, e.g. to let others compare their own results with yours. We demonstrated this “hey look what we found, how does that look for you?” kind of functionality in Fig. 4.

More or less automated semantic tagging would allow us to leverage the full potential of semantic web technology in order to facilitate some of the features I haven’t yet been able to imagine.

As a reviewer:

A reviewer is, quite obviously, a special kind of reader. As such, all the above-mentioned features would also benefit the reviewer. However, there is one feature specific to the reviewer: direct, if need be anonymized, discussions with the author of a manuscript or proposal under review. Of course, this discussion would be made available with the final version of the paper, where appropriate. In this discussion, the reviewers (invited, suggested and voluntary) and authors would be working on a fully annotated version of the manuscript, significantly reducing the time required for reviewing and revising manuscripts. Editors would only ever come in to help solve points of contention that cannot be resolved by reviewers/authors themselves. Some publishers already implement such discussions to some extent, but none that I know of use an authoring environment, as would be the rational solution.

As an evaluator:

There is no way around reading publications in order to evaluate the work of scientists. There are no shortcuts and no substitutes. Reading publications is a necessary condition, a conditio sine qua non, for any such evaluation. However, it is a sufficient condition only in the best of all worlds. Only in a world without bias, misogyny, nepotism, greed, envy and other human shortcomings would reading publications be sufficient for evaluating scientific work. Unfortunately, some may say, scientists are humans and not immune to human shortcomings. Therefore (and because the genie is out of the bottle), we need to supplement expert judgment with other criteria. These criteria, of course, need to be vetted by the scientific method. The current method of ranking journals and then ranking scientists according to where they know the editors of such journals is both anti-scientific and counter-productive.

If we had the fully functional infrastructure possible with today’s technology, we’d be able to collect data on each scientist’s productivity (data, code, publications, reviews), popularity (downloads, media presence, citations, recommendations), teaching (hours, topics, teaching material) and service (committees, administration, development). To the extent that this is (semi-)automatically possible, one could even collect data about the methodological soundness of the research. If we, as a scientific community, hadn’t spent the last 20 years in a digital cave, we’d be discussing the ethics of collecting such data, how these data are or are not correlated with one another, the degree to which some of these data predict future research success, and other such matters – and not how we might one day actually arrive in the 21st century.

—

All of the functionalities mentioned above (and many more I haven’t mentioned) are already being tried here and there to various degrees and in various combinations. However, as standalone products none of them are really going to ever be more than just interesting ideas, proofs of concept and demonstrations. What is required is an integrated, federated backbone infrastructure with a central set of standards, into which such functionalities can be incorporated as plug-ins (or ‘apps’). What we need for this infrastructure is a set of open, evolvable rules or standards, analogous to TCP/IP, HTTP and HTML, which can be used to leverage key technologies for the entire community at the point of development – and not after decades of struggle against corporate interests, legal constraints or mere incompetence.

It is also clear that this infrastructure needs to be built robustly. Such a core infrastructure cannot rely on project funds or depend on the whims of individual organizations, even if they be governments. In fact, the ideal case would be a solution analogous to BitTorrent technology: a world-wide, shared infrastructure where 100% of the content remains accessible and operational even when 2/3 of the nodes go offline. Besides protecting scholarly knowledge against funding issues and the fickleness of individual organizations, such a back-end design could also protect against catastrophic regional devastations, be they natural or human-made.
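A rough way to see why such redundancy is achievable: if each item is replicated on k independently chosen nodes and 2/3 of all nodes go offline, the chance that every copy of that item sits on an offline node is about (2/3)^k. The sketch below makes that simplifying independence assumption:

```python
# Probability that ALL replicas of an item sit on offline nodes, assuming
# 2/3 of nodes are offline and replicas are placed independently at random
# (a deliberate simplification for illustration).
def p_item_lost(replicas, offline_fraction=2/3):
    return offline_fraction ** replicas

for k in (3, 6, 12):
    print(k, round(p_item_lost(k), 4))
# A dozen replicas already push the per-item loss probability below 1%.
```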

Such technology, I think that much is clear and noncontroversial, is readily available. The money is currently locked up in subscription funds, but cancellations on a massive scale are feasible without much disruption to access and would free up just over US$9b annually – more than enough to build this infrastructure within a very short timeframe. Thus, with money and technology readily available, what’s keeping the scientific community from letting go of antiquated journal technology and embracing a modern scholarly communication infrastructure? I’ve mentioned human shortcomings above. Perhaps it is also an all too human shortcoming to see the obstacles on the way to such a modern infrastructure, rather than its potential:


Or, as one could also put it, more scientifically: “The square traversal process has been the foundation of scholarly communication for 400 years!”

@brembs "The square traversal process has been the foundation of scholarly communication for 400 years."

— Ian McCullough is @bookscout (@bookscout) April 27, 2015

UPDATE (11/04/2017): There has been a recent suggestion as to how such an infrastructure may be implemented conceptually. It still contains the notion of ‘journals’, but the layered structure explains quite well how this may work.

Posted on April 27, 2015 at 13:50 211 Comments

If only all science were this reproducible

In: own data • Tags: Drosophila, foraging, reproducibility

For our course this year I was planning a standard neurogenetic experiment. I had never done this experiment in a course before; just two weeks ago I tried it once myself, with an N=1. The students would get two groups of Drosophila fruit fly larvae, rovers and sitters (they wouldn’t know which was which). About ten larvae from each group would be placed on one of two yeast patches on an agar plate. After 20 minutes, they would count the number of larvae in the first patch, in the second patch, and in neither patch, i.e., elsewhere on the plate:

Classic experiment designed by Marla Sokolowski, the discoverer of the rover/sitter polymorphism

In the example above, the scores would be 3, 3 and 4 for the rovers and 10, 0 and 0 for the sitters.
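In code, the scoring is just counting and normalizing; the numbers below are those from the example in the post:

```python
# Larval counts: (first patch, second patch, elsewhere on the plate)
rovers  = (3, 3, 4)
sitters = (10, 0, 0)

def percent_on_first_patch(counts):
    """Percentage of larvae that stayed on the patch they were placed on."""
    return 100 * counts[0] / sum(counts)

print(percent_on_first_patch(rovers))   # 30.0
print(percent_on_first_patch(sitters))  # 100.0
```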

Yesterday morning, before the course, while I was collecting the vials for the experiment, I noticed that I had an additional vial of each stock in which I had forgotten to remove the parent flies, so that these vials were completely overcrowded. Remembering that the last slide in my lecture for the course showed a result from Kaun et al. 2007, in which the authors had discovered that the behavior of the larvae depends on food quality, I felt we should try and see whether this overcrowding had already degraded the food enough to show the effect Kaun et al. had described: that rovers became more like sitters and no longer left the food patch they were placed in.

So I grabbed the two overcrowded vials and decided to try something that would usually be a recipe for disaster: in an experiment never before tried on students, alter the conditions such that you don’t know the outcome yourself. The plan was that, in case we didn’t see a strong difference between the two strains, I’d go and fetch the two vials I had prepared with the correct density of larvae and hence proper food quality. With these latter larvae, at least, we should see the difference between rovers and sitters. Here are the data the 12 students collected on the blackboard from the experiments done on a single afternoon:

Photograph of the blackboard with all the data from the 12 students.

One can already see from the numbers alone that the difference between the strains is very clear in the low-density case (blue), while it is less pronounced for the overcrowded larvae (white). But because the result so clearly matches the results from Kaun et al., I’ve plotted the data and compared them side by side.

First, the data from Kaun et al., which show that with low food quality (15% – corresponding to our high-density larvae), rover and sitter larvae consume about the same amount of food, and both consume more than with perfect-quality food (100% – corresponding to our low-density conditions):

Data from Kaun et al. 2007 on the effect of food quality on food intake in rover and sitter fruit flies.

As we didn’t measure food intake, but only the location of the larvae, we take the percentage of larvae on the first patch as a measure of staying and feeding rather than leaving the patch and searching for another food patch. That way, we got an all but identical graph:

Data from 12 students on a single afternoon

This replication provided an opportunity to emphasize two general points. First, it is highly unusual to reproduce previously published data with such ease and to such an astonishing degree. Second, I reiterated what I had already said during the lecture: there is no genetic determinism. Even humble fruit fly larvae show that different genotypes do not necessarily mean different fates – whether genotypic differences manifest themselves in phenotypic differences depends strongly on the environment.


Kaun, K., Riedl, C., Chakaborty-Chatterjee, M., Belay, A., Douglas, S., Gibbs, A., & Sokolowski, M. (2007). Natural variation in food acquisition mediated via a Drosophila cGMP-dependent protein kinase. Journal of Experimental Biology, 210(20), 3547-3558. DOI: 10.1242/jeb.006924

Posted on April 21, 2015 at 14:36 7 Comments

Nature reviewers endorse hype

In: science politics • Tags: GlamMagz, publishing

In a paper published in Nature Neuroscience now over a year ago, the authors claimed to have found a very surprising feature in something long thought to be a bug. In my blog post covering the hype in the paper and in an accompanying news-type article, I wrote that it has become ever rarer for scientists to admit to standing on the shoulders of giants, as we are not rewarded for referring to giants, but only for being giants ourselves. The blog post triggered some email correspondence among a number of colleagues in or close to this field, at the end of which came the decision that two of us, Nicolas Stergiou and I, would contact the journal and inform them of the missing references in the two articles.

The first reply was that we ought to write a ‘letter to the editor’ instead of informally notifying the journal that crucial references were missing. So we sat down and wrote our letter (submitted February 10, 2015):

Dear Sir,

HHMI Investigator Michael Eisen recently wrote that we ought to come down hard not only on people who cheat, but also on the far greater number of people who overhype their results. He suggests that hyping should be punished just as fraud is. This letter to the editor represents such an effort, as we would like to report on such a case of hyping.

“Standing on the shoulders of giants” is what scientists say to acknowledge the work they are building on. It is a statement of humility and mostly accompanied by citations to the primary literature preceding the current work. In today’s competitive scientific enterprise, however, such humility appears completely misplaced. Instead, what many assume to be required in the struggle to survive is to convince everyone that they are the giant, the genius, the prodigy who is deserving of the research funds, the next position, tenure. The Nature Neuroscience article “Temporal structure of motor variability is dynamically regulated and predicts motor learning ability” by Wu et al. (doi:10.1038/nn.3616) with its accompanying news-type article “Motor variability is not noise, but grist for the learning mill” by Herzfeld and Shadmehr (doi:10.1038/nn.3633) from earlier this year clearly fall within this category. Both articles claim that the researchers have made the game-changing discovery that something long thought to be a bug in our movement system is actually a spectacular feature. It is argued that this discovery is such a huge surprise, because nobody in their right mind would have ever thought this “unwanted characteristic” to actually serve some purpose.

The problem with this line of argument is that most people in the field probably thought it should be obvious, even to be expected – and not surprising at all. Skinner is largely credited with the analogy between operant conditioning and evolution. This analogy entails that reward and punishment act on behaviors the way selection acts on mutations in evolution: an animal behaves variably and encounters a reward after it initiates a particular action. This reward makes the action more likely to occur in the future, just as selection makes certain alleles more frequent in a population. Already in 1981, Skinner called this “Selection by Consequences” (Science, Vol. 213, no. 4507, pp. 501-504, DOI: 10.1126/science.7244649). Skinner’s analogy sparked wide interest, e.g. an entire journal issue (Behavioral and Brain Sciences, 7(04), 1984), which later appeared in book form (The Selection of Behavior: The Operant Behaviorism of B. F. Skinner: Comments and Consequences. A. Charles Catania & Stevan R. Harnad, Cambridge University Press). Clearly, the idea that reinforcement selects from a variation of different behaviors is not a novel concept at all, but more than three decades old and rather prominent. This analogy cannot have escaped anybody working on any kind of operant/motor learning, except those seriously neglecting the most relevant literature. This interaction of variability and selection is a well-known and not overly complicated concept, rooted in evolutionary biology and psychology/neuroscience. Consequently, numerous laboratories have been studying various aspects of this interaction for a long time. Skinner’s projection was that increased behavioral variability leads to increased operant learning rates, just as increased mutation rates lead to increased rates of evolutionary change.
More than a dozen years ago, Allen Neuringer showed this to be the case in rats (Psychonomic Bulletin & Review, 2002, 9(2), 250-258, DOI: 10.3758/BF03196279), but there are studies in humans as well (Shea, J. B., & Morgan, R. B. (1979). Contextual interference effects on the acquisition, retention, and transfer of a motor skill. Journal of Experimental Psychology: Human Learning and Memory, 5, 179-187). That such variability is beneficial rather than detrimental has been shown even in situations where the variability is so high that the acquisition rate is reduced but post-training performance is enhanced (Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3(4), 207-217).

Wu et al. confirm both Skinner’s conjecture and previously published reports (some cited above) that the rate of learning in operant conditioning is indeed increased in subjects whose initial behavioral variability is higher. However, instead of citing the wealth of earlier work, Wu et al. claim that their results were surprising: “Surprisingly, we found that higher levels of task-relevant motor variability predicted faster learning”. Herzfeld and Shadmehr were similarly stunned: “These results provide intriguing evidence that some of the motor variability commonly attributed to unwanted noise is in fact exploration in motor command space.”

We regard it as highly unlikely that not one of the seven authors has ever heard of Skinner or of the work over the last four decades by the many human movement scientists who have explored the temporal structure of human movement variability and its relationship with motor learning. The work of senior scientists such as Karl Newell, Michael Turvey, Richard Schmidt, and their students, published in books and hundreds of journal articles, is completely ignored, as is the work of several younger mid-career scientists such as Nick Stergiou, Jeff Hausdorff, Thurmon Lockhart, Didier Dilignieres, and many others. After a thorough review of this literature, the authors may realize that their results are neither new nor novel. If the authors were indeed unaware of this entire body of literature so relevant to their own research, that would be an indictment in its own right.

Hence, in formal corrections of both articles, we would expect all mentions throughout both texts of how surprising these findings were to be replaced with references including, but not limited to, the works cited above.

Yours sincerely,

In hindsight, we probably ought to have mentioned that the research of course has merit and that there are a lot of valuable and exciting results in the paper, but that one specific aspect is really not surprising at all. However, we wanted to keep it brief and focused just on the hype. Maybe that was a mistake. Be that as it may, after about two months of peer review, our letter was rejected. Perhaps not so surprisingly, the peer reviewers apparently were the same as those who reviewed the original manuscript (at least one of them was):

This is the basis on which I recommended publication, and I do not feel it needs a corrigendum.
After all, the data indicate that such high-ranking journals do not use very rigorous peer-review, leading to the least reliable science being published there. What was quite surprising, though, was the implicit support and even endorsement of the hype:
I agree that the article’s tone is a little more breathless than strictly required, but this is the style presently in vogue
and
The letter complains about “over-hyping” about certain claims made in the paper related to previous work. I have some sympathy with the letter writers on this front. However, I also have some sympathy for the authors, who understandably were trying to emphasize the novelty of their work.
It thus appears that at least two of the three reviewers agreed with us that the articles were hyping the results, but they see nothing wrong with hype. This means that hype is not just a problem of journalists any more. Hype is not just a problem of journals and GlamMagz any more. Hype has now arrived in the middle of the scientific community, with explicit endorsement from reviewers. It looks like Michael Eisen's call for hype-punishment will go largely unheeded.
You can download the complete reviews of all reviewers here (PDF), to make sure none of the quotes above were taken out of context.

Posted on April 14, 2015 at 10:21 32 Comments
Apr09

Why this GlamMag will likely not ask for my review again

In: personal • Tags: GlamMagz, peer-review, publishing

I really loathe reviewing for GlamMagz for two main reasons. For one, it’s hard to remain neutral: publication of a paper in my field in such a journal is beneficial both for the field and for the young people who are authors on this paper. Second, the demands of some of my colleagues so often make my blood boil. At that point I’m very happy these reviews are anonymous and I really don’t want to know the names of these colleagues. Here are some of the things the reviewers wrote in this most recent round:

“the authors have not convinced me of the conceptual novelty of their findings to warrant publication in a very top-tier journal”

“appears standard for a top-tier journal”

“However, currently, this manuscript is more suited for a specialized journal”

This seems to suggest to me that these colleagues apply different standards for different journals. So when the journal sent each reviewer the reviews of the other reviewers and asked for comments, one of my comments was the following:

The intuitive notion of journal rank is a figment of our imagination, devoid of any empirical support. So-called “top-tier journals” exist in the same reality in which homeopathy cures, dowsing rods find water and astrologers correctly predict the future. Asking for additional ‘interesting’ or ‘curious to know’ experiments merely because of the venue the authors chose borders on the unethical, IMHO, as there are millions of ‘curious to know’ experiments to be potentially carried out. Reviewers ought to try and avoid multiple standards in their reviews and restrain additional experiments to the minimum required by the statements of the authors. Professional editors are being paid precisely because they can predict whether their readers will find the statements of the authors ‘interesting’ or ‘curious to know’.

(Where ‘curious to know’ was a phrase one of the reviewers used whenever they wanted some tangentially related experiment to be done)

I doubt the editors at GlamMagz are very interested in these sorts of comments. If they are, I now have a boilerplate from which to draw future comments.

Posted on April 9, 2015 at 14:33 13 Comments
Mar31

Is DIY really just for the scholarly poor?

In: news • Tags: DIY, equipment, science

Writing in the latest issue of Lab Times, Alex Reis portrays two segments of ‘do-it-yourself’ in the biosciences. One is the group of ‘citizen scientists’, some of whom are organized in DIYbio. The other group covered is that of cash-strapped biologists who create “low-cost customized devices” “out of necessity”, instead of “heading for the nearest catalogue to find the best equipment to buy”.

I’m not so much concerned with the attitude that the catalogues apparently hold the “best” equipment – as opposed to that equipment which will make grants more expensive and hence pull in more overhead for the university and more prestige for the PI. I’m more concerned with the impression this article gives that only the scholarly poor need to resort to DIY, whereas the first-world, well-funded, top-ranked laboratories of course always buy the best equipment from the catalogues for their cutting edge, world-class science.

Instead of denigrating laboratories who try to refrain from wasting tax funds on overpriced equipment, shouldn’t one instead ask what kind of research this is, where the equipment is already being sold by for-profit companies? To look for the one thing that hasn’t been put under a microscope, yet? To sequence the one gene or genome that hasn’t been sequenced, yet? To amplify the one sequence that hasn’t seen a PCR machine, yet? To obtain a band from the one protein that hasn’t been sent through a gel, yet? To spin the one liquid that hasn’t seen the inside of a salad spinner, yet? I’m exaggerating and oversimplifying, of course, but to make a point.

Quite logically, if you look at things nobody has looked at before, there cannot be a company that provides you with a handy machine, so you just have to build the equipment or reagent yourself. Thus, in fact, all cutting-edge science by definition has to be DIY. The super-resolution microscopes for which this year’s Nobel was awarded couldn’t be bought in a store: Betzig, Hell, Moerner and colleagues had to build them themselves. Conversely, if you can buy it in a store, then by definition someone must have looked at something like it before, and you are just following in their footsteps.

One may argue that perhaps most, if not all, breakthrough science must be DIY, simply because you cannot sell equipment that doesn’t exist.


P.S.: Obviously, this post is not meant to denigrate my many colleagues who buy all of their equipment or reagents. That research is of course very valuable, and our lab, too, mostly uses equipment designed by others, even when it cannot be bought; only rarely do we design it ourselves. I object to silly rankings and trivial comparisons in general, and I only want to point out that it is very easy to argue in exactly the opposite direction to counter the impression this article gives.

Posted on March 31, 2015 at 13:41 20 Comments
Mar26

Watching a paradigm shift in neuroscience

In: science • Tags: Bargmann, C. elegans, circuits, ongoing activity, spontaneity, variability

When I finished my PhD 15 years ago, the neurosciences defined the main function of brains in terms of processing input to compute output: “brain function is ultimately best understood in terms of input/output transformations and how they are produced” wrote Mike Mauk in 2000 (DOI: 10.1038/76606). Since then, a lot of things have been discovered that make this stimulus-response concept untenable and potentially based largely on laboratory artifacts.

For instance, it was discovered that the likely ancestral state of behavioral organization is one of probing the environment with ongoing, variable actions first and evaluating sensory feedback later (i.e., the inverse of stimulus-response). It was found that even the most rigid and iconic of stimulus-response systems – spinal reflexes – still show rudiments of probing the environment with spontaneous, variable actions and evaluating the sensory consequences later. A recently discovered instance of a so-called ‘rare predator’ phenomenon exemplified that rigid stimulus-response coupling cannot be evolutionarily stable:

Kevin Mitchell thus aptly referred to the hypothetical class of animals without unpredictability as “lunch” (see his excellent article for a more verbose explanation). In humans, functional magnetic resonance imaging (fMRI) studies over the last decade and a half have revealed that the human brain is far from passively waiting for stimuli; rather, it constantly produces ongoing, variable activity (the so-called default mode network, DMN, in our resting state), and simply shifts this activity over to other networks when we move from rest to task or switch between tasks. Tellingly, the variations in DMN activity account for a large part of our behavioral variability.

As one would expect, this dramatic shift in perspective from input/output to output/input has led to a slew of recent publications that would have been unthinkable a mere 15 years ago. For instance, it was reported that rodent brains add variability to sensory input. In Aplysia, it was shown that such variability can be generated by balancing excitatory and inhibitory input, but also that individual neurons (see Fig. 4b) are capable of showing spontaneous variability in their firing patterns, even when isolated from the rest of the nervous system. At the most recent annual meeting of the Society for Neuroscience, where I usually find only very few presentations on ongoing activity and how it leads to variability, there were now several posters on exactly this topic, seemingly out of nowhere. The most recent publication comes from an animal whose connectome is so dominated by feed-forward connections from sensory to motor neurons that even today it would be difficult to imagine how the neurobiology underlying behavioral variability could be studied in such an animal: the nematode worm Caenorhabditis elegans. The paper is entitled Feedback from Network States Generates Variability in a Probabilistic Olfactory Circuit.

In this paper from the laboratory of Cori Bargmann, Gordus et al. look at a circuit in the C. elegans nervous system which controls reversal behaviors (Fig. 1). The main component of the system is a neuron called AVA. When AVA is active, the animal reverses its course. Sensory input to this neuron is provided by olfactory neuron AWC. For instance, if AWC is stimulated by an attractive odorant, it stops firing, such that AVA also stops firing, making reversals less likely. Two additional neurons are involved in this circuit, AIB and RIM, and the characterization of their role in the circuit was the main result of this publication.

Fig. 1: The C. elegans reversal circuit with the number of electrical and chemical synapses between each network component

The first interesting observation from the circuit connectivity is that there are more connections from the sensory neuron to the AIB interneuron than to the reversal neuron AVA. This would be a major head scratcher if the main function of nervous systems were to relay sensory information to motor centers, but if sensory input merely modulates ongoing activity, even in nematodes, then this doesn’t seem so surprising any more.

Contrary to the idea that a connectome dominated by feed-forward connections from sensory to motor areas implies that it mainly computes motor output from sensory input, the nervous system of C. elegans, too, is best characterized by constantly changing, ongoing activity, much like all the other nervous systems previously studied in this regard. Even the small circuit studied by Gordus et al. demonstrates this:


Fig. 2: Even in the absence of sensory stimulation in immobilized animals, the activity in the reversal circuit fluctuates constantly. The Y-axis depicts fluorescence as a measure of calcium levels (i.e., activity) in the neurons.

Interestingly, the neurons exhibit a sort of binary activity state: for the most part, each neuron is either on (active) or off (inactive):

Fig. 3: The three neurons spend most of their time either in an ‘off’ state or in an ‘on’ state, as can be seen from the probability (P) of fluorescence (F).

According to this classification, there are three main states (of the eight theoretically possible for three binary neurons) in which the circuit is commonly found, mainly due to the strong correlations between the neurons resulting from their electrical and chemical coupling: just over 60% of the time the system is ‘all on’, roughly 20% of the time it is ‘all off’, and for the remaining 20% it is ‘only AIB on’:

Fig. 4: The three most common network states for the reversal circuit
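As an aside, the “eight theoretically possible” states follow directly from treating the three coupled neurons as binary units. Here is a minimal sketch of that count; the state labels and round occupancy figures are taken from the text above (illustrative values, not exact numbers from the paper), and the variable names are mine:

```python
from itertools import product

# Three neurons (AVA, AIB, RIM), each approximated as binary (on/off),
# give 2**3 = 8 theoretically possible network states.
neurons = ("AVA", "AIB", "RIM")
states = [dict(zip(neurons, bits)) for bits in product((0, 1), repeat=len(neurons))]
print(len(states))  # 8

# Approximate fraction of time spent in the three dominant states
# (round numbers read off the text above):
occupancy = {"all on": 0.6, "all off": 0.2, "only AIB on": 0.2}

# The strong electrical and chemical coupling between the neurons is what
# confines the circuit to these three of the eight possible states.
```

That the circuit spends essentially all of its time in just three of the eight combinatorially possible states is itself a signature of how tightly the neurons are coupled.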

By selectively inhibiting each member of the circuit, the authors discovered that the role of AIB and RIM is to increase the variability of the reversal circuit. The input from the olfactory neuron AWC was always very precise and predictable when, e.g., an attractive odor was presented. The activity of the reversal circuit, in contrast, always varied significantly, and this variability was reduced when AIB or RIM were silenced. An example of how variable the response of the circuit is compared to the sensory input, without any experimental manipulation of AIB or RIM, can be seen in Fig. 5:

Fig. 5: The response of the olfactory neuron AWC to an attractive odor is drastically more deterministic than that of the components of the reversal circuit.

Thus, the authors make an excellent case that RIM and AIB are incorporated into the reversal circuit specifically to inject much-needed variability into an otherwise maladaptively deterministic circuit. Surprisingly, even though feed-forward connections dominate the connectivity of this little circuit as well, it is the feedback connections that dominate an adaptive feature of the behavior: its variability. This work adds C. elegans to the growing list of animals whose nervous systems are organized such that ongoing activity is modulated by external stimuli. In such nervous systems, it seems, even a numerically small feedback component makes a fundamental contribution to the overall architecture. What does this mean for brains whose anatomy appears to be dominated by feedback loops?


Gordus, A., Pokala, N., Levy, S., Flavell, S., & Bargmann, C. (2015). Feedback from Network States Generates Variability in a Probabilistic Olfactory Circuit. Cell. DOI: 10.1016/j.cell.2015.02.018

Posted on March 26, 2015 at 12:50 100 Comments
Feb10

How not to contact faculty as a student

In: I get email • Tags: animism, panpsychism

Over the weekend, I received the following short message from a Hotmail account:

Dr. Brembs,
Thank you for your work and insight. Students from my doctorate of psychology epistemological course have recently discussed your article entitled ‘Towards a scientific concept of free will as a biological trait‘.
Acknowledging my vastly inferior scientific skills and reputation compared to yourself, I must respectfully disagree, however, with your current world view – you could call it scientific or philosophical for all intents and purposes. I have nothing but respectful curiosity for your divergent opinion as a prominent scientist in an unequivocally lopsided debate favouring the quantitative materialistic and biological approach. Major ontological questions are still far from being answered.
What is your objective argument against an article such as the following one? I hope that, if you indeed have the time to read it and offer a reply, could set aside any potential bias against the other of the article should you already hold one:
https://www.sheldrake.org/about-rupert-sheldrake/blog/the-new-scientific-revolution
Sincerely,
-name redacted-
-university redacted-
Ph.D. candidate
This email is full of signs that the student may not have been all that interested in any information or discourse, but was instead trying to make some sort of ‘gotcha’ statement.
Acknowledging my vastly inferior scientific skills and reputation compared to yourself, I must respectfully disagree, however, with your current world view – you could call it scientific or philosophical for all intents and purposes.
Curiously, the author first states that their scientific training is lacking and then they disagree with a scientific position. One wonders if the author often disagrees with people who they admit are more competent.
In cases where I feel incompetent, I usually try to read and learn and ask for advice before I decide I’m competent enough to disagree. So either you are incompetent and ask for advice, or you are competent and disagree. You can’t have it both ways.

Moreover, the use of the word ‘current’ with ‘world view’ assumes that I were to frequently change my world view, an assumption which I find highly unusual – does anybody frequently change their world view? If so, is the author’s current world view still the same as the one they held when they wrote the message?

And, perhaps most importantly, I’d definitely refrain from so transparently and clumsily trying to flatter the person I’m trying to coax into a reaction – that would just make me look completely mental (pardon the pun).

Already this sentence seems to indicate that the author has some catching up to do on more than just their scientific training…
I have nothing but respectful curiosity for your divergent opinion
What now? Opinion or world view? Or are these the same for the author? And is it curiosity or disagreement? Note: If you expect to be taken seriously, at least do not contradict yourself within the first few sentences of a message.
as a prominent scientist in an unequivocally lopsided debate favouring the quantitative materialistic and biological approach. Major ontological questions are still far from being answered.
Again, a clumsy attempt at flattering me – not good. I’m also torn as to what the author is trying to say here. For one, I’m not aware of any debate as to the material nature of our world (by ‘material’ I mean physical as opposed to, say, ghosts). So far, I have never met nor read nor heard of any colleague claiming they had observed ghosts interfering with their experiments. If there actually were a debate, I’m not surprised it is so unequivocally lopsided in favor of materialism – there is no evidence for ghosts. At least I’m glad the author also finds that there are major questions to be answered – if not, as a scientist, I’d be out of a job! What would I do then?
What is your objective argument against an article such as the following one?
It appears the author expects me to use subjective arguments in response to their message. So first they clumsily attempt to flatter me by saying how superior I am, and now they try to insult me by implying that I cannot articulate an ‘objective’ argument? Way to go confirming my suspicions that there is more amiss with this author than just their scientific training. And am I only allowed one such argument, not several?
As I see it, there are perhaps three simple rules one could take away from this:
  1. If you really want to have information, please state what you want to know or what you don’t understand, don’t beat about the bush.
  2. Don’t contradict yourself, neither by using different words for the same thing, nor by first flattering and then insulting me.
  3. If you happen to violate rules 1, 2 or both, at least use your institutional address, or you might be taken for some random kook trying to troll.

Posted on February 10, 2015 at 16:40 7 Comments
Feb06

Publishers, stop torturing your reviewers!

In: science politics • Tags: editors, Lens, peer-review, publishers

UPDATE, 10-02-2015: After a hint from a user on Twitter, I now know that it is possible to open a PDF document in several windows, one for text, one for legends and one for figures. Figures and legends occupy one virtual desktop and the text another. In this way, I can actually review on-screen, but it is one heck of a work-around and by no means convenient.


 

I cannot take it any more. This has been bugging me for more than ten years, but now I’ve finally had it. From today on, I will refuse to review any manuscripts that come in a format where I cannot easily have figures, legends and text side by side on my screen (or at least inline), like, e.g., in eLife’s Lens:

The standard way of reviewing articles is to receive a PDF with all the text together first, then all the legends together below that, and all the figures at the very end, one on each PDF page. This makes it virtually impossible to check the statements the authors make about their data, as one has to scroll back and forth between text, legend and figure just to find out what is displayed in the figure. Obviously, it takes ages to find the correct legend, and it is virtually impossible to find the spot where one left off before going to look for figure and legend. This needs to stop. It will stop for me. I can’t waste my time on such nonsense any more!

If you are an editor or publisher and would like me to review for your journal, make it easy and fun for me to do, not a chore. Either develop a system like Lens, license Lens, or ask your authors not to separate figures, legends and text in their submissions – whatever! I don’t care, just stop torturing your reviewers.

Personally, I already submit my articles with figures and legends together, irrespective of what the publishers want me to do. From now on, I will not only refuse to review a manuscript where I cannot see figures, legends and text side-by-side (or inline) on my screen, I will also ignore publisher rules to separate the three in my own submissions – I don’t want the reviewers of my manuscripts to suffer more than they already have to, reviewing my work.

Posted on February 6, 2015 at 15:56 94 Comments
Feb04

Random Science Video: The value of fly research

In: random science video • Tags: Drosophila, science outreach

Posted on February 4, 2015 at 14:02 1 Comment
