bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Are we paying US$3000 per article just for paywalls?

In: science politics • Tags: publishing, SciELO

This is an easy calculation: for each subscription article, we pay on average US$5000. A publicly accessible article in one of SciELO’s 900 journals costs only US$90 on average. Subtracting about 35% in publisher profits, the remaining difference between legacy and SciELO costs amounts to US$3160 per article. With paywalls being the only major difference between legacy and SciELO publishing (after all, writing and peer-review are done for free by researchers for both operations), it is straightforward to conclude that about US$3000 per article goes towards making it more difficult to access than if we published it on our personal webpages. Now that is what I’d call obscene.

Just to break the costs of legacy publishing down in detail:

Publisher profits: US$1750
Paywalls: US$3160
Actual costs of typesetting, hosting, archiving, etc.: US$90
Sum: US$5000
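Since the breakdown is simple subtraction, here is a quick sanity check of the quoted figures in Python (the averages and the profit percentage are the ones cited above):

```python
# Per-article cost breakdown from the post (all figures in US$).
legacy_total = 5000   # average cost per subscription article
profit_share = 0.35   # approximate publisher profit margin
scielo_cost = 90      # average cost per SciELO article

profit = legacy_total * profit_share            # publisher profits: 1750
paywalls = legacy_total - profit - scielo_cost  # unexplained remainder: 3160

print(f"Publisher profits: {profit:.0f}")
print(f"Paywalls:          {paywalls:.0f}")
print(f"Actual costs:      {scielo_cost}")
print(f"Sum:               {profit + paywalls + scielo_cost:.0f}")
```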

Posted on July 30, 2014 at 18:16 33 Comments

The way academic publishing should be

In: own data • Tags: Buridan, Canton S, Drosophila, F1000 Research, publishing

Today, our most recent paper was published, before traditional peer-review, at F1000 Research. The research is about how nominally identical fly stocks can behave completely differently even if tested by the same person in the same lab in the same test. In our case, the most commonly used wild type fly strain is called “Canton S”, or CS for short (interestingly, there is no ‘Canton S’ page on Wikipedia). Virtually every Drosophila lab has a Canton S stock in its inventory, but of course, it may have been decades since these flies last saw any other Canton S flies from a different place. In evolutionary terms, one would call this “reproductive isolation”, meaning that there is no gene flow between the different Canton S stocks in the different labs around the world, even though they all originated from the same stock at one point and are all referenced as CS in the literature.

Reproductive isolation is one of several factors which are required for speciation. Therefore, we always kept the Canton S stocks we have received from different labs separate in our lab, to make sure we always have the appropriate wild type strain for any genetically manipulated strain we might get from that lab. In total, we had five different Canton S strains which we tested in Buridan’s paradigm:

Video: Buridan’s Paradigm (YouTube)

To our amazement, it turned out that there were considerable differences between these nominally identical fly strains. In fact, the differences were large enough that some of the strains would have been classified as ‘mutants’. As we have some knowledge about the ancestry and pedigree of each strain, we speculate that the differences between the strains arose from founder effects, either when a sample was taken from one lab and transferred to the next, or during the history of the strain within an individual laboratory. It seems unlikely that there has been any significant adaptation to the particular laboratory environment, but at this point this is difficult to rule out conclusively.

This phenomenon has been observed in other model systems before and it is not quite clear how to solve it, as the logistics of developing a global “mother of all wild type stocks” are a nightmare. We felt that this issue was not considered enough in the Drosophila community and the Buridan results provide for an excellent case study. Especially in a time when reproducibility is on everybody’s agenda, it is crucial to know what can happen when trying to replicate a phenomenon that was observed in one Canton S strain elsewhere with the Canton S strain available in one’s own lab.

As replicability is also one of the issues of this paper, we decided to make the entire process of the paper as transparent as possible: not only is the paper published open access, it is also published before traditional peer-review. The ensuing peer-review will then be made open, such that everyone can see what was changed in the process. The different versions of the paper will also remain accessible. Only after our paper has passed regular peer-review will it be listed in the major indices, such as PubMed.

Above and beyond the openness of the publication process, we also decided to pioneer a technology we should have developed many years ago. In this paper, as a proof of concept, one of the figures isn’t provided by us; instead, we merely sent our data and our R code to the publisher, and the figure is generated on the fly. In the future, this will save us a tremendous amount of work: we are already setting up our lab such that our data is automatically published and accessible. The code is either open source R code from others, or made open source by us as we develop it. Hence, at the time of publication, all we need to do is write the paper and then submit just the text, with links to the data and the code, together with some instructions on how to call the code to generate the figures. No more fiddling with figures ever again, once this becomes the norm – just do your experiments, write the paper and hit ‘submit’.

Another advantage of this method is that not only does it save us time and effort, it also means that reviewers and other readers need only double-click the figure to modify the code and, e.g., look at another aspect of the data. In this version we just let the user decide between different ways to plot the data, but it shows the principle behind the implementation.
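To illustrate the principle (a figure generated at view time from deposited data plus code), here is a minimal, self-contained sketch in Python rather than the R used in the paper; the function name and the data values are invented for illustration only:

```python
def bar_figure(data, width=40):
    """Render a horizontal ASCII bar chart from a {label: value} dict.

    Stands in for the real pipeline, where the publisher runs the
    authors' deposited plotting code on the deposited data at view time.
    """
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:>12} | {bar} {value}")
    return "\n".join(lines)

# Invented walking scores for three hypothetical Canton S substocks:
print(bar_figure({"CS_Berlin": 12, "CS_Wuerzburg": 30, "CS_Paris": 21}))
```

A reader "editing the figure" corresponds to calling the same function with different arguments or data, which is exactly what the dynamic figure in the paper allows.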

In order to both broaden the database for the phenomenon studied here, and showcase the power of the technology, we are also inviting other labs to contribute their Canton S Buridan data to see how it compares to the data we have. As of now, we show data from only one additional lab in a static figure, but in future versions of the paper, we will have a dynamic figure that gets updated as new data gets uploaded by users.

I’m not deluding myself that our little paper will have much of an impact either scientifically or technologically. Of course we’d all be more than delighted if it would, but at the very least, we’re showing that even with very limited resources and some creativity, you can accomplish something that, extrapolated to a larger scale, would be transformative.

Posted on July 30, 2014 at 13:15 56 Comments

Why use fruit flies to study a gene involved in language?

In: news • Tags: Drosophila, FoxP, operant

This is the story behind our work on the function of the FoxP gene in the fruit fly Drosophila (more background info). Like so many good things, it started with beer. Troy Zars and I were having a beer on one of the ICN evenings, I think it was in Vancouver in 2007. I had recently learned about the conserved role of FoxP2 in songbirds, out of one of the labs in Berlin, where I was based at the time. As Skinner had already proposed that language learning was an operant conditioning process, and as song learning in birds was also often characterized as a form of operant learning, I wondered aloud how cool it would be if flies had such a gene and we could test them in our operant learning experiments as well. This would suggest the possibility of unifying vocal learning and operant learning in a single biochemical pathway.

“Shhh, don’t tell anybody, I have them!”, Troy replied.

It turned out that at the time, the FoxP-related gene in flies had not yet been annotated in the genome. Troy had performed the database sequence search and some preliminary molecular experiments to make sure he really had the dFoxP gene. He even noticed that one sequence just downstream of the gene was in fact the last exon of the gene and that the automated curation process had missed this detail. He had ordered three fly lines, each with one insertion mutation in or near this last exon, and replaced the rest of the genome with the background of a control strain in his lab, “Canton S”. He had even done some preliminary behavioral tests on the courtship of these ‘cantonized’ lines.

We agreed that he would send me the lines and I would test them in our operant conditioning experiments. When I received the lines I was excited and started testing them as soon as they hatched. The first day was a disappointment: I had randomly chosen one of the three lines and tested a few of them along with Troy’s Canton S control flies. Both groups learned just fine. I was almost going to give this up as a beautiful hypothesis slain by an ugly fact, when I reminded myself that one should not rest one’s conclusions on a single day of testing. So I decided to test them at least for the rest of the week and then see what happened. On each of the following days, learning scores were consistently zero, such that, by the end of the week, the combined score of all the animals didn’t look all that good any more. The final results, several weeks later, can now be seen in Fig. 3 of our paper.

In order to find out what was going on with FoxP gene expression in these mutants, FoxP2 specialist Constance Scharff in the neighboring department of behavioral biology in Berlin teamed up with Jochen Pflüger in our department to apply for funding for this project. The results of this fruitful collaboration can be seen in Figs. 2 and 5c. It was then-graduate student, now postdoc, Ezequiel Mendoza who did all the molecular biology.

Because FoxP genes are transcription factors regulating the expression of other genes during brain development, I was curious whether the brain structure of these flies was any different from that of control flies. Jürgen Rybak was a specialist on insect brain anatomy and had recently left the department in Berlin. I gave him two vials of flies and asked him if he could quantify the main brain regions ‘on the side’ of his regular day job at the Max Planck Institute. Jürgen worked tirelessly and after many months of slogging came up with some very subtle results, now depicted in Fig. 6. To this day, we’re still unsure what these data really ought to tell us, other than that FoxP is involved in brain structure and development in flies as well. This figure, btw, has two examples of fly brains that you would be able to rotate and zoom in and out of in 3D, if PLoS were able to support 3D PDF files. As they don’t, you have to dig into the supplemental information, go to Fig. S3 there and open the PDF file. Then, if you click on either or both of the brains, you can use the 3D functionality we have embedded there.

Another parallel between vocal learning and operant conditioning is the observation that training/practicing for extended periods of time leads to a stereotypization or automatization of the behavior trained. In songbirds, this is called “crystallization” of the song; in operant conditioning it is called habit formation. Flies show this sort of habit formation as well, but we had no idea if it was the same process as the much quicker form of learning we had tested before, or if this was something that looked similar on the behavioral level while the neurobiology was quite distinct. So the postdoc in our lab at the time, Julien Colomb, set out not only to replicate my own results with the FoxP mutants, but also to test the much trickier habit formation experiment in the mutant flies, using his own, new machine. He found that these flies were indeed also deficient in habit formation, replicating not only my results, but also strongly suggesting that our form of operant learning (which we have termed “operant self-learning”) and habit formation are likely biochemically the same process.

Because having just one mutant allele (the other two lines did not fly well enough for our experiments) is not enough evidence to conclusively tie these phenotypes to the FoxP gene, we took advantage of the fact that the last exon, where the insertion mutation resided, was initially listed as a separate gene in the databases. The method of choice was to attempt to knock down dFoxP expression using RNAi. We wanted to mimic the original mutation as specifically as possible, so we wanted to knock down only the dFoxP isoforms which included the last exon. Usually, this is not possible without designing your own transgenic constructs. However, because the last exon was its own gene in the databases, there was a line that contained an RNAi construct for just the last exon. So I ordered that line, crossed it to another line that would express the RNAi construct specifically in all neurons and tested the flies. This was when I started to get very, very nervous: the flies with the RNAi-targeted dFoxP also did not learn in operant self-learning! In most of my research career, things have turned out the opposite of what I had expected, in almost all of my experiments. Did I do something, subconsciously, to bias my results? After all, this was now the second experiment in a row that turned out just as one would dream of! I couldn’t find anything wrong, and measured yet more flies in different crosses just to be sure, but the result remained the same. As long as nobody else finds out what went wrong, I have to accept the data and acknowledge that one of my hypotheses turned out to be correct for once after all. That’s a very weird and strangely disconcerting feeling.

Everything had been smooth sailing so far, so clearly, there had to be something that would put a spoke in our wheels. We decided to submit the manuscript to PNAS just before the winter break in 2011, on December 21, with the title “Drosophila FoxP is necessary for operant self-learning”. One reviewer liked it, but the other summarized their review with:

the data are made to bear the weight of an elaborate hypothesis, and they are literally crushed by it, like a tiny matchstick house beneath a bowling ball.

The reviewer appeared particularly miffed by something he hadn’t heard of before:

to the best of my knowledge the distinction between operant self-learning and operant world-learning is one that is not widely acknowledged in the learning and memory community. Indeed, the senior author and his colleagues of this manuscript may be the only people in the world who hold that there is such a distinction.

We dared to submit something that wasn’t yet part of the wider learning and memory community! How preposterous! It appears that this reviewer was not very keen on new discoveries – only processes and observations “widely acknowledged by the community” should be published. Novel, groundbreaking or controversial findings need not be submitted. A very strange perspective to take for a scientist indeed, especially since at the point of submission we already had two peer-reviewed papers with this distinction published. The reviewer struck the death blow to the paper:

If one strips away the elaborate theoretical superstructure of this paper, for which the evidence is, I believe, shaky at best, what is one left with? Basically, the authors have shown that a FoxP fly mutant performs abnormally on one operant conditioning task, but normally on another. This is well and good, but not, to my mind, sufficient for a PNAS paper.

And of course, the classic smackdown that will kill any paper, but at least to my mind is essentially unethical: asking for virtually every single potential experiment to be done:

To accept the authors’ ideas, we would have to know whether or not FoxP mutant flies are deficient in other forms of learning. In other words, what is the evidence that the learning abnormalities in these flies are confined to “self-learning” motor tasks? Remarkably, the authors do not test their mutants on any of the other learning tasks available for Drosophila, among which are olfactory conditioning, habituation and conditioning of proboscis extension, and nociceptive sensitization. Knowing the full range of learning dysfunctions of FoxP mutants would help clarify the important issue of whether or not this class of genes is devoted to operant self-learning as the authors believe.

Not one, not two, but four different behavioral experiments is what we should do, topped up by “the full range of learning dysfunctions”. If any reviewer for any of the papers I’m handling as an editor should ever request anything like that from one of the authors, I will definitely report them to their department for unethical behavior: asking for every experiment on the planet just gives away bias. But that’s not even enough. On top of doing every single behavioral learning experiment the world knows for flies, we should also do a full-scale molecular genetic interaction study:

It would also be important to know whether or not FoxP mutants have defects in PKC signaling. Does Drosophila FoxP even have a PKC binding site? Can overexpression of PKC isoforms in flies rescue the learning deficits of the FoxP mutants?

Asking for essentially a decade or two of experiments in a review for a single paper is clearly unethical in my book, and I notified PNAS of that. Obviously, other than an acknowledgment of receipt, I never heard from them again. I guess there may be something to the talk and gossip about PNAS…

We decided instead to submit our work to PLoS One, after we had tackled the more reasonable suggestions by the other reviewer. In April 2012, we were ready and submitted the revised manuscript. In May, we got a ‘major revisions’ notification, asking us essentially to replace the semi-quantitative PCR with quantitative PCR when testing for the effectiveness of the RNAi procedure. I thought this was a good and reasonable suggestion, as I had started to become suspicious that the regular PCR was quite liable to false bands: both missing and spurious ones.

It took us more than a year to get the qPCR data, evaluate them and digest them: it turned out that the RNAi had not knocked down the dFoxP mRNA in any way we were able to detect. Thus, it was possible that the behavioral phenotype was due to another gene, serendipitously affected by the RNAi construct (a so-called ‘off-target effect’). This left us with just one allele and an inconclusive RNAi result. I researched the literature, as I had recently been teaching the RNAi method to undergraduates and had taught that under some conditions the mRNA is not degraded, but sequestered. Now what were these conditions again? In brief, if the match between the RNAi construct and the target region of the gene is not 100%, the mRNA is sequestered rather than chopped up. So we (i.e., Ezequiel) cloned and sequenced this region for all of our lines and did indeed find several mismatches in the target region. Lucky to have an explanation for a phenotype without mRNA knockdown, but nervous that it wouldn’t be sufficient, we re-submitted in August 2013.
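The mismatch check itself is trivial to sketch in code; a hedged Python illustration (the sequences below are invented for the example, not the actual dFoxP target region):

```python
def count_mismatches(construct, target):
    """Position-wise mismatches between an aligned RNAi construct and its target.

    A perfect match tends to trigger mRNA cleavage; mismatches favor
    sequestration, which would leave mRNA levels detectable by qPCR.
    """
    if len(construct) != len(target):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a != b for a, b in zip(construct, target))

# Invented 20-nt example with two mismatches:
print(count_mismatches("AUGGCUAAUCGGAUCCUAGC",
                       "AUGGCUAGUCGGAUCCUUGC"))  # → 2
```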

As we feared, the reviewers did not find our explanations sufficient and again sent the manuscript back. In the meantime, I had moved to Regensburg from Berlin and discussed the paper with colleagues at lunch. One of them, José Botella-Munoz, suggested I try a classic genetic experiment from the 20th century: cross the mutants and the controls over a deficiency that spans the dFoxP locus! I ordered the flies, crossed and tested them, and to our relief, the results confirmed that the mutation in the dFoxP gene was the most likely culprit for the learning phenotype. These results can now be found in Fig. 4 of the paper. In addition, we also changed the title to “Drosophila FoxP mutants are deficient in operant self-learning”, as we still can’t fully rely on the RNAi data.

During the revision process, what all of us had secretly feared happened: another behavioral fly FoxP paper was published. These authors had not done any learning experiments, but had found an involvement of FoxP in motor problems of the flies. However, they relied heavily on the RNAi method without using qPCR, only the regular PCR, which was the reason our manuscript had initially been sent back at PLoS. So that was something of a shock, but not too bad, at least not for us (other colleagues were hit worse by some of the results in this paper). Just days before we submitted what would become our final version, Science published a paper on FoxP in flies that flew in the face of everything published about FoxP genes so far, but apparently without exchanging the genetic background of the mutants, without meticulous controls to rule out motor defects, and without even attempting to test the efficiency of the RNAi procedure. There is a more in-depth treatment of this paper in my previous post. Either way, we have several posters and blog posts to show that we were already working on dFoxP several years ago.

Now, more than 2.5 years after the first version of the manuscript was submitted to PNAS, our work is finally published in PLoS One, and it will be exciting to see if others can now find the mistake we couldn’t find and show us why this is all wrong 🙂


Mendoza, E., Colomb, J., Rybak, J., Pflüger, H., Zars, T., Scharff, C., & Brembs, B. (2014). Drosophila FoxP Mutants Are Deficient in Operant Self-Learning PLoS ONE, 9 (6) DOI: 10.1371/journal.pone.0100648

Posted on June 26, 2014 at 15:26 9 Comments

The Drosophila FoxP gene is necessary for operant self-learning

In: own data • Tags: FoxP, language, operant, self-learning

See this post with the associated press releases on brembs.net.

The Forkhead Box P2 (FOXP2) gene is well-known for its involvement in language disorders. We have discovered that a relative of this gene in fruit flies, dFoxP, is necessary for a type of learning called operant self-learning, which resembles some aspects of language learning. This discovery traces one of the evolutionary roots of language back more than half a billion years before the first word was ever spoken. Intriguingly, dFoxP function also differentiates between self and non-self, a key process that malfunctions in autism and schizophrenia, disorders in which FOXP2 has also recently been implicated. Finally, dFoxP is also important for habit formation, a common animal model for addiction.

Even though language is so much a part of what it means to be human, the evolution of this strikingly singular trait is still shrouded in mystery. Genetic disorders with language impairments are a particularly effective route to uncovering the biological roots of language. Most prominently, a mutation in the FOXP2 gene appears to affect language acquisition in afflicted patients without other obvious impairments (1). This gene is one of four members of the FoxP gene family, which evolved in vertebrate animals from a single ancestral FoxP gene by serial duplications. In invertebrates, these duplications never took place, and thus the single currently existing invertebrate FoxP gene can serve as a model for studying the function of the extinct, ancestral gene (Fig. 1).

Fig. 1: Using operant conditioning to test invertebrate FoxP function. From the single ancestral FoxP gene, four different genes have evolved in the vertebrate lineage through serial duplications, while invertebrates have retained a single copy of the gene. In an operant feedback loop, spontaneous actions are followed by a given outcome as a consequence. Depending on that outcome being desirable or not, the frequency of the action increases or decreases in the future. For instance, vocalizations of a human infant are followed by the perception of the resulting babbling. The deviation from the intended articulation modifies future vocalizations until language is formed. Similarly, in songbirds, the perceived difference between the juvenile bird’s (right) own subsong and the memorized song from an adult tutor (left) modifies future vocalizations until the species-specific adult song is produced. In mice, balancing in the rotarod experiment is followed by eventual falling, which provides the feedback to improve subsequent balancing movements. All three examples have been shown to be dependent on normal FoxP2 function. Analogously, we have tested fly FoxP function by tethering flies and measuring their turning attempts in stationary flight. Some turning attempts (e.g., to the right) are followed by a punishing heat beam, others (e.g., to the left) are rewarded with turning the beam off. Continuous feedback modifies the fly’s turning attempts towards the direction where the heat is off.


Studies on FOXP2 patients revealed apraxia, i.e., the inability to articulate words and sentences, as one major symptom. Evidence from songbirds and transgenic mouse models seems to confirm the suspicion that the function of FoxP2 might be found in the speech component of language (1). More than fifty years ago, the behaviorist B.F. Skinner proposed that language might be acquired through an operant learning process (2): the first more or less random utterances (babbling) of infants are rewarded by their parents, with correct utterances rewarded more than incorrect ones. Moreover, just as with imitating any movement, the ability to correctly imitate the words of others might be inherently rewarding. Eventually, the infant learns to correctly speak the words required to communicate their needs and affections.

Inspired by the possibility of testing for one of the evolutionary roots of language in an invertebrate animal, we used a learning experiment in the fruit fly Drosophila which paralleled the operant concept proposed by Skinner: the tethered animals first produce more or less random behaviors (including turning attempts, left or right) and the experimenter rewards only designated ‘correct’ ones until the animal is spontaneously generating predominantly ‘correct’ behaviors (e.g., left turning attempts; Fig. 1). Importantly, we also used a control experiment in which the animals’ behavior affected not only whether they would receive the reward, but also the color of their environment. Previous results had shown that in this control situation flies tend to learn more about the coloration of their environment than about their own behavior (3). If the function of dFoxP in flies were analogous to that of FOXP2 in humans, we would expect it to be necessary for the first experiment (‘operant self-learning’), but not for the second (‘operant world-learning’).
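The feedback loop in the self-learning experiment can be caricatured in a few lines of Python (a toy model with invented parameters, not the actual torque-based analysis: real flies generate continuous yaw-torque traces, and learning scores are computed from preference indices):

```python
import random

def self_learning_trial(steps=1000, learning_rate=0.01, seed=0):
    """Toy operant self-learning loop: right turning attempts are punished
    with heat, left ones switch it off; the spontaneous left-turn bias
    drifts upward with feedback."""
    rng = random.Random(seed)
    p_left = 0.5  # initial spontaneous left-turn probability
    for _ in range(steps):
        turn_left = rng.random() < p_left
        if not turn_left:  # right turn -> heat on (punishment)
            p_left = min(1.0, p_left + learning_rate)
    return p_left

print(self_learning_trial())  # ends well above the initial 0.5
```

The point of the caricature is only the loop structure: spontaneous action, contingent feedback, shifted future action probability.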

Ever since Skinner’s proposal, these kinds of experiments had been discussed, but until now they had not been technically feasible. In his critique of Skinner’s proposition, the linguist Noam Chomsky dismissed the idea of operant experiments conceptually paralleling language acquisition as “mere homonyms, with at most a vague similarity of meaning” (4).

In order to be able to attribute any effect of our manipulations in the flies to the dFoxP gene, we used two different strategies. In the first, we tested flies with a mutation in the dFoxP gene in operant self- and world-learning. In the second, we used the same two experiments to test flies in which we had experimentally targeted the dFoxP gene such that its expression was reduced. Both methods yielded essentially the same result: dFoxP is necessary for operant self-learning but not for operant world-learning, lending support to the hypothesis that operant self-learning may be one of the evolutionarily ancestral capacities which had to exist for language to be able to evolve (i.e., an exaptation).

Another parallel between operant and language learning is the fact that prolonged practice leads to an automatization of the movements required. Only when a language is new do we need to think about the pronunciation and articulation of words and sentences. Once we are fluent, we only need to articulate our thoughts. Similarly, other movements can be trained with feedback until they become automated. Riding a bike, writing, tying shoe-laces, etc. are all examples of such automatic behaviors called skills or habits. If the learning mechanism for which dFoxP is required constitutes an exaptation for language acquisition and the speech component of language is a special form of a skill or a habit, then dFoxP mutant flies should be deficient in habit formation. To test this hypothesis, we used dFoxP mutant flies in a prolonged operant world-learning paradigm known to induce habits (5). Further corroborating our hypothesis, these mutant flies showed a severe deficit in habit formation.

In vertebrate animals, mutations in the FoxP2 gene lead to alterations in the brain structure of affected individuals (1). This is thought to be due to the ability of FoxP genes to alter the expression of other genes directly involved in brain development. To test whether the fly dFoxP gene is also involved in brain development, we reconstructed the three-dimensional structure of the brains of flies with a mutated dFoxP gene in the computer. Using computer-assisted volume analysis, we discovered alterations in fly brain structure which were too subtle to spot with the human eye, even at high magnifications. These results indicate that in flies, as in vertebrate animals, FoxP genes may act as gene regulators during brain development.

 

Taken together, our results provide evidence for a structural and functional conservation of FoxP genes since the split between vertebrate and invertebrate animals. This ‘deep’ homology spans vastly different brain organizations.

Source: Mendoza E, Colomb J, Rybak J, Pflüger H-J, Zars T, Scharff C, Brembs B (2014): Drosophila FoxP mutants are deficient in operant self-learning. PLoS ONE, doi:10.1371/journal.pone.0100648

Raw data: Mendoza, E; Colomb, J; Rybak, J; Pflüger, H-J; Zars, T; Scharff, C; Brembs, B (2013): Drosophila FoxP molecular, anatomical and behavioral raw data. figshare. https://dx.doi.org/10.6084/m9.figshare.740444

 

REFERENCES.

  1. Bolhuis JJ, Okanoya K, Scharff C (2010) Twitter evolution: converging mechanisms in birdsong and human speech. Nature Reviews Neuroscience 11:747-759.
  2. Skinner BF (1957) Verbal Behavior (Copley Publishing Group).
  3. Brembs B, Plendl W (2008) Double dissociation of PKC and AC manipulations on operant and classical learning in Drosophila. Current Biology 18:1168-1171.
  4. Chomsky N (1959) A Review of B. F. Skinner’s Verbal Behavior. Language 35:26-58. Available at: https://cogprints.org/1148.
  5. Brembs B (2009) Mushroom bodies regulate habit formation in Drosophila. Current Biology 19:1351-5.

Posted on June 25, 2014 at 23:06 6 Comments
Jun24

No need to only send your best work to Science Magazine

In: researchblogging • Tags: conditioning, Drosophila, FoxP, GlamMagz, learning, neurogenetics, neuroscience

The data clearly show that publications in Cell, Nature or Science (CNS for short), on average, cannot be distinguished from other publications, be it by methodology, reproducibility or other measures of quality. Even their citation advantage, while statistically significant, is so small that it is practically negligible. Regardless, individual examples sometimes serve to illuminate the data and drive some facts home. There are more or less extreme examples of CNS publications not meeting the expectations to which they are commonly held. There is no need to rehash all the examples that went through the media; instead, I'll cite a paper from my field (where I feel reasonably competent) as an extreme case of an obviously flawed paper making it into the journal Nature.

However, one doesn’t even need to look at the extreme cases. Let’s look at a paper in Science Magazine I actually quite like. It’s an interesting topic, a beautiful experimental paradigm, and the results in the wildtype flies are truly exciting. Their mutant and transgenic results, if correct, are tantalizing, as they run counter to essentially all available literature on the gene in question. I personally know and like the senior author of the paper, Gero Miesenböck, and appreciate all of his work, including this publication. All I’ll try to do here is make the case that this paper is an average paper in terms of the level of evidence provided to support its claims. Not obviously worse than most papers in our field, but definitely nowhere near the level of evidence that colleagues ignorant of the data would assume to be required for a CNS paper. All papers have strengths and weaknesses; some have more uncertainty associated with their results, others less. The data suggest that there isn’t much difference between journals, on average, and I’ll try to use some details in this Science paper to exemplify that.

The authors state in their title that the gene dFoxP influences the speed of perceptual decision-making in Drosophila fruit flies. They test the flies by first conditioning them to avoid a given odor using electric shocks. The choice test is then done in a chamber on individual flies. Two odors enter the chamber from opposite ends and exit in the middle. The fly walks back and forth between the ends of the chamber, and avoidance is measured by where the fly spends most of its time: in the half with the odor associated with electric shock, or in the half with the control odor. In the current study, the authors made the control odor increasingly similar to the one associated with shock and found that the more similar the odors were, the more time wildtype flies spent in the middle of the chamber, where they had to decide whether to keep walking or make a turn. That is definitely an exciting finding, suggesting that more similar odors require more processing time for an avoidance decision than less similar odors.

In order to get a handle on the biological processes underlying this interesting phenomenon, the authors conducted a screen of genetically modified flies, looking for mutants with longer decision times than wildtype flies. One of the candidates that showed up in the screen is the Drosophila orthologue of a gene involved in language in humans, FoxP. It seems hard to understand what a gene generally thought to be involved in motor learning has to do with this perceptual task, but data are data, unless there are alternative explanations. Three main concerns raise the suspicion that the involvement of dFoxP in this task may not be as straightforward as the title of the paper suggests.

  1. It is common practice to homogenize the genetic background of mutants with that of the control strain to which the mutants are compared. This is done to exclude the possibility that genetic variation outside the gene of interest is responsible for any differences. However, the authors do not state whether, and if so for how many generations, the mutant flies were outcrossed to the proper wildtype genetic background. Given that the flies came from a screen, it is unlikely that all strains tested in the screen were outcrossed. For instance, the alleles described in the paper show significant lethality which disappears when outcrossed for six generations, suggesting that their general level of fitness and health is affected by their genetic background. It is thus possible that factors in the genetic background, and not dFoxP, are responsible for the phenotype. Given the track record of the Miesenböck lab, it is unlikely they did not outcross the flies, but at the very least, the number of generations of outcrossing would have to be listed in the paper.
  2. A common procedure to test for genetic background effects, in addition to outcrossing (which can reduce but not eliminate them), is to use an additional method to manipulate the gene in question. The authors used RNA interference (RNAi) to attempt to down-regulate the expression of dFoxP. It is common practice to use quantitative PCR to test the effectiveness of this method, as a) RNAi usually does not abolish gene expression completely and b) it is known to have off-target effects, i.e., it may affect the expression of other genes besides or instead of the target gene. The line used in this study comes from a collection known to have problems with off-target effects, so testing effectiveness is especially important. However, no evidence that such a test was performed is described. Below is what such data would look like if you were to order all RNAi constructs currently available, drive their expression in all neurons (with nSyb-GAL4) and then have an undergraduate student run a whole slew of qPCR reactions to compare the expression (in fly heads) of all three dFoxP isoforms to the respective control strain (many thanks to Joel and postdoc Axel for all that work!): No effect!
    relative expression of dFoxP isoforms in six different RNAi lines

    I cannot emphasize enough that the graph above represents very preliminary data. It’s data from one undergraduate student who had prior experience with qPCR before coming to our lab, but who did not do this work full-time. There definitely need to be more biological replicates, and some additional drivers need to be used (e.g., an actin driver to drive RNAi in all cells). On the other hand, the results match my own qPCR results with these lines and results from another laboratory we collaborate with. So while I would not exclude that we may see a significant reduction in dFoxP expression with some technical tweak, the currently available data suggest that none of the available dFoxP RNAi lines significantly knocks down dFoxP expression, including the one tested by DasGupta et al. (15732V). Our work also points to a reason why none of the lines seems to have a detrimental effect on dFoxP expression: we have identified polymorphisms in the dFoxP gene which could bias the RNAi process towards sequestration, rather than degradation, of the mRNA, potentially explaining some of the very high values in the plot above. Thus, it is possible (and we consider it even very likely) that some or all of these lines actually do affect dFoxP expression in the way intended. However, we currently have little means of ensuring it’s not an off-target effect after all.
    I’d thus tentatively conclude that the phenotype DasGupta et al. have discovered can indeed be ascribed to dFoxP action in this task. However, at this point, there is insufficient data to make that statement with any reasonable certainty (no matter how likely I personally find that to be the case!).

  3. Finally, previous work on FoxP both in other animals and in flies (including our own paper coming out tomorrow, watch this space after 5pm EST on June 25) suggests an involvement of dFoxP in motor learning and perhaps also (consequently?) motor coordination. In fact, our work shows that learning about external stimuli is fine in FoxP mutant flies. It is thus critical to make sure that any animals showing a deficit in this task can perform the required movements (walking, turning, starting, stopping) accurately. The authors attempted to ensure this by measuring the time spent in sections of the chamber other than the decision section in the middle. However, one would assume that flies mostly turn or sit at the ends of the chamber and in the decision sector, with fewer turns and pauses in the sectors in between. Thus, by evaluating sectors in which the flies mainly walk straight, the authors may be underestimating the contribution of turns and no-movement episodes to the decision-making process. If that is correct (and the brief methods section makes it difficult to be sure precisely how, and on which sectors, the calculations were performed), dFoxP-manipulated flies may either have a problem starting to walk from a stop, or have difficulty turning in the chamber, and the authors would interpret this as increased decision time rather than a motor problem.
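Point 2 above hinges on qPCR-based relative expression. The usual way to compute it is the 2^(-ΔΔCt) method; since neither this post nor the Science paper specifies an exact analysis pipeline, the sketch below is generic, with made-up Ct values purely for illustration:

```python
# Relative expression via the standard 2^(-ddCt) method for qPCR data.
# All Ct values below are hypothetical, for illustration only.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in a manipulated line vs. its control,
    normalized to a reference (housekeeping) gene."""
    d_ct_line = ct_target - ct_ref            # normalize within the RNAi line
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize within the control
    dd_ct = d_ct_line - d_ct_ctrl
    return 2 ** -dd_ct

# Hypothetical Ct values for one dFoxP isoform in one RNAi line:
fold = relative_expression(ct_target=25.0, ct_ref=18.0,
                           ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(f"relative dFoxP expression: {fold:.2f}")  # prints 0.50, i.e. ~50% knockdown
```

Values near 1 mean no knockdown, which is what the plot above shows for all six lines; values well above 1 would correspond to the very high values mentioned in the text.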

In summary, it may well be that dFoxP entails all the functions reported by DasGupta et al., and this would indeed make for a giant head-scratcher about the (ancestral) functions of FoxP, given the mountain of work implicating it in motor, rather than perceptual, processes. However, at this point, some more work (at least a more detailed methods description!) is required to be reasonably certain either way. Personally, I do think that the phenotype is due to the authors’ FoxP manipulations, but I may be wrong; the evidence is not strong enough either way. I also suspect that the phenotype may in fact be due to motor defects, but I could be wrong there as well, as the data are inconclusive. At this point, we need more experiments and/or data analysis to show who is wrong. Again, I still like the paper; it’s the kind of work that drives our field forward.

Speaking more generally: there is no need for any researcher to wait (perhaps indefinitely?) until they have the perfect data set, with unambiguous results and foolproof conclusions. Just send your plain decent work to CNS magazines as well! If they publish it, you will get a job, and nobody but some lone blogger will ever ask about the content of that paper again.


DasGupta, S., Ferreira, C., & Miesenböck, G. (2014). FoxP influences the speed and accuracy of a perceptual decision in Drosophila. Science, 344(6186), 901-904. DOI: 10.1126/science.1252114

Posted on June 24, 2014 at 15:04 19 Comments
Jun17

Your university is definitely paying too much for journals

In: science politics • Tags: costs, journals, pricing, publishers, publishing, subscriptions

There is an interesting study out in the journal PNAS: “Evaluating big deal journal bundles“. The study details the disparity in negotiation skills between different US institutions when haggling with publishers over subscription pricing. For Science Magazine, John Bohannon of “journal sting” fame wrote a news article about the study, which did little to win back the respect he lost with his ill-fated sting piece. While the study itself focused on journal pricing among US-based institutions, Bohannon’s news article, where one would expect a somewhat broader perspective than in the commonly more myopic original papers, fails to mention that even the ‘best’ big deals grossly overcharge the taxpayer. Here is the figure from the article, apparently provided by the PNAS authors:

Journal subscription prices

This graph shows that some universities pay more for subscriptions than others. I’m not sure what exactly -130% is supposed to mean; I take it that UMass didn’t receive money from Springer, but still paid $168,224. So I read this graph to mean that there are differences of up to 200% between what libraries pay publishers, i.e., one university may pay up to 200% on top of what another library pays for the same content: when one pays one million, another has to pay three. I’m not entirely sure that this is the correct reading of the Y-axis, but it’s the best I can do for now.

Being charged 200% more than other libraries for the same service may hurt, but consider what we would be paying if we didn’t use publishers, but instead published all our papers in a system like SciELO:

Comparison between legacy subscription publishers and SciELO in US$ prices per article published.

According to a Nature article citing Outsell, we currently pay US$5,000 per article to prevent public access to it, while the overall cost of a publicly accessible article in SciELO is only US$90. Try to explain that to a taxpayer on the street: you pay $5,000 for each article you’re not allowed to read, instead of just $90 for each article you could read. In the light of such numbers, it is a sign of a truly warped perspective when people can still discuss a few percentage points more or less for what they pay to block public access to research. Because this is what libraries do by paying subscription fees: they pay to block public access to research.

Be that as it may, if I were to calculate any percentages from these differences, I could say that subscriptions are in excess of 5000% more expensive than SciELO or that SciELO would only cost institutions 1.8% of what they are currently paying for the same service, or that we are overpaying legacy publishers by 98.2%. So either way you see it, we could pay less than 2% of the current cost or are currently paying more than 5000% too much – compared to these figures, the 200% seems like a totally negligible number to me. In the words of Science Magazine: no matter what your university paid for subscriptions, they definitely got a horrible deal – even if it was the best deal in the country.
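The percentages in the paragraph above follow directly from the two per-article figures; a quick back-of-the-envelope sketch, using only the numbers quoted in this post:

```python
# Per-article cost figures quoted in this post (Outsell via Nature, and SciELO).
legacy = 5000.0  # US$ per legacy subscription article
scielo = 90.0    # US$ per SciELO article

fraction = scielo / legacy                 # what SciELO costs relative to legacy
overpayment = (legacy - scielo) / legacy   # share of legacy spending that is excess
markup = (legacy - scielo) / scielo        # how much more expensive legacy is

print(f"SciELO as share of legacy cost: {fraction:.1%}")       # 1.8%
print(f"overpayment to legacy publishers: {overpayment:.1%}")  # 98.2%
print(f"legacy markup over SciELO: {markup:.0%}")              # 5456%
```

Against a markup of well over 5000%, the up-to-200% spread between libraries really is noise.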

Nevertheless, given the effective distraction machine that Science Magazine is turning into, I expect people will discuss the irrelevant 200% much more extensively than the crucial 1.8% or 5000%.

What we should instead discuss is the following:

Why are we paying to block public access to research, when we could save billions by allowing access?

Like this:

Like Loading...
Posted on June 17, 2014 at 14:44 104 Comments
May08

If you comment online, you’re on stage

In: news • Tags: contrarians, Frontiers, science denialism, unpersuadables

Apparently, the outrage of science denialists over their exposure in a recent psychological paper shows no signs of abating. It was denialists’ complaints and legal threats of libel/defamation suits that started the investigation of the paper, and in the comments to my post announcing my resignation as editor for Frontiers, the denialists likewise complained that their public blog comments had been used in a scientific paper. Blog responses by Henry Markram, editor-in-chief of Frontiers, confirmed my decision to resign: essentially, he sided with the denialists and opined that public comments were not fair sources for psychological study.

Let’s stop for a moment and ponder if there are some analogous offline scenarios to taking a public online comment and analyzing it.

Literature springs to mind: every literature department at every university takes published words and analyzes them. Apparently, Markram and the science denialists think this should all be abolished, or at least that it is a questionable practice which ought to be better regulated. Perhaps they think that literature departments should study literature without mentioning the authors? And once literature departments are up for grabs, why stop there? Why not prohibit political analysts from telling the public about their politicians? Obviously, you’d start with those analysts critical of the ruling politicians. Why not fire all music critics from newspapers and magazines (those that still employ them, that is)? Heck, isn’t “American Idol” or “America’s Next Topmodel” and all the other casting shows exactly analogous: taking a public performance and scrutinizing it publicly? It’s perhaps worth reminding everybody that online comments are public performances, like it or not.

In essence, what Lewandowsky et al. have done in their ‘recursive fury’ paper is in more than a few ways akin to what the jury does in casting shows. They were the jury when the science denialists went up on stage to sing and dance. Had that actually happened offline, Lewandowsky et al.’s jury comments might have gone like this:

“When you sing, it sounds like the quacking of a duck!”

“When you dance, you have the grace and elegance of an antelope – no, wait, what was the name of that animal with the trunk again?”

“You are seriously coyote ugly!”

After it occurred to them what fools they had made of themselves on stage, the denialists went to the TV station airing the show (Frontiers) to complain that broadcasting their embarrassing performance with the negative jury comments was defamatory. Obviously, in the real world, the TV channel people would have ROTFLMAO. In science publishing, Frontiers caved in and axed the broadcast.

Moral of the story: if you can’t take the consequences, don’t get up on stage.

Posted on May 8, 2014 at 12:57 12 Comments
Apr17

Conflicts of interest even for ‘good’ scholarly publishers

In: science politics • Tags: libraries, open access, publishers, publishing

Thinking more generally about the “Recursive Fury” debacle, something struck me as somewhat of an eye opener: the lack of support for the authors by Frontiers and the demonstrative support by their institution, UWA (posting the retracted article). Even though this might be the first time a scholarly journal caved in to legal pressure from anti-science groups, it should perhaps come as no surprise. Ugo Bardi made a very valid point when he recently wrote:

The problem, here, is […] that we are stuck with a century old model of communication: expensive and ineffective and, worse, easily subverted by special interest groups

I disagree with his suspicion that Frontiers is a Ponzi scheme, as I quite like the federated structure of the enterprise: we are thousands of scientists and our work needs to be reviewed by thousands of scientists. Any system we might come up with for scholarly communication will, by necessity, be gigantic. But his insight quoted above really deserves special attention and ought to be a thought provoker for anybody in our business.

Any publisher always has an inherent conflict of interest: whether it is the GlamMag hyping cold fusion, stem cells or the latest flashy social psychology experiment to sell subscriptions, the fledgling start-up that sees its venture going down a legal drain, or the idealistic non-profit trying to attract a few more papers to hire one more developer for the next great innovation: for all of them, the financial viability of the enterprise comes before science. This conflict of interest is usually not a major issue, but it comes up often enough to make me wonder whether this is really a good idea, especially in this day and age, when digital publications cost virtually nothing.

As mentioned above, our own institutions obviously do not have this conflict of interest; on the contrary, they are the reason professional scientists exist in the first place. They can host our papers when publishers, even the ‘good guys’ like Frontiers, cannot.

Interestingly, just a few weeks earlier, Richard Poynder, after many years of covering the open access movement, had already gotten me started thinking along those lines, noting:

I believe the movement made a mistake in allying itself with OA publishers. What it failed to appreciate is that publishers’ interests are not the same as the interests of the research community.

Another piece of evidence of these conflicts of interest is the constant struggle for the kind of licenses attached to articles by OA journals. Clearly, liberal re-use licenses are in the best interest of the one paying the bills, the tax-payer. Publishers obviously do not share these interests (neither do some authors, btw.). And so, there are constant attempts by various publishers to gain more control over our works, even if they are accessible for anyone to read.

These recent events have triggered the suspicion that maybe the entire concept of scholarly publishers is antiquated, irrespective of how open, innovative or non-profit the publisher is. In addition to the inevitable conflicts of interest, none of the publishers seriously consider all three of our intellectual outputs: code, data and texts. They are only after our text summaries, i.e., our papers. The result, in an age of ever-sinking costs of making digital objects public, is that we overpay publishers by so much that no money is left for the institutional infrastructure serving all three output modalities. Thus, even if the conflicts of interest were not an issue, separating the fruits of our intellectual labor not only into tens of thousands of journals, but also into separate, non-interoperable silos for code, data and text, makes no sense at all given today’s technology and is outright insanity given tomorrow’s.

Maybe I should resign from all my volunteer positions with publishers?

Posted on April 17, 2014 at 18:22 10 Comments
Apr09

Recursive fury: Resigning from Frontiers

In: news • Tags: contrarians, delusionals, unpersuadables

Last month, I was alerted to an outrageous act of a scientific journal caving in to pressure from delusionals demanding that the science about their publicly displayed delusions be hidden from the world: the NPG-owned publisher Frontiers retracted a scientific article with which they could not find anything wrong. The article

attracted a number of complaints which were investigated by the publisher. This investigation did not identify any issues with the academic and ethical aspects of the study. It did, however, determine that the legal context is insufficiently clear and therefore Frontiers wishes to retract the published article.

Essentially, this puts large sections of science at risk. Every geocentrist, flat earther, anti-vaxxer, creationist, homeopath, astrologer, diviner, and any other unpersuadable can now feel encouraged to challenge scientific papers in court. No, actually, they don’t even have to do that; they only have to threaten court action, and publishers will cave in and retract the paper.

As if we needed any more evidence that publishers are bad for science.

Now even the supposedly “good guys” show that they are not really on the side of science. Instead of at least waiting for a lawsuit to be filed and perhaps attempting to stand their ground (as Simon Singh did), they just took the article down in what can only be called anticipatory obedience. This is no way to serve science.

A week or two ago, I talked with a Frontiers representative on the phone, and she explained a few things which prompted me to read the paper in question, so I could make up my own mind. After reading the paper, the attempted explanations on the phone rang hollow: I’m certainly not a lawyer, but if taking publicly posted comments and citing them in a scientific paper, discussing them under a hypothesis with a scientific track record and plenty of precedent, constitutes cause for libel or defamation lawsuits, it is certainly the law and not the paper which is at fault. It is quite clear why the content of the paper may feel painful to those cited in it, but as long as “conspiracist ideation” is not an official mental disorder, I cannot see any defamation. If you don’t want to be labeled a conspiracy theorist, don’t behave like one publicly on the internet. Therefore, after reading the paper, in my opinion, Frontiers ought to have supported their authors just as their home institution (UWA) is supporting them as its employees.

As the Frontiers representative did not disclose any details, as what she was able to disclose was both very general and hence not very convincing, and as I promised not to disclose even that, one can only speculate about the motivations and considerations at Frontiers that led them to throw their authors under the bus.

Clearly, if legal problems are cited, it’s always money that’s at stake; I’d be surprised if this were controversial. I have heard through the grapevine that Frontiers may have felt some pressure recently to make more money and to publish more papers. I was told that they have sent out literally millions of spam emails to addresses harvested from, e.g., PubMed, soliciting manuscript submissions. Obviously, a costly libel or defamation suit in the UK would not have been a positive on the balance sheets.

Alas, as much fun as all of this speculation may be, it is not really relevant to my conclusion: Frontiers retracted a perfectly fine (according to their own investigation) psychology paper due to financial risks to themselves. If Frontiers were to claim they are protecting their authors from lawsuits by removing the ‘offending’ article, that could only be seen as at best a rather lame excuse or at worst rather patronizing. This is absolutely no way to “empower researchers in their daily work“. In the coming days I will send resignation letters to the Frontiers journals to which I have donated my free time for a range of editorial duties. Obviously, I will complete the tasks I have already started, but I will not accept any new tasks at Frontiers – at least not until they show more support for their authors.


P.S.: I should perhaps add that the reason I have supported Frontiers almost since its inception is that they were, and in many respects still are, among the most innovative publishers out there, driving our communication system away from the entirely antiquated status quo. Of course, Frontiers still serves this particular function very well. My criticism very specifically targets the handling of this particular paper and leaves all the other positive contributions of Frontiers to our publishing ecosystem intact. I guess much of my personal disappointment comes from a feeling of betrayal, having felt that Frontiers was on the side of researchers for so many years. I would have expected such behavior from legacy publishers, but not from Frontiers. This incident, together with several other events over the past month or two, has prompted me to think more generally about my involvement with publishers, and there will be another post on this topic at some point.

Posted on April 9, 2014 at 18:11 150 Comments
Mar12

FIRST: the Research Works Act all over again

In: science politics • Tags: FIRST, lobbyism, polticians, publishers, publishing, RWA

Do you remember the RWA? It was a no-brainer already back then that the 40k Elsevier spent was well invested: for months, Open Access activists were busy derailing this legislation, leading to a virtual standstill on all other fronts. Now, just over two years later, two Republican representatives have introduced the Frontiers in Innovation, Research, Science and Technology (FIRST) Act. According to SPARC:

This provision would impose significant barriers to the public’s ability to access the results of taxpayer-funded research. Section 303 of the bill would undercut the ability of federal agencies to effectively implement the widely supported White House Directive on Public Access to the Results of Federally Funded Research and undermine the successful public access program pioneered by the National Institutes of Health (NIH) – recently expanded through the FY14 Omnibus Appropriations Act to include the Departments of Labor, Education and Health and Human Services.

The two sponsors of the Bill are Chairman Lamar Smith (R-TX) and Rep. Larry Bucshon (R-IN). Not surprisingly, both sponsors are backed up by publisher funding: Lamar Smith receives annual contributions from Elsevier and other publishers. Both sponsors received contributions from a large number of scholarly (primarily medical) associations that also publish their own subscription journals. Some of these contributions were on the order of several tens of thousands of dollars. Among these scholarly societies were:

Society | Journal(s) | Published by
American Medical Association | JAMA network | AMA
American Society of Anesthesiologists | Anesthesiology | Wolters Kluwer
College of American Pathologists | Archives of Pathology & Laboratory Medicine | Allen Press
American College of Radiology | Journal of the American College of Radiology | Elsevier
Society of Thoracic Surgeons | Annals of Thoracic Surgery | Elsevier
American Academy of Orthopaedic Surgeons | Journal of the American Academy of Orthopaedic Surgeons; The Journal of Bone and Joint Surgery; Orthopaedic Knowledge Online Journal | Highwire Press; Kent R. Anderson; AAOS

Of course, nobody knows how much influence these contributions bought, but this short list already reads like a who’s who of corporate publishers with a track record of lobbying against public access to public research. One cannot exclude that it is pure coincidence that these two politicians with a track record of publisher contributions are now drafting publisher-friendly legislation – and thereby doing the public a disservice.

Posted on March 12, 2014 at 23:21 28 Comments

bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License.

[ Placeholder content for popup link ] WordPress Download Manager - Best Download Management Plugin

bjoern.brembs.blog
Proudly powered by WordPress Theme: brembs (modified from Easel).
%d