bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


How free are academics, really?

In: science politics • Tags: academic freedom, DFG, funding, institutions, open access, publishing

In Germany, the constitution guarantees academic freedom in article 5 as a basic civil right. The main German funder, the German Research Foundation (DFG), routinely points to this article of the German constitution when someone suggests they should follow the lead of NIH, Wellcome et al. with regard to mandates requiring open access (OA) to publications arising from research activities they fund.

The same argument was recently made by Rick Anderson in his THE article entitled “Open Access and Academic Freedom“. When commenters on his article and on Twitter pointed out that the widespread tradition of hiring, promoting and rewarding scientists for publishing in certain venues constitutes a much worse infringement, Mr. Anderson replied with a very formalistic argument: such selection by publication venue amounts to mere “professional standards”, which, by definition, cannot impede academic freedom (even if they essentially force scientists to publish in certain venues), whereas only “official policies” can infringe on academic freedom (even if, like many OA mandates, they are not actually enforced and thus have little or no effect on the choice of publication venue). A potential infringement is thus considered more of a threat to academic freedom than actual infringements, as long as the actual infringements due to ‘professional standards’ are not explicitly written down somewhere and labeled ‘policy’.

While one may take such a formalistic approach, I fail to see how such a position can be either valuable or convincing.

If everyone took that position, our institutions would simply make their policies less specific and call them “professional standards”. They could then fire us at will without ever touching our academic freedom: they would just need to define those standards loosely enough. Hence, there is no academic value in such a formalistic approach: an infringement of academic freedom is always an infringement, no matter what you call it. The important aspect of such infringements (which may be unavoidable) is not whether they are written down as explicit ‘policy’, but that we must have very good reasons for them, such as tangible benefits for science and/or society.

I also doubt this argument will convince anyone, as it is plainly too far from reality to be worth seriously considering. Imagine two scholars working in the same field, collecting the same data, making the same discoveries and solving the same problems. One feels forced to publish their work, against their own will, in piecemeal fashion and on an ongoing basis in the venues implicitly prescribed by their field; the other exercises their academic freedom and publishes the exact same discoveries and solutions in one big text on their blog at the end of their career. This example makes clear that we already face a very tangible and real choice: exercise your academic freedom or have a job in academia; the two are incompatible.

Today, it seems unavoidable that society won’t accept the value of the “full academic freedom” of the AAUP that Rick Anderson referenced, and hence won’t tolerate us exercising it. But society had better provide some pretty darn good reasons for curtailing our ‘full’ civil rights. I can see how forcing us to share our work with the society that funded us would constitute such a reason. I cannot see how forcing us to publish in, e.g., venues with a track record for fraud and errors would.

Posted on December 17, 2015 at 15:22 44 Comments

How to write your grant proposal?

In: science politics • Tags: funding, grantsmanship, peer-review

Posting my reply to a review of our most recent grant proposal has sparked an online discussion both on Twitter and on Drugmonkey’s blog. The main direction the discussion took was what level of expertise to expect from the reviewers deciding over your grant proposal.

This, of course, depends heavily on the procedure by which the funding agency chooses its reviewers. At the US-based NIH, as I understand it, reviewers are picked from a known panel; you just don’t know which individuals on the panel will review your proposal. This makes it comparatively easy to target this audience when writing your proposal. The German funding agency we submitted our grant to picks any reviewer, world-wide (I think the NSF in the US is similar; at least I have reviewed a few grants for them). After the review, a separate panel of peers (which may, but commonly doesn’t, include one of the reviewers) decides which grants from the pool get funded, usually without reading the grants in much detail, if at all. In that case, it is impossible to have a clear picture of your audience.

My first grant/fellowship was funded in 2001. Before and since then, I have had plenty of rejections. I believe my overall funding rate over these almost 15 years is somewhere around 20±5%, which means I must have written just under 50 grant proposals in this time. Initially, when my proposals (and manuscripts) got rejected, it was suspiciously often with comments that revealed misunderstandings. Once, a grant reviewer even explicitly admitted that they didn’t understand what it was that I proposed to do. I then started to simplify my proposals; in my desperation I did what many online commenters propose: I added what I thought was very basic knowledge. My proposals became significantly more understandable, but also significantly longer. Imagine my disappointment when the feedback I then received was twofold: the reviewers felt insulted that I addressed them at such a basic level, and the funder told me my proposals were too long: “good proposals are succinct, maybe 10-15 pages total, and compelling”.
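The arithmetic behind that estimate is simple enough to check. In the sketch below, the number of funded grants (ten) is my own illustrative assumption, inferred from the numbers above rather than stated anywhere:

```python
# Back-of-the-envelope check of the proposal count above. Only the ~20%
# success rate and the ~15-year span come from the text; the count of
# funded grants is an assumption for illustration.
funded_grants = 10           # assumed, not stated in the post
success_rate = 0.20          # ~20%, the middle of the 20±5% estimate
years = 15

proposals_written = funded_grants / success_rate
proposals_per_year = proposals_written / years

print(proposals_written)               # 50.0 -- "just under 50" at a slightly higher rate
print(round(proposals_per_year, 1))    # 3.3 proposals per year
```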

So here’s the rule then: you need to write your proposal in a way such that your grandma can understand it, without the reviewers noticing that you are insulting their intelligence and with no more than 1-2 sentences per explanation.

Honestly, I find this quite far beyond my capabilities. Instead, I have since focused on the easier task of being succinct at the expense of explaining everything. For the last ~8 years I’ve assumed that the people reading my proposals are experts either in the model system(s) I use or in the problems I study, but not both. The implicit expectation is that the former don’t need every motivation behind each experiment explained (and won’t require it, either), and the latter won’t be too concerned with the technical details of a model system they might not be all that familiar with. Until this last proposal, this had worked to the extent that even for the ~80% of rejections I received, the reviewer comments revealed neither obvious incompetence nor substantial misunderstandings. However, given the system by which reviewers are selected, it is of course impossible to know whether this was due to my improved writing or to the chosen reviewers. Moreover, the field has grown substantially and become much more popular in this time, so it may simply have been down to a larger pool of experts than a decade ago.

It is also important to keep in mind that with each submission, even of the same grant, different reviewers may be assigned. At the ERC, for instance, one of my proposals was rejected because the reviewers, while very enthusiastic about the idea, questioned the feasibility of the method. In the revision, the reviewers thought the method was too established to warrant funding, and that the research question wasn’t all that interesting, either.

There were two very helpful comments in the Twitter discussion that I will keep in mind for future proposals, both from Peg AtKisson, who professionally supports grant writers:

@brembs Disagree because diverse inputs tend to lead to better outcomes. Only experts reviewing in narrow area leads to in-group bias.

— M. S. AtKisson (@iGrrrl) December 3, 2015

I agree that minimizing in-group bias is a goal worth investing in. However, this goal comes at a cost (which is an investment, I’d argue): you can’t have non-experts review and expect the author not to need more words. You also have to accept that if you promote non-expert review, you may annoy the experts with more verbose applications. If there are no explicit instructions, it is virtually impossible to know where on this trade-off one has to land.

@brembs "We have chosen X approach because… We did not chose Y because…" Show your thinking. @drugmonkeyblog

— M. S. AtKisson (@iGrrrl) December 3, 2015

The suggestion to explicitly mention methods that you rejected as unsuitable is one worth pondering. If there are no word limits, this sounds very compelling, as it “shows your thinking”, which is always helpful. However, it is quite difficult to decide which ones to include, as this, again, involves the risk of insulting the reviewers (i.e., “only an idiot would have thought to use approach Y!”). Again, the instructions from the funder and experience will have to suffice, but I’ll definitely spend more time thinking about alternative, less suitable approaches next time.

Back to our particular case: the main issues boil down to three criticisms. Since all of them concern the technical details of our experimental techniques, it is fair to assume that the reviewer considers themselves competent at least on the technical/model-system side of the proposal.

The first issue concerns a common laboratory technique which I have taught in undergraduate classes, which is widely used not only in our field but in biological/biomedical research generally, for which Fire and Mello received the Nobel prize in 2006, and whose technical details, as far as this grant requires them, are covered on the Wikipedia page (and of course in all textbooks). Nothing beyond this basic understanding is required for our grant. The criticisms raised only make sense if the reviewer is not aware of the sequestration/degradation distinction.

The second concerns an even older technique, also used throughout biological/biomedical research, for which the Nobel prize was handed out back in 1984; the technical information is likewise on the Wikipedia page, and it is of course part of every undergraduate biology/medicine curriculum I know of. Moreover, this technology is currently being debated in the most visible places in the wake of the biomedical replicability crisis. Nothing beyond this most basic understanding is required for our proposal. The criticisms of the reviewer only make sense if the reviewer is not aware of the differences between monoclonal and polyclonal antibodies.

From where I sit, this kind of basic knowledge is what can be expected from a reviewer who picks these two methods (out of the four main grant objectives) as their target for criticism.

The third issue is a reviewer classic: the reviewer chided us for proposing a method we didn’t propose, and suggested we instead use a method we had already described prominently in our proposal, complete with a dedicated figure making unambiguously clear that we weren’t proposing the technique the reviewer rightfully rejected, but the very one they recommended. Everything the reviewer wrote here was correct, but so was our proposal: it paralleled what they wrote.

In summary: of our four objectives in the grant, this reviewer picked three for criticism. Two of the three criticisms reveal a lack of undergraduate-level knowledge of very common, widespread techniques. The third issue is nonexistent, as the grant already prominently describes what the reviewer suggests. I will take the online suggestions and incorporate them into the revised version of the grant, but there really isn’t anything helpful one can take from this particular review. For me personally, such a review is still the exception, but it chimes with what a lot of colleagues, on both sides of the pond, complain about.

Posted on December 4, 2015 at 14:48 21 Comments

Why cutting down on peer-review will improve it

In: science politics • Tags: grants, peer-review

Update, Dec. 4, 2015: With the online discussion moving towards grantsmanship and the decision of what level of expertise to expect from a reviewer, I have written down some thoughts on this angle of the discussion.

With more and more evaluations, assessments and quality control, the peer-review burden has skyrocketed in recent years. Depending on field and tradition, we write reviews of manuscripts, grant proposals, Bachelor’s, Master’s and PhD theses, students, professors, departments or entire universities. Top reviewers at Publons clock in at between 0.5 and 2 reviews for every day of the year. At such a frequency, reviews presumably cannot be very thorough, or the material under review must be comparatively less complex or deep. But even at a much lower frequency, the time constraints imposed by an increasing reviewer load make thorough reviews of complex material difficult. Hyper-competitive funding situations add incentives to summarily dismiss work perceived as infringing on one’s own research. It is hence not surprising that such conditions bring out the worst in otherwise well-meaning scientists.

Take for instance a recent grant proposal of mine, based on our recent paper on FoxP in operant self-learning. While one of the reviewers provided reasonable feedback, the other raised issues that are either demonstrably baseless or already addressed in the application. I will try to show below how this reviewer, who obviously has some vague knowledge of the field in general but not nearly enough expertise to review our proposal, should either have declined to review or at least have invested some time reading the relevant literature, as well as the proposal, in more depth.

The reviewer writes (full review text posted on thinklab):

In flies, the only ortholog [sic] FoxP has been recently analyzed in several studies. In a report by the Miesenböck lab published last year in Science, a transposon induced mutant affecting one of the two (or three) isoforms of the FoxP gene was used to show a requirement of FoxP in decision making processes.

Had Reviewer #1 been an expert in the field, they would have recognized that this publication is missing several crucial control experiments, both genetic and behavioral, that would be needed to draw such firm conclusions about the role of FoxP. For the non-expert, these issues are mentioned both in our own FoxP publication and, in more detail, in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In principle, this proposal addresses important and highly relevant questions but unfortunately there are many (!) problems with this application which make it rather weak and in no case fundable.

Unfortunately, there are many problems with this review which make it rather weak and in no case worthy of consideration for a revised version of the proposal.

The preliminary work mentioned in this proposal is odd. Basically we learn that there are many RNAi lines available in the stock centers, which have a phenotype when used to silence FoxP expression but strangely do not affect FoxP expression. What does this mean?

Had Reviewer #1 been an expert in the field, they would have been aware of the RNAi issues concerning template mismatch and the selection of targeted mRNA for sequestration and degradation, respectively. For the non-expert, we explain this issue with further references in our own FoxP paper and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

I have seen no arguments why the generation of additional RNAi strains is now all the sudden expected to yield a breakthrough result.

Had Reviewer #1 been an expert in the field, they would be aware that the lines we tested were generated as part of large-scale efforts to manipulate every gene in the Drosophila genome. As such, the constructs were designed against the reference genome, which of course does not precisely match every strain used in every laboratory, as any expert in the field is well aware (explained in more detail in this blog post). Consequently, designing RNAi constructs against the specific strain used for genetic manipulation, and subsequently crossing all driver lines into this genetic background (the well-established technique in the collaborating Schneuwly laboratory), reliably yields constructs that lead to mRNA degradation rather than sequestration. This discussion leaves aside the known tendency of the available strains for off-target effects, which compounds their problems. Dedicated RNAi constructs, such as the ones I propose to use, can be tested against off-targets beforehand.
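For non-expert readers, the mismatch point can be illustrated with a deliberately simplified Python sketch. This is not our actual design pipeline: it merely counts mismatches between an RNAi guide designed against the reference genome and the corresponding site in a hypothetical lab strain, and flags constructs whose mismatch load would favor sequestration over degradation. All sequences and the threshold are invented for illustration.

```python
# Deliberately simplified sketch, NOT our design pipeline: count mismatches
# between an RNAi guide designed against the reference genome and the
# corresponding site in a (hypothetical) lab strain. Many mismatches shift
# the outcome from mRNA degradation towards mere sequestration.
# The sequences and the 3-mismatch threshold are invented for illustration.

def mismatches(guide: str, target: str) -> int:
    """Count position-wise mismatches between two equal-length sequences."""
    if len(guide) != len(target):
        raise ValueError("sequences must be the same length")
    return sum(a != b for a, b in zip(guide, target))

def likely_degrades(guide: str, strain_site: str, max_mismatches: int = 3) -> bool:
    """Heuristic: few mismatches -> cleavage/degradation is plausible."""
    return mismatches(guide, strain_site) <= max_mismatches

guide = "AUGGCUAGCUAGGCUAAGCUU"         # 21-nt guide, designed on the reference
matched_site = "AUGGCUAGCUAGGCUAAGCUU"  # lab strain identical to reference
drifted_site = "AUGGAUAGCUUGGCAAAGAUU"  # lab strain with several SNPs

print(likely_degrades(guide, matched_site))   # True: degradation expected
print(likely_degrades(guide, drifted_site))   # False: sequestration risk
```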

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Quite similar we learn in the preliminary result section that many attempts to generate specific antibodies failed and yet the generation of mAbs is proposed. Again, it is unclear what we will learn and alternative strategies are not even discussed.

Had Reviewer #1 been an expert in the field, they would understand the differences between polyclonal and monoclonal antibodies, especially as antibody technology is currently being hotly debated in rather prominent venues.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

The authors could consider the generation of HA-tagged Fosmids /I minigenes or could use homologous recombination to manipulate the gene locus accordingly.

Had Reviewer #1 not overlooked our Fig. 5, as well as our citations of Vilain et al. and Zhang et al., they would have noticed that this type of genome editing is precisely what we propose to do.

One page 2 of the application it is stated that “It is a technical hurdle for further mechanistic study of operant self-learning that the currently available FoxP mutant lines are insertion lines, which only affect the expression level of some of the isoforms. ” This is not true! and the applicant himself states on page 11: “However, as the Mi{MIC} insertion is contained within a coding exon which is spliced into all FoxP isoforms, it is likely that this insertion alone already leads to a null mutation at the FoxP locus.” Yes, by all means the insertion of a large transposon into the open reading frame of a gene causes a mutation!!!! Why this allele, which is available in the stock centers, has not yet been analyzed so far remains mysterious.

Had Reviewer #1 actually engaged with our proposal, this would no longer be a mystery to them: the analysis of this strain is part of our proposal. If it had been possible to analyze this strain without this proposal, the proposal would not have been written. Had Reviewer #1 ever written a research proposal of their own, they would understand that proposals are written to fund experiments that have not yet been performed. Hence, Reviewer #1 is indeed part of the answer: without their unqualified dismissal of our proposal, we would already be closer to analyzing this strain.

Moreover, reading the entire third section of this application “genome editing using MiMIC” reveals that the applicant has not understood the rational behind the MiMIC technique at all. Venken et al clearly published that “Insertions (of the Minos-based MiMIC transposon) in coding introns can be exchanged with protein-tag cassettes to create fusion proteins to follow protein expression and perform biochemical experiments.” Importantly, insertions have to be in an intron!!!! The entire paragraph demonstrates the careless generation of this application. “we will characterize the expression of eGFP in the MiMIC transposen”. Again, a short look into the Venken et aI., paper demonstrates the uselessness of this approach.

Reading this entire paragraph reveals that Reviewer #1 has neither noticed Fig. 5 in the proposal, nor understood that we do not follow Venken et al. in our proposal (which is the reason we do not even cite Venken et al.), but Vilain et al. and Zhang et al. Precisely because the methods explained in Venken et al. do not work in our case, we will follow Vilain et al. and Zhang et al., where this is not an issue. Venken et al. are not cited in the proposal, as we expect the reviewers to be expert peers. Discussing and explaining such issues at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Moreover, just a few weeks ago, in the RMCE session of a meeting, I attended a presentation by the senior author of Zhang et al., Frank Schnorrer, in which he explained essentially the method I proposed (see Fig. 5 in the proposal). He later confirmed that there are no problems with using their RMCE approach for the specific case of the FoxP gene with its insertion in an exon. Hence, Dr. Schnorrer’s presentation, as well as my later discussion with him, confirmed the suspicion that Reviewer #1 not only lacks expertise in the current methods, but also failed to notice the alternative methods of Zhang et al. and Vilain et al., even though we cite these publications and provide an entire figure detailing the approach, on top of the citation and explanations in the text.

Finally, had Reviewer #1 been an expert in the field, they would be aware that the laboratory of Hugo Bellen is currently generating intron-based MiMIC lines for all those lines where the MiMIC cassette happened to insert elsewhere. Our statement in the proposal comes to mind in this respect: “In fact, by the time this project will commence, there will likely be a genome editing method published, which is even more effective and efficient than the ones cited above. In this case, we will of course use that method.”

The application requests two students. Although the entire application is far from being fundable, this request adds the dot on the i. The student is planned for the characterization of lines that are not available, characterization of antibodies that likely will not be on hand in the next two years and so on. In summary, this is a not well prepared application, full of mistakes and lacking some necessary preliminary data.

Had Reviewer #1 been an expert in the field, they would know that performing the kind of behavioral experiments we propose requires training and practice – time which is not required for applying established transgenic techniques. There is thus an inherent time lag between generating lines and testing them, owing to the more time-intensive training required for behavioral experiments. This lag can be accommodated and extended by hiring one student first and the second somewhat later.

In addition, as emphasized by Reviewer #1 themselves (and outlined in our proposal), there are still available lines that have not yet been thoroughly characterized, such that any remaining gap can easily be filled by characterizing these strains. If any of the available strains show useful characteristics, the corresponding new lines will not have to be generated. Moreover, many of the available transgenic lines also need to be characterized on the anatomical level (also outlined in the proposal).

Finally, by the time this project can commence, given the projects in the other groups working on FoxP, there will likely be yet more new lines, generated elsewhere, that also warrant behavioral and/or anatomical characterization. Thus, the situation remains as described in the proposal: two students with complementary interests and training are required, and a small initial lag between the students is perfectly sufficient to accommodate both project areas.

In this way, one can expect at least one year in which the first student can start generating new lines at a time when the second student either has not started yet, is training or is testing lines that already exist.

These issues are only briefly discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In summary, I could not find any issue raised in this review that is not either generally known in the field, covered in the literature, or addressed in our proposal. Hence, I regret to conclude that there is not a single point raised by Reviewer #1 that I would be able to address in a revised proposal. The proposal may not be without its flaws, and the other reviewer contributed some valuable suggestions, so I’ve put it out on thinklab for everyone to compare it to the review and to contribute meaningful and helpful criticism. Unqualified dismissals of the type shown above only delay science unnecessarily and may derail the careers of the students who had hoped to work on this project.

If we all had less material to review, perhaps Reviewer #1 above would also have taken the time to read the literature, as well as the proposal, before writing their review. But perhaps I have it all wrong and Reviewer #1 was right to dismiss the proposal as they did? If so, you are now in a position to let me know, as both the proposal and the review are open and comments are invited. Perhaps making all peer review this open can help reduce the incidence of such reviews, even if the amount of reviewing cannot be significantly reduced?

Posted on December 1, 2015 at 18:13 56 Comments
Nov25

Data Diving for Genomics Treasure

In: own data • Tags: Drosophila, evolution, open data, transposons

This is a post written jointly by Nelson Lau from Brandeis and me, Björn Brembs. In contrast to Nelson’s guest post, which focused on the open data aspect of our collaboration, this one describes the science behind our paper and a second one by Nelson, which just appeared in PLoS Genetics.

Laboratories around the world are generating a tsunami of deep-sequencing data from nearly every organism, past and present. These sequencing data range from entire genomes to segments of chromatin to RNA transcripts. To explore this ocean of “BIG DATA”, one has to navigate the portals of the National Center for Biotechnology Information’s (NCBI’s) two signature repositories, the Sequence Read Archive (SRA) and the Gene Expression Omnibus (GEO). With the right bioinformatics tools, scientists can explore and discover freely available data that can lead to valuable new biological insights.

Nelson Lau’s lab in the Department of Biology at Brandeis has recently completed two such successful voyages into the realm of genomics data mining, with studies published in the Open Access journals Nucleic Acids Research (NAR) and PLoS Genetics. Publication of both studies was supported by the Brandeis University LTS Open Access Fund for Scholarly Communications.

On these scientific journeys, we made use of important collaborations with labs from across the globe. The NAR study used openly shared genomics data from the United Kingdom (Casey Bergman’s lab) and Germany (Björn Brembs’ lab). The PLoS Genetics study relied on contributions from Austria (Daniel Gerlach), Australia (Benjamin Kile’s lab), Nebraska (Mayumi Naramura’s lab), and next-door neighbors (Bonnie Berger’s lab at MIT).

In the NAR study, Lau lab postdoctoral fellow Reazur Rahman and the Lau team developed a program called TIDAL (Transposon Insertion and Depletion AnaLyzer) that scoured over 360 fly genome sequences publicly accessible through the SRA portal. We discovered that transposons, also known as jumping genetic parasites, form different genome patterns in every fly strain. There are many thousands of transposons throughout the fly genome, the vast majority of them retrotransposons of viral origin. Even though most of these transposons are located in the intergenic and heterochromatic regions of the fly genome, with on average more than two transposons per fly gene it is a straightforward assumption that some of them are bound to influence gene expression in one way or another.
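For readers curious how transposon insertions can be detected from sequencing reads at all, here is a minimal, purely illustrative Python sketch of the core idea; TIDAL’s actual pipeline is far more sophisticated (see the NAR paper). A read spanning an insertion junction begins with genomic sequence and continues with transposon sequence, and such “split” reads pinpoint the insertion site. The toy genome, toy transposon and thresholds below are all invented.

```python
# Illustrative sketch only -- TIDAL's real pipeline is far more involved.
# Idea: a read spanning an insertion junction has a genomic prefix followed
# by a transposon prefix. We scan each read for that pattern against toy
# reference and transposon sequences (both invented for this example).

GENOME = "ACGTACGTTTGACCATGGCATTACGGATCCA"   # toy reference chromosome
TRANSPOSON = "TTAACCGGTTAACC"                # toy transposon consensus

def find_junction(read: str, min_part: int = 6):
    """Return (genome_pos, split) if the read looks like a genome->transposon
    junction: its first part matches the genome, the rest matches the start
    of the transposon. Returns None otherwise."""
    for split in range(min_part, len(read) - min_part + 1):
        g_part, t_part = read[:split], read[split:]
        pos = GENOME.find(g_part)
        if pos != -1 and TRANSPOSON.startswith(t_part):
            return pos, split
    return None

# A simulated junction read: 8 nt of genome (from position 8) + transposon start
read = GENOME[8:16] + TRANSPOSON[:7]
print(find_junction(read))   # (8, 8): insertion detected at genome position 8
```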

We discovered that common fly strains with the same name but living in different laboratories turn out to have very different patterns of transposons. This is surprising because many geneticists have assumed that the so-called Canton-S or Oregon-R strains are all similar, and have thus used them as common wild-type references. In particular, we were able to differentiate two strains that had been separated from each other only very recently, indicating rapid evolution of these transposon landscapes.
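Once insertion sites have been called for each strain, differentiating strains reduces to comparing sets of insertion sites. A sketch of one simple comparison, the Jaccard similarity of (chromosome, position, transposon family) sets; the coordinates below are invented, not real TIDAL output:

```python
# Hypothetical illustration: compare strains by the overlap of their called
# transposon insertion sites (chromosome, position, transposon family).
# All coordinates below are invented, not real TIDAL output.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two insertion-site sets (1.0 for two empty sets)."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

canton_s_lab1 = {("2L", 1_204_333, "roo"), ("3R", 8_812_021, "copia"),
                 ("X", 402_118, "1360")}
canton_s_lab2 = {("2L", 1_204_333, "roo"), ("3R", 2_551_870, "gypsy"),
                 ("X", 402_118, "1360"), ("2R", 77_412, "hobo")}

# Two stocks with the same name, but only a 0.4 overlap in insertion sites:
print(round(jaccard(canton_s_lab1, canton_s_lab2), 2))   # 0.4
```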

Our results lend some mechanistic insight to behavioral data from the Brembs lab, which had shown that these sub-strains of the standard Canton-S reference stock can behave very differently in some experiments. We hypothesize that these differences in transposon landscapes and the behavioral differences may reflect unanticipated changes in fly stocks, which are typically assumed to remain stable under laboratory culture conditions. If even recently separated fly stocks can be differentiated both on the genetic and on the behavioral level, perhaps this indicates that we are beginning to discover mechanisms rendering animals much more dynamic and malleable than we usually give them credit for. Such insights should not only convince geneticists to think twice and be extra careful with their common reference stocks; they may also provide food for thought for evolutionary biologists. In addition, we hope to use the TIDAL tool to study how expanding transposon patterns might alter genomes in aging fly brains, which may in turn help explain human brain changes during aging.

Screenshot of the TIDAL-Fly website:


Given the number of potentially harmful mobile genetic elements in a genome, it is not surprising that counter-measures have evolved to limit the detrimental effects of these transposons. So-called Piwi-interacting RNAs (piRNAs) are a class of highly conserved, small, noncoding RNAs associated with repressing transposon expression, in particular in the germline. In the PLoS Genetics study, visiting scientist Gung-wei Chirn and the Lau lab developed a program that discovered expression patterns of piRNA genes in a group of mammalian datasets extracted from the GEO portal. Coupling these datasets with other small RNA datasets created in the Lau lab, the team discovered a remarkable diversity of these RNA loci for each species, suggesting a high rate of diversification of piRNA expression over time. This diversification of piRNA expression patterns appeared to proceed much faster than changes in testis-specific gene expression patterns among different animals.
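One simple way to nominate piRNA loci from mapped small-RNA reads is to cluster nearby read positions; the sketch below is a toy version of that idea, not necessarily the published method, and the read positions and thresholds are invented:

```python
# Toy sketch (not the published pipeline): group genomic start positions of
# mapped small-RNA reads into candidate piRNA loci whenever consecutive
# reads lie within `max_gap` nucleotides of each other. Positions and
# thresholds are invented for illustration.

def cluster_loci(positions, max_gap=1000, min_reads=3):
    """Return (start, end, n_reads) tuples for clusters of read positions."""
    loci = []
    cluster = []
    for pos in sorted(positions):
        if cluster and pos - cluster[-1] > max_gap:
            if len(cluster) >= min_reads:
                loci.append((cluster[0], cluster[-1], len(cluster)))
            cluster = []
        cluster.append(pos)
    if len(cluster) >= min_reads:
        loci.append((cluster[0], cluster[-1], len(cluster)))
    return loci

# Invented read start positions on one chromosome; the isolated read at
# 50000 falls below min_reads and is discarded:
reads = [100, 150, 900, 5000, 5400, 5410, 5900, 50000]
print(cluster_loci(reads))   # [(100, 900, 3), (5000, 5900, 4)]
```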

It has been known for a while that there is an ongoing evolutionary arms race between transposon intruders and the anti-transposon police, the piRNAs. In mammals, however, the piRNAs appear to diversify according to two different strategies. Most of the piRNA genomic loci discovered in humans were quite distinct from those in other primates like the macaque monkey or the marmoset and seemed to evolve just as quickly as, e.g. Drosophila piRNA genes. On the other hand, a separate, smaller set of these genomic loci have conserved their piRNA expression patterns, extending across humans, through primates, to rodents, and even to dogs, horses and pigs.

These conserved piRNA expression patterns span nearly 100 million years of evolution, suggesting an important function either in regulating a transposon that is common among most if not all eutherian mammals, or in regulating the expression of another, conserved gene.

To find the answer, the Lau lab studied the target sequences of different conserved piRNAs. One of them was indeed a conserved gene in eutherian mammals, albeit not one of a transposon, but of an endogenous gene. In fact, most of the conserved piRNA genes were depleted of transposon-related sequences. A second approach to test the function of conserved piRNAs was to analyze two existing mouse mutations in two piRNA loci. The results showed that the mutations indeed affected the generation of the piRNAs, and these mice were less fertile because their sperm count was reduced. Future work will explore how infertility diseases may be linked to these specific piRNA loci. It also remains to be understood how a gene family originally evolved as transposon police could evolve into a mechanism regulating endogenous genes.

In summary, this work is an example of how open data enables and facilitates novel insights into fundamental biological processes. In this case, these insights have taught us that genomes are much more dynamic and diverse than we have previously thought, with repercussions not only for the utility any single reference genome can have for research, but also for the role of sequencing individual genomes in personalized medicine.


Rahman R, Chirn GW, Kanodia A, Sytnikova YA, Brembs B, Bergman CM, & Lau NC (2015). Unique transposon landscapes are pervasive across Drosophila melanogaster genomes. Nucleic acids research PMID: 26578579, DOI: 10.1093/nar/gkv1193
Chirn, G., Rahman, R., Sytnikova, Y., Matts, J., Zeng, M., Gerlach, D., Yu, M., Berger, B., Naramura, M., Kile, B., & Lau, N. (2015). Conserved piRNA Expression from a Distinct Set of piRNA Cluster Loci in Eutherian Mammals PLOS Genetics, 11 (11) DOI: 10.1371/journal.pgen.1005652

Posted on November 25, 2015 at 10:41 8 Comments
Nov19

Guest post: Why our Open Data project worked

In: science politics • Tags: Drosophila, open data, open science

Why our Open Data project worked,
(and how Decorum can allay our fears of Open Data).

I am honored to guest post on Björn’s blog and excited about the interest in our work sparked by Björn’s response to Dorothy Bishop’s first post. As corresponding author on our paper, I will provide more context to our successful Open Data experience with Björn’s and Casey’s labs. I will also comment on why authorship is an important component of a decorum that our scientific society needs to establish to make the Open Data idea work (an issue raised on Dorothy Bishop’s post).

It was my idea to credit Björn and Casey with authorship after Casey explained to me that they had not yet completed their own analyses of these genomes. Casey suggested we respect the decorum previously set by genome sequencing centers providing early release of their sequencing data: the genome centers reserve the courtesy of being the first to publish on the data. Trained as a genomicist, I was aware of this decorum, hence my offer to collaborate with B&C as authors. I viewed their data as being as valuable as a precious antibody or mutant organism, which, if not yet published, is a major contribution for which the original creators should receive credit.

Open Data Sharing worked because an honor code existed between all of us scientists. It is not because I don’t have tenure yet that I choose to respect B&C’s trust in the Open Data idea. I could easily have been an A**hole and published our analyses without crediting B&C with authorship: I had already downloaded the data, and our work was well underway. In addition, Björn offered to only be acknowledged, without authorship. However, I believe that scenario would add fodder to the fear of Open Data Sharing, and good will, such as authorship, is what we sorely need in our hyper-competitive enterprise. Finally, I believe good will encouraged B&C to provide us further crucial insight that greatly improved our paper.

Our Open Data effort is also a counter-example to the fear that Open Data will expose our mistakes. Our study examined many other Drosophila genomes sequenced by other labs besides B&C’s. We shared our findings with other labs before writing our manuscript, enabling one lab to steer their study in a better direction. This lab thanked us personally for our keen eye and noted that our help with re-examining data repositories, an often thankless effort, turned out to be critical for them, and they commended us for doing so. Thus, the “win-win” of Open Data Sharing can be truly far-reaching.

That being said, the Open Data Sharing movement needs to develop decorum, akin to the laws and cultural values that provide decorum to make Capitalism and an Open Society work. For example, the absence of decorum allows a good thing like Capitalism to be perverted (i.e., worker abuse, cartels, insider trading, Donald Trump).

With Open Data, the lack of decorum can lead to misunderstanding and animosity in our scientific society. Take the Twitter firestorm of the YG-MS-ENCODE controversy. A young scientist could see this as another cautionary tale against Open Data Sharing.

I am unaware whether our scientific society has a decorum yet for best practices with Open Data, and it is a worthy debate whether Twitter or private email is the best way to communicate early Open Data analyses. With decorum on Open Data Sharing in place, I believe we can reduce the antagonism and paranoia that shroud our current high-stakes scientific climate like greenhouse gases shroud our planet. Unchecked, we will doom ourselves and our fields of study.

In closing, our experience shows how Open Data Sharing is a tremendous concept all scientists should embrace and promote against the naysayers. I am looking forward to more Open Data inspired research coming from my lab in the future.

Nelson Lau, Ph.D.

nlau@brandeis.edu
Assistant Professor – Biology
Brandeis University
415 South St, MS029
Science Receiving
Waltham MA 02454, USA
Ph: 781-736-2445
https://www.bio.brandeis.edu/laulab

Posted on November 19, 2015 at 10:53 15 Comments
Nov16

Don’t be afraid of open data

In: science politics • Tags: Drosophila, genomics, open data

This is a response to Dorothy Bishop’s post “Who’s afraid of open data?“.

After we had published a paper on how Drosophila strains that are referred to by the same name in the literature (Canton S) but came from different laboratories behaved completely differently in a particular behavioral experiment, Casey Bergman from Manchester contacted me, asking whether we shouldn’t sequence the genomes of these five fly strains to find out how they differ. So I behaviorally tested each of the strains again, extracted the DNA from the 100 individuals I had just tested and sent the material to him. I also published the behavioral data immediately on our GitHub project page.

Casey then sequenced the strains and made the sequences available as well. A few weeks later, both Casey and I were contacted by Nelson Lau at Brandeis, who showed us his bioinformatics analyses of our genome data. Importantly, his analysis wasn’t even close to what we had planned. On the contrary, he had looked at something I (not being a bioinformatician) would have considered orthogonal (Casey may disagree). So there we had a large chunk of work we would never have done on data we hadn’t even started analyzing yet. I was thrilled! I learned so much from Nelson’s work; this was fantastic! Nelson even asked us to be co-authors, to which I quickly protested, suggesting that, if anything, I might be mentioned in the acknowledgments for “technical assistance” – after all, I had only extracted the DNA.

However, after some back-and-forth, he persuaded me with the argument that he wanted to have us as co-authors to set an example. He wanted to show everyone that sharing data is something that can bring you direct rewards in publications. He wanted us to be co-authors as a reward for posting our data and as an incentive for others to let go of their fears and also post their data online. I’m still not quite sure whether this fits the COPE guidelines to the letter, but for now I’m willing to take the risk and see what happens.

Nelson is on the tenure clock, and so the position of each of his papers in the journal hierarchy matters. The work is now online at Nucleic Acids Research and both Casey and I are co-authors. The paper was published before Casey had even gotten around to starting his own analyses of our data. This is how science ought to proceed! Now we just need ways to credit such re-use of research data in a currency that’s actually worth something and doesn’t entail making people ‘authors’ on publications to which they’ve had little intellectual input. A modern infrastructure would take care of that…

Until we have such an infrastructure, I hope this story will make others share their data and code as well.

Posted on November 16, 2015 at 12:31 94 Comments
Nov12

Chance in animate nature, day 3

In: science • Tags: chance, free will, nonlinearity

On our final day (day 1, day 2), I was only able to hear Boris Kotchoubey‘s (author of “why are you free?“) talk, as I had to leave early to catch my flight. He made a great effort to slowly introduce us to nonlinear dynamics and the consequences it has for the predictive power of science in general.

Applied to human movement in particular, he showed that nervous systems take advantage of the biophysics of bodily motion, adding only the component of movement that biophysics (think of your leg swinging while you walk) doesn’t already take care of. This is an important and all too often forgotten insight that I recognize from the work of Hillel Chiel’s laboratory on Aplysia biting behavior. He then explained work studying hammer blows, where the trajectories of the arm joints did not seem to follow any common rules – the only commonality that could be found between individual hammer blows was the trajectory of the hammer’s head. This is reminiscent of the distinction between world- and self-learning in flies, where the animals can use any behavior very flexibly to accomplish an external goal, until they have performed the behavior often enough for it to become habitual, at which point this flexibility is (partially?) lost.

Halfway through the talk, he arrived at the uncontrolled manifold hypothesis, where the nervous system isn’t trying to eliminate noise, but to use it to its advantage for movement control. Not entirely unexpectedly, he went from this to chemotaxis in Escherichia coli as an example of a process which also takes advantage of chance.

He differentiated between two kinds of unpredictable systems: a) highly complex, incomputable systems and b) unique, unrepeatable systems. The differences between these two kinds of systems break down as soon as the uncertainty principle is an actual property of the universe that poses absolute, non-circumventable limits on the potential knowledge anyone can have about these systems.

Posted on November 12, 2015 at 10:22 4 Comments
Nov11

Chance in animate nature, day 2

In: science • Tags: chance, Drosophila, free will

While the first day (day 2, day 3) was dominated by philosophy, mathematics and other abstract discussions of chance, this day of our symposium started with a distinct biological focus.

Martin Heisenberg, Chance in brain and behavior

The first speaker of this second day of the symposium on the role of chance in the living world was my thesis supervisor and mentor, Martin Heisenberg. Even if he didn’t have a massive body of his own work to contribute to this topic, just being the youngest son of Werner Heisenberg of uncertainty-principle fame would have made his presence interesting from a science-history perspective. In his talk, he showed many examples from the fruit fly Drosophila of how the fly spontaneously chooses between different options, both in terms of behavior and in terms of visual attention. Central to his talk was the concept of outcome expectations in the organization of adaptive behavioral choice. Much of this work is published and can easily be found, so I won’t go into detail here.

Then came my talk, a somewhat adjusted version of my talk on the organization of behavior, in which I provide evidence for how even invertebrate brains generate autonomy and liberate themselves from the environment:

Friedel Reischies, Limited Indeterminism – amplification of physically stochastic events

The third speaker this morning was Friedel Reischies, a psychiatrist from Berlin. After introducing some general aspects of brain function, he discussed various aspects of the control of behavioral variability. He also talked about the concept of self and how we attribute agency to our actions, citing D. Wegner. Referring to individual psychiatric cases, he talked about different aspects of freedom and how these cases differentially impinge on them. The central theme of his talk was the variability of nervous systems and behavior, and its control.

The discussion session after these first three talks revolved quite productively around intentionality, decision-making, free will and the concept of self.

Wolfgang Lenzen: Does freedom need chance?

The fourth speaker for this day was a philosopher, Wolfgang Lenzen. As behooves a philosopher, he started out with an attempt to define the terms chance, possibility, necessity and contingency, as well as some of their variants. Here, as yesterday, the principle of sufficient reason reared its head again. He then went back to Cicero and Augustine to exemplify the problem of free will with respect to determinism and causality. Later, the determinist Hume was cited as the first compatibilist, allowing for an exception to determinism in the context of the will. Lenzen then described Schopenhauer as a determinist. Given the dominance of classical Newtonian mechanics, the determinism of the philosophers of the time is not surprising. The now dominant insights from relativity and quantum mechanics had a clear effect on more recent philosophers. Lenzen then cited Schlick, who predictably argued with the false dichotomy of our behavior being either determined or entirely random. Other contemporary determinist scholars cited were Roth and Prinz. In his (as I see it, compatibilist) reply, Lenzen emphasized that free will is not dependent on the question of whether the world is deterministic. He also defined free will as something only adult humans have, requiring empathy and theory of mind. In his view, animals do not possess free will as they do not reflect on their actions; hence, animals cannot be held responsible. Similar to other scholars, he listed three criteria for an action to be ‘free’: the person willed the action, the will is the cause of the action, and the person could have acted otherwise.

Lenzen went on to disavow dualism: “there are no immaterial substances”. This implies that the soul, or the mind as a complex mental/psychological human property, is intimately and necessarily coupled to a healthy, material brain. It also implies that “mental causation” does not mean that an immaterial mind interacts with a material brain. Mental causation can only be thought of as an idea or thought being a neuronal activity which, in principle or in actuality, can move muscles.

Towards the end, Lenzen picked up the different variants of possibility from his introduction and applied them to the different variants of alternative actions of an individual. At the end, he recounted the story of Frankfurt’s evil neurosurgeon as a “weird” example he didn’t find very useful.

Patrick Becker: Naturalizing the mind?

The final speaker for the day was a theologian, and, in my prejudice, I expected pretty confused magical thinking. Little did I know when he started how right I would be. Like some previous speakers, Becker cited a lot of scholars (obviously a common method in the humanities) like Prinz, Metzinger, or Pauen. Pauen in particular served for the introduction of the terms autonomy and agency as necessary conditions for free will. In this context, the false dichotomy of either chance or necessity being the only possible determinants of behavior reared its ugly head again. Becker went on to discuss Metzinger’s “Ego-Tunnel” and the concept of self as a construct of our brain, citing experiments such as the “rubber hand illusion“. It wasn’t clear to me what this example was actually meant to say. Becker then presented a table juxtaposing a whole host of terms under ‘naturalization’ on one side and ‘common thought’ on the other. The whole table looked like an arbitrary collection of false dichotomies to me, and I again didn’t understand what the point of it was. He then picked ethical behavior as an example of how naturalization would lead to an abandonment of ethics. Here, again, the talk was full of false dichotomies, such as: our ethics are not rational because some basic, likely evolved moral sentiments exist. As if it were impossible to combine the two. I am not sure how that would answer the question in his title. After ethics, he claimed that we would have to part with love and creativity as well if we naturalized the mind. None of what he talked about appeared even remotely coherent to me, nor did I understand how he came up with so many arbitrary juxtapositions of seemingly randomly collected terms and concepts. Similar to creationists, he posits that our empirically derived world-view is just a belief system – he even used the German word ‘Glaube’, which can denote both faith and belief.
As if all of this weren’t bad enough, at the very end, as a sort of conclusion or finale to this incoherent rambling, he explicitly juxtaposed (again!) the natural sciences and religion as equivalent yet complementary descriptions of the world.

Posted on November 11, 2015 at 17:35 6 Comments
Nov10

Chance in animate nature, day 1

In: science • Tags: causality, chance, interdisciplinary, symposium

Ulrich Herkenrath, a mathematician working on stochasticity, convened a tiny symposium of only about a dozen participants discussing the role of chance in living beings. Participants included mathematicians, philosophers and neurobiologists.

Herkenrath: “Man as a source of randomness”

Herkenrath kicked off the symposium with his own presentation on “Man as a source of randomness”. He explained some principal insights on stochasticity and determinism as well as some boundary conditions for empirical studies on stochastic events, emphasizing that deterministic chaos and stochasticity can be extremely difficult to empirically distinguish.
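A toy simulation illustrates this difficulty (a minimal Python sketch of my own, with the fully chaotic logistic map standing in for deterministic chaos – not an example Herkenrath used): summary statistics barely separate a chaotic series from genuine noise; only a structural test that already presumes the generating rule does, and for empirical data we generally don’t know that rule.

```python
import random

# Deterministic chaos: the logistic map x -> 4x(1-x) in its fully chaotic regime.
def logistic_series(x0=0.3, n=10_000):
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

# Stand-in for "true" randomness: i.i.d. uniform draws on (0, 1).
def random_series(n=10_000, seed=42):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# A structural one-step prediction test: how well does x_{t+1} = 4 x_t (1 - x_t)
# explain the next value? Residuals vanish for the map, but not for noise.
def prediction_error(xs):
    errs = [(xs[t + 1] - 4.0 * xs[t] * (1.0 - xs[t])) ** 2
            for t in range(len(xs) - 1)]
    return mean(errs)

chaos, noise = logistic_series(), random_series()
# Simple summary statistics look alike for both series...
print(f"means: chaos={mean(chaos):.3f}, noise={mean(noise):.3f}")
# ...but the structural test separates them (~0 for the map, large for noise).
print(f"prediction error: chaos={prediction_error(chaos):.2e}, "
      f"noise={prediction_error(noise):.2e}")
```

The catch, of course, is that the test only works because we happen to know the rule to test against; without it, a short chaotic series and a noisy one can look statistically indistinguishable, which is exactly Herkenrath’s point.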

In a short excursion, he referred to Nikolaus Cusanus, who found that no two subsequent situations can ever be exactly identical, our knowledge thus being essentially conjecture. Apparently, Cusanus was already proposing the falsification of hypotheses as a means of approaching ‘truth’. Not surprisingly, Herkenrath immediately referred to Popper with regard to the modern scientific method. Equally expectedly, when he started talking about the kinds and sources of chance, he talked about quantum mechanics.

Moving from inanimate to animate nature, he proposed amplification of quantum chance to the macroscopic level as a source of objective randomness in the mesocosm, always emphasizing the difficulty of distinguishing between such events and events that only seem random due to our limited knowledge. Contrasting the two hypotheses of a deterministic world and one in which objective randomness exists, he mentioned the illusory nature of our subjective impression of freedom of choice. He never got into the problem that quantum randomness, if merely amplified, leaves much to be desired in terms of decision-making. Essentially, he seemed to be arguing that a deterministic world would be a sad place in which he doesn’t want to live, so he rejects a deterministic world. I’ve never found this all too common argument very convincing.

Notably, Herkenrath mentioned that organisms are more than matter. I am not sure what to make of this. He defined autonomy as the ability to make decisions that are not determined by the environment. Herkenrath went on to describe classes of decisions, such as subconscious and conscious decisions; how brains make these different forms of decisions will be featured in other talks at the symposium. He defined a third class of decisions: those that have come about by explicit (subconscious or conscious) randomization. Finally, he proposed a fourth class, in which a uniform distribution is consciously generated, e.g., a human using a lottery.

Falkenburg: “Causality, Chance and Life”

The second speaker of the first day was Brigitte Falkenburg, author of “Mythos Determinismus” (book critique). She started out wondering how neural determinists understand evolution.

In Falkenburg’s tour de force through the idea history of chance and necessity, we first learned that the concept of chance itself can be traced back to Leibniz, who described events that may have happened otherwise. Leibniz claimed in his metaphysics that objective chance does not exist, as the whole world is rational and determined. According to Leibniz, everything has a sufficient reason. In a very scholarly segue mentioning the dispute between Leibniz and Newton about who invented calculus, she moved to the relationship between the laws of nature and chance. Kant extended Newtons mechanistic laws from the solar system to the entire universe (Kant-Laplace hypothesis). In his “critique of pure reason” Kant later concluded that Leibniz’s ‘sufficient reasons’ are better described as ’causes’ and formulated the principle of causality as an ‘a priori’ of human thinking. This was the start of the demand for causal explanations in the empirical sciences: science never stops asking for causes. However, Kant’s critique did not fully pervade the subsequent thinking, leading instead to Laplace‘s determinism. Laplace was convinced that our insufficient knowledge is the only reason for apparent (subjective) randomness, and a more knowledgeable intelligence would be able to tell the future (cf. Laplace’s demon).

With this backdrop of the history of the idea of causality, Falkenburg went on to discuss modern concepts of causality, away from equating it with determinism. Both Hume and Kant defined causality as a mode of thinking, i.e., psychologically, rather than as a property of the universe. According to them, a causal relationship between events is subjective rather than objective. Mill‘s and Russell‘s positivism later did away with causality as “a relic of a bygone era” (Russell). One argument is that a cause can be seen as just a natural law plus the initial state of a system. Deterministic laws are invariant to a reversal of time – as such, causes can also lie in the future.

Today’s philosophical variants of causality concepts reflect this comparatively weak view of causality, which are very different from the way we scientists would intuitively understand it. In a short discussion of the concept of causality in physics, she quickly went through classical mechanics, thermodynamics and quantum mechanics and special relativity, emphasizing that we still do not have a theory unifying these different approaches (she called it ‘patchwork physics’).

Towards the end, Falkenburg discussed the connection between causality and time, emphasizing that the arrow of time cannot have a deterministic basis, as all deterministic laws are time-reversible. As such, extreme determinism comes at a high metaphysical price: time becomes an illusion. According to Falkenburg, causality is hence not the same as determinism: a causal process is not necessarily deterministic; it can be composed of determinate and indeterminate components. Thus, if you do not think that time is an illusion and that all possible outcomes coexist, causality does not imply determinism, and chance can be a cause, as in, e.g., evolution.

At the very end she mentioned Greenfield and the limits of the natural sciences in reducing consciousness to materialism. I’m starting to get the impression that rejecting determinism all too often goes hand in hand with woo peddling. Why is that?

Posted on November 10, 2015 at 18:07 4 Comments
Oct23

Predatory Priorities

In: science politics • Tags: journals, open access, predatory publishing

Over the last few months, there has been a lot of talk about so-called “predatory publishers”, i.e., those corporations which publish journals, some or all of which purport to peer-review submitted articles but in fact publish them for a fee without actual peer-review. The origin of the discussion can be traced to a list of such publishers hosted by librarian Jeffrey Beall. Irrespective of the already questionable practice of putting entire corporations on a black list (one bad journal and you’re out?), I have three main positions in this discussion:

1. Beall’s list used to be a useful tool tracking a problem that nobody really had on their radar. Unfortunately, Jeffrey Beall himself recently opted to disqualify himself from reasoned debate, making the content of the list look more like a political hit list than a serious scholarly analysis. It appears that this approach may still be rescued if it were pursued by an organization more reliable than Beall.

2. There are many problems with publishers that eventually need to be solved. With respect to the pertinent topic, at least two main problem areas spring to mind.

2a. There is a group of publishers which publish the least reliable science. These publishers claim to perform a superior form of peer review (e.g. by denigrating other forms of peer-review as “peer-review light“), but in fact most of the submitted articles are never seen by peers (but instead only by the professional editors of these journals). For the minority of articles that are indeed peer-reviewed, the acceptance rate is about 40%. Sometimes this practice keeps other scientists unnecessarily busy, such as in replicability projects or #arseniclife. Sometimes this practice has deleterious effects on society, such as in the recent LaCour or Stapel cases. Sometimes this practice leads indirectly to human death, such as in irreproducible cancer research. Sometimes this practice leads directly to human death, such as in the MMR/autism case.
These publishers charge the taxpayer on average US$5000 per article and try to use paywalls to prevent the taxpayer from checking the article for potential errors.

2b. There is a group of publishers which similarly claim to perform peer-review but in fact do not perform any peer-review at all. It seems they aren’t even performing much editorial review. The acceptance rate in these journals is commonly a little more than twice as high as in the journals from 2a, i.e., ~100%. Other than the (likely very few) duped authors, to my knowledge there are no other harmed parties, but I may have missed them.
These publishers charge the taxpayer on average ~US$300 per article and do allow the taxpayer to check the articles for potential errors.

3. Clearly, both 2a and 2b need to be resolved; there can be no debate about that. Given the number and magnitude of issues with regard to infrastructure reform in general and publishing reform in particular, it is prudent to prioritize the problems. Given the larger harm the publishers in 2a inflict on society at large as well as on the scientific community, I would suggest prioritizing 2a over 2b. In fact, looking back over what little we have accomplished over the past 10 years of infrastructure reform, it doesn’t appear we have many resources left to waste on 2b at this particular time. Moreover, if focusing on 2a were to lead to the demise of the journal container, as so many of us hope, 2b would be solved without any further effort.

Posted on October 23, 2015 at 16:21 37 Comments
