science politics

Update, Dec. 4, 2015: With the online discussion moving towards grantsmanship and the question of what level of expertise to expect from a reviewer, I have written down some thoughts on this angle of the discussion.

With more and more evaluations, assessments and quality control, the peer-review burden has skyrocketed in recent years. Depending on field and tradition, we write reviews on manuscripts, grant proposals, Bachelor's, Master's and PhD theses, students, professors, departments or entire universities. Top reviewers at Publons clock in at between 0.5 and 2 reviews for every day of the year. It is conceivable that at such a frequency, reviews cannot be very thorough, or the material to be reviewed is comparatively less complex or deep. But even at a much lower frequency, the time constraints imposed by increasing reviewer load make thorough reviews of complex material difficult. Hyper-competitive funding situations add incentives to summarily dismiss work perceived as infringing on one's own research. It is hence not surprising that such conditions bring out the worst in otherwise well-meaning scientists.
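A back-of-the-envelope calculation makes the Publons figures above concrete. The hours-per-review figure is my own assumption for illustration, not a number from Publons:

```python
# Annual review load and weekly time cost implied by 0.5-2 reviews per day.
# HOURS_PER_THOROUGH_REVIEW is an assumed value, purely for illustration.
HOURS_PER_THOROUGH_REVIEW = 3

for reviews_per_day in (0.5, 1.0, 2.0):
    reviews_per_year = reviews_per_day * 365
    hours_per_week = reviews_per_day * 7 * HOURS_PER_THOROUGH_REVIEW
    print(f"{reviews_per_day:>4} reviews/day -> {reviews_per_year:.0f} reviews/year, "
          f"{hours_per_week:.1f} h/week of reviewing")
```

Even at the low end of the range, this amounts to roughly 180 reviews a year; at the high end, a substantial fraction of a full working week would go to reviewing alone if each review were done thoroughly.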

Take for instance a recent grant proposal of mine, based on our recent paper on FoxP in operant self-learning. While one of the reviewers provided reasonable feedback, the other raised issues that are either demonstrably baseless or already addressed in the application. I will try to show below how this reviewer, who obviously has some vague knowledge of the field in general but not nearly enough expertise to review our proposal, should either have declined to review or at least invested some time reading the relevant literature, as well as the proposal, in more depth.

The reviewer writes (full review text posted on thinklab):

In flies, the only ortholog [sic] FoxP has been recently analyzed in several studies. In a report by the Miesenböck lab published last year in Science, a transposon induced mutant affecting one of the two (or three) isoforms of the FoxP gene was used to show a requirement of FoxP in decision making processes.

Had Reviewer #1 been an expert in the field, they would have recognized that this publication is missing several crucial control experiments, both genetic and behavioral, needed to draw such firm conclusions about the role of FoxP. For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In principle, this proposal addresses important and highly relevant questions but unfortunately there are many (!) problems with this application which make it rather weak and in no case fundable.

Unfortunately, there are many problems with this review which make it rather weak and in no case worthy of consideration for a revised version of the proposal.

The preliminary work mentioned in this proposal is odd. Basically we learn that there are many RNAi lines available in the stock centers, which have a phenotype when used to silence FoxP expression but strangely do not affect FoxP expression. What does this mean?

Had Reviewer #1 been an expert in the field, they would have been aware of the RNAi issues concerning template mismatch and the selection of targeted mRNA for sequestration and degradation, respectively. For the non-expert, we explain this issue with further references in our own FoxP paper and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

I have seen no arguments why the generation of additional RNAi strains is now all the sudden expected to yield a breakthrough result.

Had Reviewer #1 been an expert in the field, they would be aware that the lines we tested were generated as part of large-scale efforts to manipulate every gene in the Drosophila genome. As such, the constructs were generated against the reference genome, which of course does not precisely match every strain used in every laboratory, as any expert in the field is well aware (explained in more detail in this blog post). Consequently, directing RNAi constructs at the specific strain used for genetic manipulation, and subsequently crossing all driver lines into this genetic background (the well-established technique in the collaborating Schneuwly laboratory), reliably yields constructs that lead to mRNA degradation rather than sequestration. This discussion leaves out the known tendency of the available strains towards off-target effects, which compounds their problems. Dedicated RNAi constructs, such as the ones I propose to use, can be tested against off-targets beforehand.
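The core of the mismatch argument can be sketched in a few lines of code. The sequences below are invented 21-nt examples, not real FoxP data; the point is only that a construct designed against the reference genome may carry mismatches against the actual laboratory strain:

```python
# Sketch of why a reference-genome-designed RNAi construct can mismatch
# the laboratory strain actually used. Sequences are hypothetical.

def count_mismatches(guide: str, target: str) -> int:
    """Count position-wise mismatches between a guide sequence and a target site."""
    assert len(guide) == len(target)
    return sum(a != b for a, b in zip(guide, target))

reference_site = "AUGGCUAAGCUUACGGAUCCA"  # target site as in the reference genome (made up)
lab_strain_site = "AUGGCUAAGCUCACGGAUUCA"  # same site with strain-specific variants (made up)
guide = reference_site  # construct designed against the reference genome

print(count_mismatches(guide, reference_site))   # 0: perfect match to the reference
print(count_mismatches(guide, lab_strain_site))  # 2: mismatches against the lab strain
```

A perfect match favors mRNA degradation, while mismatches can shift the outcome towards sequestration, which is why designing constructs against the specific strain, as proposed, matters.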

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Quite similar we learn in the preliminary result section that many attempts to generate specific antibodies failed and yet the generation of mAbs is proposed. Again, it is unclear what we will learn and alternative strategies are not even discussed.

Had Reviewer #1 been an expert in the field, they would understand the differences between polyclonal and monoclonal antibodies, especially as antibody technology is currently hotly debated in rather prominent venues.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

The authors could consider the generation of HA-tagged Fosmids /I minigenes or could use homologous recombination to manipulate the gene locus accordingly.

Had Reviewer #1 not overlooked our Fig. 5, as well as our citations of Vilain et al. and Zhang et al., they would have noticed that this type of genome editing is precisely what we propose to do.

One page 2 of the application it is stated that “It is a technical hurdle for further mechanistic study of operant self-learning that the currently available FoxP mutant lines are insertion lines, which only affect the expression level of some of the isoforms. ” This is not true! and the applicant himself states on page 11: “However, as the Mi{MIC} insertion is contained within a coding exon which is spliced into all FoxP isoforms, it is likely that this insertion alone already leads to a null mutation at the FoxP locus.” Yes, by all means the insertion of a large transposon into the open reading frame of a gene causes a mutation!!!! Why this allele, which is available in the stock centers, has not yet been analyzed so far remains mysterious.

Had Reviewer #1 actually engaged with our proposal, this would remain a mystery to them no longer: the analysis of this strain is part of our proposal. If it had been possible to analyze this strain without this proposal, the proposal would not have been written. Had Reviewer #1 ever written a research proposal of their own, they would understand that proposals are written to fund experiments that have not yet been performed. Hence, Reviewer #1 is indeed part of the answer: without their unqualified dismissal of our proposal, we would already be closer to analyzing this strain.

Moreover, reading the entire third section of this application “genome editing using MiMIC” reveals that the applicant has not understood the rational behind the MiMIC technique at all. Venken et al clearly published that “Insertions (of the Minos-based MiMIC transposon) in coding introns can be exchanged with protein-tag cassettes to create fusion proteins to follow protein expression and perform biochemical experiments.” Importantly, insertions have to be in an intron!!!! The entire paragraph demonstrates the careless generation of this application. “we will characterize the expression of eGFP in the MiMIC transposen”. Again, a short look into the Venken et aI., paper demonstrates the uselessness of this approach.

Reading this entire paragraph reveals that Reviewer #1 has neither noticed Fig. 5 in the proposal, nor understood that we do not follow Venken et al. in our proposal (which is the reason we do not even cite Venken et al.), but Vilain et al. and Zhang et al. Precisely because the methods explained in Venken et al. do not work in our case, we will follow Vilain et al. and Zhang et al., where this is not an issue. Venken et al. are not cited in the proposal, as we expect the reviewers to be expert peers. Discussing and explaining such issues at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Moreover, just a few weeks ago, at the RMCE session of a meeting, I attended a presentation by the senior author of Zhang et al., Frank Schnorrer, in which he explained essentially the method I propose (see Fig. 5 in the proposal). He later confirmed that there are no problems with using their RMCE approach for the specific case of the FoxP gene with the insertion in an exon. Hence, Dr. Schnorrer's presentation, as well as my later discussion with him, confirmed the suspicion that Reviewer #1 not only lacks expertise in the current methods, but also failed to notice the alternative methods of Zhang et al. and Vilain et al., even though we cite these publications and provide an entire figure detailing the approach, on top of the citations and explanations in the text.

Finally, had Reviewer #1 been an expert in the field, they would be aware that the laboratory of Hugo Bellen is currently generating intron-based MiMIC lines for all those lines where the MiMIC cassette happened to insert elsewhere. Our statement in the proposal comes to mind in this respect: “In fact, by the time this project will commence, there will likely be a genome editing method published, which is even more effective and efficient than the ones cited above. In this case, we will of course use that method.”

The application requests two students. Although the entire application is far from being fundable, this request adds the dot on the i. The student is planned for the characterization of lines that are not available, characterization of antibodies that likely will not be on hand in the next two years and so on. In summary, this is a not well prepared application, full of mistakes and lacking some necessary preliminary data.

Had Reviewer #1 been an expert in the field, they would know that performing the kind of behavioral experiments we propose requires training and practice – time which is not required for applying established transgenic techniques. Thus, there is already a time lag between generating lines and testing them, inherent to the more time-intensive training required for behavioral experiments. This time lag can be accommodated, and even extended, by hiring one student first and the second somewhat later.

In addition, as emphasized by Reviewer #1 themselves (and outlined in our proposal), there are still lines available that have not yet been thoroughly characterized, such that any remaining gap can easily be filled by characterizing these strains. If any of the available strains show useful characteristics, the corresponding new lines will not have to be generated. Moreover, many of the available transgenic lines also need to be characterized at the anatomical level (also outlined in the proposal).

Finally, by the time this project can commence, given the projects in the other groups working on FoxP, there will likely be yet new lines, generated elsewhere, that also warrant behavioral and/or anatomical characterization. Thus, the situation remains as described in the proposal: two students with complementary interests and training are required for our proposal and a small initial lag between the students is perfectly sufficient to accommodate both project areas.

In this way, one can expect at least one year in which the first student can start generating new lines while the second student either has not started yet, is training, or is testing lines that already exist.

These issues are only briefly discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In summary, I could not find any issue raised in this review that is not either generally known in the field, or covered in the literature or in our proposal. Hence, I regret to conclude that there is not a single issue raised by Reviewer #1 that I would be able to address in a revised proposal. The proposal may not be without its flaws, and the other reviewer was able to contribute some valuable suggestions, so I have put it out on thinklab for everyone to compare it to the review and contribute meaningful and helpful criticism. Unqualified dismissals of the type shown above only unnecessarily delay science and may derail the careers of the students who hoped to be working on this project.

If we all had less material to review, perhaps Reviewer #1 above would also have taken the time to read the literature, as well as the proposal, before writing their review. But perhaps I have it all wrong, and Reviewer #1 was right to dismiss the proposal as they did? If so, you are now in a position to let me know, as both the proposal and the review are open and comments are invited. Perhaps making all peer review this open can help reduce the incidence of such reviews, even if the amount of reviewing cannot be significantly reduced?

Posted at 18:13