Posting my reply to a review of our most recent grant proposal has sparked an online discussion, both on Twitter and on Drugmonkey’s blog. The main direction the discussion took was what level of expertise to expect from the reviewers deciding on your grant proposal.

This, of course, is highly dependent on the procedure by which the funding agency chooses the reviewers. At the US-based NIH, as I understand it, reviewers are picked from a known panel; you just don’t know which individuals on the panel will review your proposal. This makes it comparatively easy to target this audience when writing your proposal. The German funding agency we submitted our grant to picks any reviewer, worldwide (I think the NSF in the US is similar; at least I have reviewed a few grants for them). After the review, a separate panel of peers (which may, but commonly doesn’t, include a reviewer) decides which grants out of the pool get funded, usually without reading the grants in much detail, if at all. In that case, it is impossible to have a clear picture of your audience.

My first grant/fellowship was funded in 2001. Before and since then, I have had plenty of rejections. I believe my overall funding rate over these almost 15 years is somewhere around 20±5%, which means I must have written just under 50 grant proposals in this time. Initially, when my proposals (and manuscripts) got rejected, it was suspiciously often with comments that revealed misunderstandings. Once, a grant reviewer even explicitly admitted that they didn’t understand what it was that I proposed to do. I then started to simplify my proposals; in my desperation, I of course did what many online commenters proposed: I added what I thought was very basic knowledge. My proposals became significantly more understandable, but also significantly longer. Imagine my disappointment when the feedback I then received was twofold: the reviewers felt insulted that I addressed them at such a basic level, and the funder told me my proposals were too long: “good proposals are succinct, maybe 10-15 pages total, and compelling”.

So here’s the rule, then: you need to write your proposal in such a way that your grandma can understand it, without the reviewers noticing that you are insulting their intelligence, and with no more than 1-2 sentences per explanation.

Honestly, I find this quite far beyond my capabilities. Instead, I have since focused on the easier task of being succinct at the expense of explaining everything. For the last ~8 years I’ve assumed that the people reading my proposals are experts either in the model system(s) I use or in the problems I study, but not both. The implicit expectation is that the former don’t need to understand every motivation behind each experiment (and won’t require it, either), and the latter won’t be too concerned with the technical details of a model system they might not be all that familiar with. Until this last proposal, this has worked to the extent that even for the ~80% of rejections I received, the reviewer comments revealed neither obvious incompetence nor substantial misunderstandings. However, given the system by which reviewers are selected, it is of course impossible to know whether this was due to my improved writing or due to the chosen reviewers. Moreover, the field has grown substantially and become much more popular in this time, so it simply may have been down to a larger pool of experts than a decade ago.

It is also important to keep in mind that with each submission, even of the same grant, different reviewers may be assigned. At the ERC, for instance, one of my proposals was rejected because, despite being very enthusiastic about the idea, the reviewers questioned the feasibility of the method. In the revision, the reviewers thought the method was too established to warrant funding, and the research question wasn’t all that interesting, either.

There were two very helpful comments in the Twitter discussion that I will keep in mind for future proposals; both were from Peg AtKisson, a professional supporter of grant writers.

I agree that minimizing in-group bias is a goal worth investing in. However, this goal comes at a cost (which is an investment, I’d argue): you can’t have non-experts review and at the same time expect the author not to need more words. You also have to accept that if you promote non-expert review, you may annoy the experts with more verbose applications. If there are no explicit instructions, it is virtually impossible to know where on this trade-off one has to land.

The suggestion to also explicitly mention methods that you rejected because they are unsuitable is one worth pondering. If there are no word limits, this sounds very compelling, as it “shows your thinking”, which is always helpful. However, it is quite difficult to decide which ones to include, as it, again, involves the risk of insulting the reviewers (i.e., “only an idiot would have thought to use approach Y!”). Again, the instructions from the funder and experience will have to suffice, but I’ll definitely spend more time thinking about alternative, less suitable approaches next time.

Back to our particular case: the main issues can be boiled down to three criticisms. Since all of them concern the technical details of our experimental techniques, it is fair to assume that the reviewer considers themselves competent at least on the technical/model-system side of the proposal.

The first issue concerns a common laboratory technique that I have taught to undergraduate classes, that is widely used not only in our field but in biological/biomedical research generally, for which Fire and Mello received the Nobel Prize in 2006, and for which all the technical details required for this grant are covered on the Wikipedia page (and, of course, in all textbooks). Nothing beyond this basic understanding is required for our grant. The criticisms raised only make sense if the reviewer is not aware of the sequestration/degradation distinction.

The second concerns an even older technique, also used throughout biological/biomedical research, for which the Nobel Prize was handed out as early as 1984, whose technical details are likewise on the Wikipedia page, and which is of course part of every undergraduate biology/medicine curriculum I know of. Moreover, this technology is currently being debated in the most visible places in the wake of the biomedical replicability crisis. Nothing beyond this most basic understanding is required for our proposal. The criticisms of the reviewer only make sense if the reviewer is not aware of the differences between monoclonal and polyclonal antibodies.

From where I sit, this kind of basic knowledge is what can be expected from a reviewer who picks these two methods (out of the four main grant objectives) as their targets for criticism.

The third issue can be seen as a reviewer classic: the reviewer chided us for proposing a method we didn’t propose and suggested we instead use a method we had already prominently described in our proposal, even with a dedicated figure, to make it unambiguously clear that we weren’t proposing the technique the reviewer rightfully rejected but the one they recommended. Here, everything the reviewer wrote was correct, but so was our proposal: it paralleled what they wrote.

In summary: of the four objectives in the grant, this reviewer picked three for criticism. Two of the three criticisms reveal a lack of undergraduate-level knowledge of very common, widespread techniques. The third issue is nonexistent, as the grant already describes, prominently, what the reviewer suggests. I will take the online suggestions and incorporate them into the revised version of the grant, but there really isn’t anything helpful at all that one can take from this particular review. For me personally, at this time, this is an exception, but it chimes with what a lot of colleagues, on both sides of the pond, complain about.
