Posting my reply to a review of our most recent grant proposal has sparked an online discussion, both on Twitter and on Drugmonkey’s blog. The main question the discussion turned to was what level of expertise to expect from the reviewers deciding on your grant proposal.
This, of course, depends heavily on the procedure by which the funding agency chooses the reviewers. At the US-based NIH, as I understand it, reviewers are picked from a known panel; you just don’t know which individuals from the panel will handle your proposal. This makes it comparatively easy to target this audience when writing your proposal. The German funding agency we submitted our grant to picks reviewers from anywhere in the world (I think the NSF in the US is similar; at least I have reviewed a few grants for them). After the review, a separate panel of peers (which may, but commonly doesn’t, include one of the reviewers) decides which grants out of the pool get funded, usually without reading the grants in much detail, if at all. In that case, it is impossible to have a clear picture of your audience.
My first grant/fellowship was funded in 2001. Before and since then, I have had plenty of rejections. I believe my overall funding rate over these almost 15 years is somewhere around 20±5%, which means I must have written just under 50 grant proposals in this time. Initially, when my proposals (and manuscripts) got rejected, it was suspiciously often with comments that revealed misunderstandings. Once, a grant reviewer even explicitly admitted that they didn’t understand what it was I proposed to do. I then started to simplify my proposals; in my desperation, I of course did what many online commenters proposed: I added what I thought was very basic knowledge. My proposals became significantly more understandable, but also significantly longer. Imagine my disappointment when the feedback I then received was twofold: the reviewers felt insulted that I addressed them at such a basic level, and the funder told me my proposals were too long: “good proposals are succinct, maybe 10-15 pages total, and compelling”.
So here’s the rule, then: you need to write your proposal in such a way that your grandma can understand it, without the reviewers noticing that you are insulting their intelligence, and with no more than 1-2 sentences per explanation.
Honestly, I find this quite far beyond my capabilities. Instead, I have since focused on the easier task of being succinct at the expense of explaining everything. For the last ~8 years, I have assumed that the people reading my proposals are experts either in the model system(s) I use or in the problems I study, but not both. The implicit expectation is that the former don’t need to understand every motivation behind each experiment (and won’t require it, either), and that the latter won’t be too concerned with the technical details of a model system they might not be all that familiar with. Until this last proposal, this had worked to the extent that even for the ~80% of rejections I received, the reviewer comments revealed neither obvious incompetence nor substantial misunderstandings. However, given the system by which reviewers are selected, it is of course impossible to know whether this was due to my improved writing or to the particular reviewers chosen. Moreover, the field has grown substantially and become much more popular in this time, so it may simply have been down to a larger pool of experts than a decade ago.
It is also important to keep in mind that with each submission, even of the same grant, different reviewers may be assigned. At the ERC, for instance, one of my proposals was rejected because the reviewers questioned the feasibility of the method, despite being very enthusiastic about the idea. In the revision, the reviewers thought the method was too established to warrant funding, and the research question wasn’t all that interesting, either.
There were two very helpful comments in the Twitter discussion that I will keep in mind for future proposals, both from Peg AtKisson, a professional supporter of grant writers:
@brembs Disagree because diverse inputs tend to lead to better outcomes. Only experts reviewing in narrow area leads to in-group bias.
— M. S. AtKisson (@iGrrrl) December 3, 2015
I agree that minimizing in-group bias is a goal worth investing in. However, this goal comes at a cost (which is an investment, I’d argue): you can’t have non-experts review and expect the author not to need more words for it. You also have to accept that if you promote non-expert review, you may annoy the experts with more verbose applications. If there are no explicit instructions, it is virtually impossible to know where on this trade-off one has to land.
@brembs "We have chosen X approach because… We did not chose Y because…" Show your thinking. @drugmonkeyblog
— M. S. AtKisson (@iGrrrl) December 3, 2015
The suggestion to also explicitly mention methods that you rejected as unsuitable is one worth pondering. If there are no word limits, this sounds very compelling, as it “shows your thinking”, which is always helpful. It is also quite difficult to decide which ones to include, as it, again, involves the risk of insulting the reviewers (e.g., “only an idiot would have thought to use approach Y!”). Again, the instructions from the funder and experience will have to suffice, but I’ll definitely spend more time thinking about alternative, less suitable approaches next time.
Back to our particular case: the main issues can be boiled down to three criticisms. Since all of them concern the technical details of our experimental techniques, it is fair to assume that the reviewer considers themselves competent at least on the technical/model-system side of the proposal.
The first issue concerns a common laboratory technique that I have taught to undergraduate classes, that is widely used not only in our field but in biological/biomedical research generally, for which Fire and Mello received the Nobel Prize in 2006, and whose technical details, to the extent this grant requires them, are covered on its Wikipedia page (and, of course, in all textbooks). Nothing beyond this basic understanding is required for our grant. The criticisms raised only make sense if the reviewer is not aware of the sequestration/degradation distinction.
The second concerns an even older technique that is likewise used throughout biological/biomedical research, for which the Nobel Prize was awarded as far back as 1984, whose technical details are also on the Wikipedia page, and which is of course part of every undergraduate biology/medicine curriculum I know of. Moreover, this technology is currently being debated in the most visible venues in the wake of the biomedical replicability crisis. Nothing beyond this most basic understanding is required for our proposal. The reviewer’s criticisms only make sense if the reviewer is not aware of the differences between monoclonal and polyclonal antibodies.
From where I sit, this kind of basic knowledge can be expected of a reviewer who picks these two methods (out of the four main grant objectives) as their targets for criticism.
The third issue can be seen as a reviewer classic: the reviewer chided us for proposing a method we didn’t propose and suggested we instead use a method we had already described prominently in our proposal, even with a dedicated figure to make it unambiguously clear that we weren’t proposing the technique the reviewer rightfully rejected, but the one they recommended. Here, everything the reviewer wrote was correct, but so was our proposal: it paralleled what they wrote.
In summary: of our four objectives in the grant, this reviewer picked three for criticism. Two of the three criticisms reveal a lack of undergraduate-level knowledge of very common, widespread techniques. The third issue is nonexistent, as the grant already describes, prominently, what the reviewer suggests. I will take the online suggestions and incorporate them into the revised version of the grant, but there really isn’t anything helpful one can take from this particular review. For me personally, at this time, this is an exception, but it chimes with what a lot of colleagues, on both sides of the pond, complain about.
First, Brembs, thank you for your openness/honesty, especially considering some of the comments from others. Here are my thoughts on your response:
1. While I sympathize with your frustration in balancing explanation and brevity, I think choosing one at the expense of the other is a poor decision. You need both, especially in an age where scientists are becoming more specialized and we are developing shorter attention spans (tl;dr, anyone?). To fix this, we will have to become even better writers, which is disheartening, as training in this area is rarely available.
2. For the specific RNAi and mAb-vs-polyAb issues, you probably only need 1-2 extra sentences per method to describe your thinking (à la AtKisson). You don’t need a paragraph on why you chose X and a paragraph on why you rejected Y.
3. Relying on “a common technique I teach in all undergrad courses” as a metric for what reviewers should know is not a good criterion. I’ve forgotten a LOT of what I learned in undergrad, mostly because I haven’t used it in my particular field. While this may not be the case for you, it likely is for many reviewers.
4. Last comment. There are two possible responses to this situation: the pragmatic and the idealistic. The idealistic response is fine, but it must be directed at the right audience (policy makers). For further grant applications, until things change, we should adopt a pragmatic approach in our actions and grant writing.
Thanks a lot for your thoughts!
WRT 1: Possibly, but I don’t think it is realistic, at the very least not for me personally. At best, the cost/benefit ratio would be too high: I don’t really need to write grants for most of my research (I have enough staff and funding except for exceptional projects), but becoming a substantially better writer would require massive training efforts.
2: Indeed, I’ve already added a sentence and a reference for each issue to allude to them, but if someone really doesn’t have a clue, it will still be difficult to understand what I mean.
3: There I think I would disagree. Others may not, but I need some objective criterion for where I make the cut-off. If the method is old and established enough to be used across fields and can be found in textbooks, I expect one of three things: 1) the reviewer does not attempt to criticize the method out of ignorance; 2) the reviewer looks the method up; or 3) the reviewer declines to review. At least that’s what I would do and have done.
4: I’m not sure what you are referring to, sorry.
Sorry for the confusion; please allow me to clarify. I’m from the US, so my (minimal) experience is mostly with NIH funding.
1. I did grad work in a lab in a med school, which meant that grant $ was req’d for almost anything we did. It sounds like getting grants is not as critical for you.
3. My expectations for the reviewer (again, US experience) would be either your #1 (doesn’t criticize) or a #4: criticizes despite ignorance. I’m not sure I’d “gamble” a grant submission hoping for #2 or #3 (especially #2; reviewers are busy). While your assumptions are nice and the way things “should” work, you are taking a calculated risk.
4. I was referring to what one’s response to this kind of review should be. If you are not so dependent upon grants, you have the luxury of taking an idealistic approach and lobbying the right people to try and change how the system works. “Soft-money” researchers in the US don’t have that luxury, and must change how they approach the system (being more pragmatic, adopting a more sales-like approach, putting in the time to develop grantsmanship skills, etc.).
1. Yes, I don’t really need grants here; I can get by with what I have. But not only is this an exceptionally expensive project for me, it was also intended to keep afloat the project of a graduate student who had other funding until now. That student is now unemployed.
3. I’d tend to think that option #4 is essentially unethical: if you’re not sure, you ask a question. I’ve done that, and this funder actually forwards such questions to the applicant before the decision. Obviously, if the reviewer is incompetent and unaware of it (a classic Dunning-Kruger case: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect), then you just had bad luck 🙂 Such cases have been comparatively rare in my experience over the last decade.
4. Oh, now I get it, sorry. Yes, a more toned-down version is going to the funder (I called them immediately when I first received the review), and they are very likely not going to use this reviewer again. This is different in the US, and different again for those on soft money there; you are very correct.
Yes, expert reviewers are becoming rarer by the second. On the other hand, grant applications are on the rise, as are new research areas. This is a system that will surely diverge further in the coming years.
My suggestion would be that grant agencies implement a two-phase approach: first, non-expert reviewers evaluate the potential broad reach and social impact; then, expert reviewers judge the technical content and feasibility.
That’s a very reasonable and absolutely doable approach! Great suggestion!