Last week, I spent two days at a symposium entitled “Governance, Performance & Leadership of Research and Public Organizations”. The meeting gathered professionals from all walks of science and research: economists, psychologists, biologists, epidemiologists, engineers and jurists, as well as politicians, university presidents and other leaders of the most respected research organizations in Germany. It was organized by Isabell Welpe, an economist specializing, broadly speaking, in incentive systems. She managed to bring some major figures to this meeting, not only from Germany, but notably also John Ioannidis from the USA and Margit Osterloh from Switzerland. The German participants included former DFG president and now Leibniz president Matthias Kleiner (the DFG being the largest funder in Germany and the Leibniz Association consisting of 89 non-university federal research institutes), the president of the German Council for Science and the Humanities, Manfred Prenzel, the Secretary General of the Max Planck Society, Ludwig Kronthaler, and the president of Munich’s Technical University, Wolfgang Herrmann, to mention only a few. Essentially, every major research organization in Germany was represented by at least one of its leaders, supplemented with expertise from abroad.

All of these people shape the way science will be done in the future, whether at their own universities and institutions, in Germany, or around the world. They are decision-makers with the power to control the working and job situation of tens of thousands of current and future scientists. Hence, they ought to be the most solution-oriented, evidence-based individuals we can find. I was shocked to learn what an embarrassingly naive assumption that was.

In my defense, I was not alone in my incredulity, but maybe that only goes to show how insulated scientists are from political realities. As usual, there were of course gradations between individuals, but at the same time there seemed to be a discernible split between what could be termed the evidence-based camp (scientists and other professionals) and the ideology-based camp (the institutional leaders). With one exception, I won’t attribute any of the instances I recount below to any particular individual, as we had better focus on solutions to this more general obstructive attitude rather than on a debate about the individuals’ qualifications.

On the scientific side, the meeting brought together a number of thought leaders detailing how different components of the scientific community perform. For instance, we learned that peer review is quite capable of weeding out obviously weak research proposals, but that, when it comes to establishing a ranking among the non-flawed proposals, it rarely performs better than chance. We learned that gender and institutional biases are rampant among reviewers and that many rankings are devoid of any empirical basis. Essentially, neither peer review nor metrics perform at the level we expect of them. It became clear that we need to find solutions to the lock-in effect, the Matthew effect and the performance paradox, and to some extent it also became clear what some potential solutions may be. Reassuringly, different people from different fields, using data from different disciplines, arrived at quite similar conclusions. The emerging picture was clear: we have quite a good empirical grasp of which approaches are working and, in particular, which are not. Importantly, as a community we have plenty of reasonable and realistic ideas for how to remedy the non-working components.

However, whenever a particular piece of evidence was presented, one of the science leaders would get up and proclaim “In my experience, this does not happen”, or “I cannot see this bias”, or “I have overseen a good 600 grant reviews in my career and these reviews worked just fine”. Looking back, an all too common pattern at this meeting was that of scientists presenting data and evidence, only to be countered by a prominent ex-scientist with an evidence-free “I disagree”. It became quite obvious that we do not suffer from a lack of insight, but rather from a lack of implementation.

Perhaps the most egregious, and hence illustrative, example was the behavior of the longest-serving university president in Germany, Wolfgang Herrmann, during the final panel discussion (see #gplr on Twitter for pictures and live comments). This will be the one exception to the rule of not naming individuals. Herrmann was the first to speak, and his very first sentence emphasized that the most important objective for a university must be to get rid of its mediocre, incompetent and ignorant staff. He obviously did not include himself in that group, but made clear that he knew how to tell who should be classified as such. When asked what advice he would give university presidents, he replied that they ought to rule autocratically, ideally by using ‘participation’ as a means of appeasing the underlings (he mentioned students and faculty), as most faculty were unfit for democracy anyway. Throughout the panel, Herrmann continually commended the German Excellence Initiative, in particular for its ‘raised international visibility’ (whatever that means) and for ‘breaking up old structures’ (no idea). When I confronted him with the cold, hard data showing that the only parts of universities to gain any advantage from the initiative were their administrations, and asked why that didn’t mean the initiative had, in fact, failed spectacularly, his reply was: “I don’t think I need to answer that question”. In essence, this reply in particular, and the repeated evidence-resistant attitude in general, dismissed the entire symposium as a futile exercise of the ‘reality-based community’, while the big leaders were out there creating the reality for the underlings to evaluate, study and measure.

Such behaviors are not surprising when we hear them from politicians, but from (ex-)scientists? At the first incident or two, I still thought I had misheard or misunderstood – after all, there was little discernible reaction from the audience. Later I found out that I was not the only one who was shocked. After the conference, some attendees discussed several questions: Can years of leading a scientific institution really make you so completely impervious to evidence? Do such positions of power necessarily wipe out all scientific thinking, or was there not all that much of it there to begin with? Do we select for evidence-resistant science leaders, or is being or becoming evidence-resistant in some way a prerequisite for striving for such a position? What if these ex-scientists have always had this nonchalant attitude towards data? Should we scrutinize their old work more closely for questionable research practices?

While for me personally such behavior would clearly and unambiguously disqualify an individual from any leading position, relieving these individuals of their responsibilities is probably not the best solution: judging from last week’s meeting, there are simply too many of them. Instead, an informal discussion after the end of the symposium suggested that a more promising approach may be a different meeting format: one where the leaders aren’t propped up for target practice, but included in a cooperative setting, where admitting that some things are in need of improvement does not lead to any loss of face. Clearly, the evidence and the data need to inform policy. If decision-makers keep ignoring the outcomes of empirical research on the way we do science, we might as well drop all efforts to collect the evidence.

Apparently, this was the first such conference at the national level in Germany. If we can’t find a way for the data presented there to have tangible consequences for science policy, it may well have been the last. Is this a phenomenon people observe in other countries as well, and if so, how are they trying to solve it?
