There are those who demand journal peer-review be paid extra on top of academic salaries. Let’s have a look at the financials of that proposal.
The article linked above confirms common rates for academic consulting, i.e., anything between US$100 per hour for graduate students and US$350 per hour for faculty. Taking a conservative US$200 as an easy, lower-bound estimate for, say, a post-doc hour seems to cover most cases. Clearly, many academics, especially from high-income countries, will refuse to work for such low pay. In order to mitigate potential exploitative developments, US$200 appears more like a bare minimum.
How long does peer-review take? A recent article examined this question in great detail and concluded that one round of peer-review takes an average of about 6h and that an average published article requires about three such reviews, i.e., 18h of peer-review per published article, or an additional cost of US$3600 per published article, if this peer-review were adequately paid.
We also know that pure publication costs of an average article are about US$600 and non-publication costs accrue to about US$2200:
Taking all of the above together, the total cost of an average peer-reviewed journal article would increase from about US$2,800 now to about US$6,400 with adequately paid peer-review. Add to that a conservative profit margin in this sector of around 30% and the average price for a peer-reviewed journal article would come to US$8,320.
Compared to now, paying peer-reviewers adequately would stand to more than double the price of an average article from ~$4k to >$8k and increase publisher profits from currently ~$1.2k to nearly $2k per article. Those are the figures one needs to take into account in this discussion.
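The back-of-the-envelope arithmetic above can be sketched in a few lines of Python; all figures come from the estimates in the text, and the variable names are mine:

```python
# Sketch of the cost arithmetic from the text; all figures in US$.

REVIEWER_RATE = 200      # conservative hourly rate for a reviewer
HOURS_PER_REVIEW = 6     # average duration of one round of peer-review
REVIEWS_PER_ARTICLE = 3  # average number of reviews per published article

PUBLICATION_COST = 600       # pure publication cost per average article
NON_PUBLICATION_COST = 2200  # non-publication costs per average article
PROFIT_MARGIN = 0.30         # conservative sector profit margin

review_cost = REVIEWER_RATE * HOURS_PER_REVIEW * REVIEWS_PER_ARTICLE
cost_now = PUBLICATION_COST + NON_PUBLICATION_COST
cost_paid_review = cost_now + review_cost

price_paid_review = cost_paid_review * (1 + PROFIT_MARGIN)
profit_paid_review = price_paid_review - cost_paid_review

print(f"Peer-review cost per article: ${review_cost}")              # $3600
print(f"Article cost now / with paid review: ${cost_now} / ${cost_paid_review}")  # $2800 / $6400
print(f"Price with 30% margin: ${price_paid_review:.0f}")           # 8320
print(f"Publisher profit per article: ${profit_paid_review:.0f}")   # 1920
```

Running the sketch reproduces the figures above: US$3,600 of review cost on top of US$2,800 of publisher costs, and a 30% margin on the resulting US$6,400 yields the US$8,320 price and the nearly US$2k profit per article.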
Or let’s think a few sizes smaller and imagine a renowned pulmonologist taking the, say, “Marlboro Endowed Chair” in the Mayo Clinic’s Pulmonary Medicine Division, sponsored by Philip Morris. What would they have to endure and how much credibility would they and the institutions lose? Undoubtedly, the foul stench of corruption and hypocrisy would be difficult to counter. Which is probably why such hypotheticals appear more like April Fools jokes than realistic possibilities. Who in their right mind would do such a thing?
And yet, in times when news and parody become increasingly difficult to discern, something comparable has happened in Germany. Two weeks ago, Heinz Pampel announced that he is leaving his post as “Open Science Officer & Assistant Head of Helmholtz Open Science Office” to take up an Elsevier-funded professorship with the Einstein Center for Digital Future at the Humboldt University Berlin. In his many years working towards Open Science, Dr. Pampel had gained visibility through his numerous scholarly articles on the topic, became a member of countless committees and working groups for various Open Science initiatives, received grant funding for Open Science projects and was a sought-after expert for interviews on Open Science.
(UPDATE, 25-11-23: Little did I know at the time of this writing that such a person actually existed before Pampel: Derek Yach headed the “Foundation for a Smoke-Free World”, which had a single sponsor: Philip Morris. Like Pampel, Yach claimed the foundation was entirely independent, yet nobody believed him and nobody wanted to be associated with his foundation. He was ousted in 2021.)
Dr. Pampel’s new position had been announced in 2017 as “based on the existing cooperation between the Humboldt University Berlin and the Humboldt-Elsevier Advanced Data and Text Centre” (HEADT) and he is the second person to sit in this chair, succeeding Rebecca Frank. Was this supposed to be an April Fools joke just before Christmas?
What makes this already quite bizarre story even more concerning is one of the initiatives Dr. Pampel is active in. A few weeks ago, he was lead author on an important document arising from one of the working groups of the German Alliance of Research Organizations. The document contained guidelines for all German research organizations and consortia (notably DEAL) on how to negotiate “transformative agreements” with academic publishers such as Elsevier. One new aspect of these guidelines, absent from previous versions, had been eagerly awaited: the section on how negotiators ought to handle the tracking technology that publishers have been using for the last ten years to monetize the personal data of academic users. Last year had seen a bombshell publication from the main German funder (DFG), decrying the practice for threatening academic freedom (a constitutional right in Germany), among a list of other concerns. This publication triggered a wave of interest and, of course, broad condemnation of the publishers. Just a few days ago, the topic of “science tracking” was also featured in the German newspaper FAZ.
One major goal for the Open Science movement is precisely to push back against the privatization of science as a public good. Tracking scientists in real time, their behavior and their momentary research interests, seemed like a raised middle finger: “we can privatize you if we want and there is nothing you can do about it!”. So the question on everybody’s mind for the last year was how to get the corporations to stop tracking science. The expectations were high when the guidelines finally came out. However, early reactions from some experts can best be described as ‘underwhelmed’: the guidelines seemed relatively lax and non-committal, compared to the discussions and widespread condemnation the surveillance had received in the meetings leading up to this document and in the broader community. Renke Siems, who was one of the first to raise the alarm about the publishers’ surveillance practices in Germany, called the guidelines “helpless” and “inadequate”. Other comments from the community were even less generous. It raises serious questions when, just a week later, it becomes known that one of the lead authors had applied for an Elsevier-funded position while drafting the guidelines.
Dr. Pampel doesn’t have the global attention of a Greta Thunberg, of course. Nevertheless, it is fair to assume that he is probably at least as known and respected in the German Open Science Community as a renowned pulmonologist could be for German lung cancer research. He didn’t receive the lead author position on the Allianz guidelines for nothing. The volume of such contracts in Germany is about 200 million € per year, so this is not a document concerning peanuts, either.
In his work towards Open Science, Dr. Pampel always strove for reconciliation and was not known for openly attacking the publishers. So while Dr. Pampel’s position may have been to at least try to work with the corporations rather than against them, it never seemed in doubt that he was on Team Scholarship – the Team Scholarship that values the public good over profit, that values the needs of society and science over those of corporations. The fact that, of all the corporations involved in academia, Dr. Pampel has decided to side with the one that like no other stands for investing billions of $/€ over decades to flagrantly and unapologetically oppose everything Team Scholarship strives for, just reeks of hypocrisy, even betrayal – no matter what he says in defense of his decision. One can easily imagine Elsevier’s glee about their latest acquisition. Whether and to what extent the Berlin Einstein Center/Foundation is also funded by Elsevier is currently the subject of a freedom of information request.
//This post was originally published in German, on the blog of Jan Martin Wiarda.
I just sent the poster for this year’s Society for Neuroscience meeting to the printer. As our graduate student is preparing his defense and our postdoc did not get a visa (no thanks, US!), we just have a single poster this year and I will present it myself on Monday, November 14, 2022, 8:00 AM – 12:00 PM, on poster board WW53.
Contrary to our plans, this poster will not just contain the work of postdoc Dr. Radostina Lyutova, but also some PhD thesis results from graduate student Andreas Ehweiner. The general topic is how animals learn from the consequences of their actions. To this end, we train Drosophila fruit flies in different situations, with or without genetic manipulations of the nervous system. In the first situation, the tethered animals control a punishing heat beam with their yaw torque (roughly corresponding to turning attempts around the vertical body axis). In such experiments, wild type animals form a motor memory after 8 minutes of training. We have called the type of learning giving rise to this memory self-learning, because the animals are exclusively learning about their own behavior. This is in contrast to a second, very similar experiment, where we have made one seemingly small, but highly consequential modification. In this ‘composite’ situation, the animal is controlling the heat as before, but now the environment of the fly is colored either green or blue, depending on the attempted turning direction. In other words, now the animal can not only form a motor memory about the un/punished turning directions, it can also learn which color is associated with heat. We call this form of learning world-learning, because the animal is learning about relationships in the world around it.
More than ten years ago now, we developed tests to probe which learning mechanism is engaged under which circumstances.
For instance, both after yaw torque training (where the self-learning mechanism is engaged) and after composite training, we test for torque preference (i.e., whether a motor memory has been formed) without colors. After eight minutes of torque training, wild type flies show motor memory, while after eight minutes of composite training, no such motor memory can be detected. We are hypothesizing that the world-learning mechanism is engaged in composite training and inhibits the self-learning mechanism. Without colors, the self-learning mechanism is free to operate, so motor memory is formed during torque training.
On the poster, we now show the most recent results on the molecular components of the self-learning mechanism and on how the world-learning mechanism inhibits self-learning. We found that the gene for the atypical protein kinase C (aPKC) is necessary for self-learning and that it likely acts in motor neurons that also express the FoxP gene. When we overexpress a constitutively active form of aPKC, this improves self-learning so much that such flies not only form a motor memory after 4 minutes of training (not sufficient for learning in wild type flies), but can also overcome the inhibition of motor learning by the world-learning mechanism, such that motor memory is detectable after eight minutes of composite training. Because we knew that this inhibition acts via neurons in the neuropil called the mushroom body, we also screened mushroom body output neurons (MBONs) to find out where the inhibition is transmitted from the mushroom bodies. We silenced these MBONs one category at a time, looking for lines which showed motor memory, similar to the flies with the improved self-learning. We found three candidate lines and the rescreen of these lines is still in progress.
The poster can be downloaded by clicking on the picture below:
Wikipedia defines ‘embezzlement‘ as “the act of withholding assets for the purpose of conversion of such assets”. Google defines it as “misappropriation of funds placed in one’s trust”:
If one takes the position that researchers at public institutions are entrusted with public funds to spend on research in the public interest, then researchers spending public funds on something that mainly benefits them personally rather than the public, may be considered in dangerous territory. The point here cannot be to make a legal case. This exercise is more of an attempt to analyze if current or future researcher publication practice can be ethically condoned, broadly speaking.
So let’s look at the current publication practice of researchers. Due to the traditional reward structure, researchers aim to publish in the most prestigious journals, in order to benefit from that prestige in tenure, hiring and promotion decisions. In subscription times, in which we still partially live, this practice does not come with immediate changes in the cost/pricing structure. However, this picture changes dramatically when Open Access publications are considered, where the journals demand payment of an article processing charge (APC). It has been documented exhaustively over several studies that these APCs scale with journal prestige. This situation provides incentives for authors to choose the most expensive publication option and there are two studies that have found such effects already:
authors choose to publish in more expensive journals
higher APCs were actually associated with increased article volumes
One argument used in favor of publications in prestigious journals is article ‘quality’: research in prestigious journals is often considered of higher quality. However, that notion is not supported by the evidence which suggests that prestigious journals struggle to reach even average reliability. This leaves current author practice mainly benefiting the personal careers of researchers at the expense of the public purse.
Taken together, in a future world where every published article is financed by APCs, all else remaining equal, there will be strong incentives for authors to spend more public funds in order to benefit their own careers. Inasmuch as the excess public funds spent on prestigious journals support the spread and attention to unreliable science, this additional damage caused by this practice likely exceeds the pecuniary damage.
Legal scholars will have to weigh in on whether spending public funds exclusively for career purposes in the way described would actually constitute embezzlement in the legal sense. However, even if that bar is not met, incentives to maximize the expenditure of public funds are never advisable. There are already too many such incentives in research (e.g., overhead, grant funds in hiring, tenure, promotion, etc.). It does not seem wise to add yet another. Eventually, the scholarly community is accountable to the public. Accumulating incentives for wasting the public funds the scholarly community has been entrusted with, does not seem like an advisable strategy.
The development of the internet by scholarly institutions should have been the opening bell for a broader thought process within scholarly societies of how this new technology may revolutionize scholarly discourse. After all, the purpose and mission of scholarly societies is the building and maintenance of communities and fields of study. Nobody was more aware of that than Henry Oldenburg when he founded the first scientific journal for his society in 1665, the “Philosophical Transactions of the Royal Society”. Granted, with the journal lagging the invention of movable type and the Gutenberg-type printing technology by about 200 years, Oldenburg wasn’t really taking advantage of what one would call modern technology, but his Royal Society was only founded in 1660, so he clearly reacted quickly to the needs of his members.
The same cannot be said of his successors in today’s many scholarly societies. Prioritizing revenue, it was not lost on them that technologies such as email and browsers would allow them to copy their mail and journal distributions into a digital format, at huge savings. Judging from their public messaging since then, for the exact same financial reasons, they have refused to invest even the smallest amount of innovation or progressive thought into what broader opportunities internet technology might provide for their mission going forward. Instead, it looks as if all of that intellectual energy flowed into conserving outdated concepts and demonizing digital progress as a threat to every scholarly society’s revenue. For nearly three decades now, scholarly societies, collectively, appear to have been preoccupied with looking back, at the expense of looking forward. Not even after 2006, when the term “social media” should have provided an etymological prompt even for the dimmest of professional “society” administrators, was there a change of direction. Apparently, the best some of our societies can do these days is install forum technology from the late 1990s and call it “community”.
Much has been written about the consequences this reactionary attitude has had on the public accessibility of the scholarship these societies have been publishing. Here, I would like to speculate on perhaps much more pernicious consequences.
Ever since the infamous “Flame Wars” in the Usenet/Newsgroups of the 1990s (and probably already before that), one aspect of online discourse had become obvious to nearly any user: online discourse can (and sometimes inevitably will) escalate rapidly into something nobody would call ‘civil’ any more. Passionate debate is something scholars have been actively participating in and contributing to for centuries. From early rivalries of ‘gentleman’ scientists, via back-and-forth publications of journal articles, editorials or commentaries, to later antagonism between authors, editors and reviewers, there have been numerous and varied occasions for learning how to pursue reasoned discourse in a productive, scholarly way. It is impossible to predict which direction online discourse would have taken, or which functionalities current social media would have implemented, had scholarly societies perceived online discourse as an opportunity for their mission rather than a threat to their revenue. With scholarly communication far from perfect, it is also not clear whether academics were ever in any position to claim superior knowledge. However, it also seems unlikely that early, systematic and competent engagement by scholarly communities, with the goal of facilitating scholarly discourse in a way that minimizes the chances of escalation and radicalization, would have left the course of history entirely unchanged.
Looking back 30 years, it is difficult to escape the impression that civic discourse has become less civil. The road from alternative facts to online mobs to death threats and real-world political violence and bloodshed has shortened significantly. Radicalization of large sections of voters and a more general drifting apart of political positions over time is not specific to the internet age, but it appears to be facilitated by social media giving marginal groups or ideas an audience they lacked before. The capability of bringing marginal ideas to a broader audience cuts both ways: ideas that modern society benefits from can be amplified just as much as those it thought it had left in the dustbin of history. Scholarly debate is all about deciding which hypotheses and theories should be kept and pursued and which should be abandoned. Sifting facts from alternative facts, and how to bring the former to prominence and the latter into obscurity, is a problem that could have been tackled 30 years ago, if the new internet had been a focus of thought for the societies who claim to exist to foster scholarly communication, rather than a means to use scholars to generate revenue to pay their staff. In order to be productive, all scholarly discussions need to be as passionate as necessary to be engaging and thought-provoking, but also as civil as possible so as not to provoke anger and aggression. Individual personalities vary and this topic comes up in various online discussions time and again, but collectively there appears to be a fairly broad Goldilocks zone of scholarly discourse where most scholars feel welcome and comfortable and discussions are productive.
So while nobody can know how civic discourse would look today had scholarly societies leaned less to the green, I’d argue that at least now, 30 years later, would be a good time to stop, take pause and think hard about whether the luddite path really is the one scholarly societies want to keep marching on. Personally, I’d go even further: I blame the majority of our scholarly societies for putting, over these past three decades, their revenue before their mission. This perverse prioritization has not only hampered scholarly communication in general and the accessibility of scholarly articles specifically, it has also caused the scholarly community to miss a window of opportunity in which it could have had an outsize influence on civic society at large by influencing the means and rules of communication. Some may argue that this paints an exaggerated picture of an actually very limited influence of the scholarly community beyond academia, but given the role these institutions played in developing the internet, I fear such arguments may be motivated by the desire to deflect responsibility. It is my impression that the collective digital torpor of academia, especially after it had implemented the internet, leaving the reins to corporations and political activists, is partly to blame for the way things are online today. In other words, is it possible that, had scholarly societies not been so preoccupied with defending their revenue at all costs, they could have assumed a central role in the development of social technologies and helped shape the way they operate?
This last aspect deserves a final paragraph. The large, influential societies have traditionally replied that their revenue is important either for their mission in general, or for their ECR services, their lobbying or any of their other activities. Looking at the political situation in the leading scientific nations of the world today and beyond, it appears that very early investment of intellectual energy into the means and rules of public discourse, and into how to facilitate productive reasoning and rationality, at a time when these were still in their digital infancy, would possibly have accomplished more than all the dollars, euros and pounds poured into campaigns, political lobbying or awareness weeks combined. As such, scholarly societies have not only failed their members and scholarship writ large, they have also missed a golden opportunity to also help make the world outside of academia a little bit more evidence-based.
The first conference after the Sars CoV2 pandemic! We’re headed for Paris, France tomorrow and our lab will present two posters, the work of graduate student Andreas Ehweiner and postdoc Radostina Lyutova.
Andreas has been working on the cellular and genetic mechanisms underlying operant self-learning (a form of motor learning). On his poster, he presents evidence that one crucial site of plasticity for this type of motor learning is motor neurons of the fly ventral nerve cord co-expressing both FoxP and aPKC. Interestingly, FoxP expression in the brain seems to be dispensable for self-learning, and knocking out FoxP in all neurons in the adult only seems to show an effect on learning 14 days after the knock-out. Overexpressing a constitutively active form of aPKC improves motor learning, such that these flies still learn when training has been reduced to a level where wild-type flies no longer show a significant learning score. I’m particularly happy that we now seem to have found which of the five Drosophila PKC genes is involved in operant self-learning. We had been trying to identify the right PKC for a few years. Surprisingly, it is not the same PKC as the one required for self-learning in Aplysia, for instance. Andreas is writing up his thesis at the moment, so he sure would appreciate job offers. Andreas is smart, resourceful and persistent. Here is his poster (link opens A0 PDF):
Radostina studies the regulation of self-learning by other forms of learning. Operant self-learning only requires eight minutes of training (see Andreas’ poster) when no other stimuli can be learned. If such stimuli are present, self-learning requires overtraining of 16 minutes. This procedure is reminiscent of habit formation in vertebrates. Radostina found that habit formation in flies is regulated by a single cell receiving input from a prominent insect neuropil, the mushroom body. If she inhibits this cell, flies show a habit already after eight minutes, suggesting that the inhibition of self-learning is mediated via this mushroom body output neuron (MBON 02). Connectome analysis revealed that MBON 02 input is provided mainly by Kenyon cells from the little-studied lateral and dorsal accessory calyx, which, in turn, receive thermosensory and visual input, respectively. This has been a massive effort over several years, clouded by a case of sabotage, that Radostina has picked up and breathed new life into. Here is her poster:
Posted on July 8, 2022 at 09:37
The market power of academic publishers has been a concern for all those academic fields where publication in scholarly journals is the norm. For most non-economist researchers, the anti-trust aspects of academic publishing are likely confusing and opaque.
For instance, libraries and consortia are exempted from organizing tenders for their publication needs as each article exists only in one journal with one publisher. This is called the single or sole source exemption from procurement law and essentially means that academic publishers have monopolies on each of their articles and hence each of their journals.
At the same time, this conglomerate of monopolies is often referred to as the “publishing market”, where there is market consolidation or concentration, leading up to an “oligopoly”.
So which is it now, a market with competing providers or a conglomerate of monopolists?
The distinction lies in the perspective taken. From the perspective of libraries and readers (the demand side), neither publishers nor journals can be substituted, so the publishers effectively appear as monopolists. From the perspective of publishers (the supply side), they offer content across disciplines and effectively compete for the money available in the sector. However, as libraries and readers have no choice but to strive for maximal coverage of the literature, this competition does not depend on the quality of the products. Instead, publishers devise pricing strategies to obtain the largest piece of the pie. The notorious “Big Deal” bundles, where publishers aim to sell increasingly large journal packages, irrespective of the use of the journals for the institution, are an example of such a pricing strategy.
In a sports analogy, one could see a sector with demand-side substitutability as a gymnastics competition, where gymnasts (companies) aim to present sets that are more difficult and contain fewer mistakes (high-quality products) than their competitors’, in order to gain points (market share) from the judges (customers). An academic publishing sector with only supply-side competition is more like a hot dog eating competition, where libraries and readers would be the hot dogs.
Any analogy fails at some point, so probably more fitting are statements from experts in the field. The European Union has, of course, been very active in this area. For instance, a crucial aspect of market regulation is merger control. In a document from 2001, Dr. Atilano Jorge Padilla reports to the European Commission on “The role of supply-side substitution in the definition of the relevant market in merger control” and writes:
There is a wide consensus among competition authorities, legal experts and economists about the need to refer to demand-side substitutability for defining relevant markets. The same unanimity, however, does not exist in connection with supply-side substitutability. On the contrary, there seems to be substantial controversy as to the relevance of supply-side constraints for market definition
This is consistent with any customer’s subjective experience: the lack of choice is a defining aspect of planned economies, as opposed to market-based economies where consumers always choose among competing products. It thus makes no sense to call a sector without demand-side substitutability a ‘market’.
Since the EU common market aims to protect consumer choice, EC market reports are an excellent source for professional analyses of the academic publishing sector. Over a number of years now, various documents have been produced that corroborate the lack of demand-side substitutability and hence the characterization of the academic publishing sector as a conglomerate of monopolies, rather than a proper market.
From a demand-side point of view, it is rare that two different publications be viewed as perfect substitutes. There usually are differences in the coverage, comprehensiveness and content provided by two different publications. From the point of view of functional interchangeability, two different publications could hardly be regarded as substitutable by the end-users, the readers.
or
Consumers will rarely substitute one publication for another in reaction to their relative prices. In this case, a strict demand approach would lead to the definition of a multitude of relevant markets of imprecise boundaries and small dimensions.
substitution possibilities across journals are limited, so that publishers do have significant market power.
or
Since researchers do not see the various publishers as good substitutes and need access to all good journals, consortia only introduce a relatively weak ‘buyer-power’ counterpart to the rising concentration in the publishing market.
Also in 2015, again on the occasion of a planned merger, the EC confirmed their earlier findings that
from a demand-side point of view, it is rare that two different publications can be viewed as perfect substitutes, as there are differences in the coverage, comprehensiveness and content provided. Therefore, in terms of functional interchangeability, two different publications could hardly be regarded as substitutable by the end-users, the readers. On that basis, the Commission found that consumers will rarely substitute one publication for another following a change in their relative prices and concluded that a strict demand approach would lead to the definition of a multitude of relevant markets of imprecise boundaries and small dimensions.
or
Publications in different academic subjects are indeed not substitutable from the readers’ perspective.
Taken together, over almost 20 years now, the EU has consistently come to the conclusion that academic publishing is not a market in the sense that it does not provide for customer choice (i.e., demand-side substitutability). All of these consistent documents and the fact that publishers obviously enjoy the single/sole source exemption, serve to fully justify the term ‘monopolists’ when speaking of academic publishers.
It is therefore not surprising that now, after so many decades of established fact, scholarly organizations such as the German Council for the Sciences and Humanities also come to the conclusion that
academic publications are a unique, non-substitutable commodity. […] a journal title can give a publisher a non-competitive market position […] Functioning, competition-driven market structures do not exist
Inasmuch as current developments may one day establish demand-side substitutability, e.g., by transforming academic publishing from a content-based to a service-based sector, it is paramount to emphasize that the single-source exemption must then be dropped and replaced by proper tender processes, as required by law for all other products and services, digital or not, where demand-side substitutability exists.
Academia is under attack from two angles, which seems to suggest that we may not have decades to get our house in order.
The first and older prong of this two-pronged attack comes from politics. Around the world, anti-science movements seek to discredit reason and abolish science. Be it abolishing tenure, doubting the value of publicly funded science or forcing entire institutions to close, authoritarian politicians around the globe strive to stifle and subjugate academia. Creationists, anti-vaxxers, climate-deniers or anti-vivisectionists support and play this political game, flush with funds from special interests, broadening the political attack on reason. In some cases, the attacks are explicitly launched referencing unreliable science; others come from different directions. Anything that can be used against scholarship will be used. In addition to these focused attacks on science and often scientists, there is a more diffuse attack on facts, aiming to undermine public trust in institutions, with the goal of increasing the effectiveness of campaigns designed to promote the personal cult around certain individuals: when facts do not matter any more in what has been called a post-factual world, opinions and feelings remain as the only currency in public debate. The consequences of such long-term efforts can be seen in neoliberal policies pushing the corporatization of universities, entangling scholars around the world in precarious working conditions, suffering from hypercompetition and avoiding anything that could be seen as risky. Increased bureaucracy imposing numbers games combines with tiny salaries and huge workloads to create the vulnerable academic, caught in the iron fists of feudal PIs and professional administrators. The degree to which academics must endure such circumstances varies dramatically between institutions and countries, but as a global phenomenon, these conditions stifle innovation and critique and promote not only questionable research practices, but the active dissemination of “hype and hyperbole”: misinformation, breeding distrust.
A second front was opened about ten years ago from an entirely different and mostly unanticipated direction. More than just flush with funds, but this time financed by academia herself, academic publishers started (escalated?) their own attack on science by gobbling up and developing digital surveillance technologies. To expand the sources of user data, these corporations bought digital tools covering all aspects of academic life, from literature search, data analysis, writing, citing or outreach, all the way to citation analysis for research assessment. These corporations formerly known as publishers are using their expanded digital surveillance network to accomplish two separate goals. First, a copy of the data is aggregated with private data from scholarly users and sold, either to advertisers, to law enforcement agencies not allowed to collect such intrusive data themselves, or to any authoritarian government interested in identifying potential opposition intelligentsia. The second goal is to expand the monopolies they enjoy on scholarly content to a monopoly on all scholarly services, i.e., the mother of all vendor lock-ins. Packaging all the different tools in a single bundle and selling it to institutions akin to subscription “Big Deals” would make it impossible for any institution buying such a package to ever switch to a different provider again. An analogy outside of academia would be a merger of Microsoft, SAP, Google and Facebook. There are two corporations so far that stand ready to deploy such bundles: RELX (parent of Elsevier) and Holtzbrinck (SpringerNature, Digital Science). A related data analytics corporation specializing in scholarly data is Clarivate (Web of Science, ProQuest).
Both onslaughts aim to undermine independent scholarship and subjugate it to special interests, be they political or financial. To some extent, both have been quite successful already. In fact, in the use of journal rank and other citation metrics, the political and financial fronts have closed ranks and are cooperating. The worst outcome of succumbing to these attacks would be the destruction of publicly funded science. At best, losing on both fronts would entail academia finding itself permanently strapped in neoliberal purgatory, with a vast precariate, cut-throat competition and results nobody can take seriously any more; in other words: Idiocracy.
It is becoming clear that defending against these attacks, with their multi-faceted consequences all over academia, will take swift and decisive counter-measures, both within academia and in cooperation with initiatives outside. Inside academia, first and foremost, open flanks must be closed to offer the enemy the smallest attack surface possible. Productivity and impact metrics have long been known to be both flawed and counter-productive in that they tend to reward unreliable science. Because this has been known for so long, calls for a reform of the academic reward system are old and have recently redoubled. Such calls are, of course, warranted, justified and appropriate. However, implementation of a replacement may come too late if these calls are not supplemented with actions that operate on much shorter timescales.
With research assessment and peer-review taking place on all levels, from hiring and tenure decisions to promotion, funding and renewal, nearly all scholars are involved, either as participants or as evaluators, often in rotating roles over time. This means that grassroots movements striving for a change in the reward system face the challenge of winning the hearts and minds of every single researcher on the planet. Just how big is this challenge? Citing OECD statistics, the 2018 STM report lists about 7.1 million full-time equivalent researchers globally. Given that many of these will be part-time employees and that sites like ResearchGate list about 17 million users, adding at least 50% to these 7 million, for a total of about 11 million currently active individual researchers, is probably required for a lower-bound estimate. Convincing all of them to change how they assess science is not going to happen overnight.
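The back-of-the-envelope estimate above can be made explicit. Note that the 50% markup is this post's own assumption for converting full-time equivalents to headcount, not an OECD figure:

```python
# Lower-bound estimate of currently active individual researchers,
# following the reasoning above (all figures are rough approximations).
FTE_RESEARCHERS = 7.1e6   # OECD figure cited in the 2018 STM report
PART_TIME_MARKUP = 0.5    # assumed headcount surplus over FTE (at least +50%)

individual_researchers = FTE_RESEARCHERS * (1 + PART_TIME_MARKUP)
print(f"~{individual_researchers / 1e6:.1f} million researchers")  # ~10.7 million, i.e. about 11 million
```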
How could one possibly increase the speed with which academia can replace its reward system? A straightforward approach would be to take away the means that prop up the current reward system, forcing academia to come up with a replacement. Ideally, one would want to replace the technical infrastructure that maintains the current reward system with one that not just necessitates, but facilitates the creation of a new reward system. There are several ways to do this. Last year, ten experts proposed that regulators and funding agencies generate incentives for institutions to shift funds away from the journals upon which much of the current reward system is based, and towards a modern journal replacement that facilitates the development of novel reward solutions.
Such a replacement is capable of stopping the financial onslaught on science and providing the means to guard data against corporate greed. Inasmuch as the old reward system incentivized hype, hyperbole and questionable research practices, the replacement also helps stem the flow of irreproducible science. As such, academia has the power to set these processes in motion in self-defense. Whether academia has the collective will remains to be seen. To defend against the broader attack on facts, such actions can only ever be necessary prerequisites that need to be complemented by other initiatives engaging the broader public.
publishers become publication service providers and enter into competition with other providers
This emphasis on competition refers back to the simple fact that as content (rather than service) providers, legacy publishers currently enjoy monopolies on their content, as, e.g., the European Commission has long recognized: In at least two market analyses, one dating as far back as 2003 and one from 2015, the EC acknowledges the lack of a genuine market due to the lack of substitutability:
it is rare that two different publications can be viewed as perfect substitutes, as there are differences in the coverage, comprehensiveness and content provided. Therefore, in terms of functional interchangeability, two different publications could hardly be regarded as substitutable by the end-users, the readers. On that basis, the Commission found that consumers will rarely substitute one publication for another following a change in their relative prices
or
Publications for different academic subjects are clearly not substitutable from the reader’s point of view. Even within a given discipline, there may be little demand side substitution from the point of view of the individual academic between different publications.
As this lack of substitutability is one of the main sources of the problems associated with academic publishing today, not just the German WR, but many initiatives around the globe see increased competition among publishers as key to moving forward.
As much as these aspects appear uncontroversial to the point that one can find similar wordings in many different texts on the topic, what seems to be lacking from many of these texts is one crucial aspect of the practical consequences of competition: procurement rules demand that tenders be held for products and services above a certain value.
Traditionally, the products of publishers have been exempt from such rules by the “single/sole source exemption”: because the journals sold by the publishers constitute monopolies, there is no substitutability, no competition and hence no way to organize a tender. All of this changes, of course, once the goal is to transform academic publishing into a market with substitutability and hence, competition, as per the public statements in the documents surrounding many “transformative agreements”. Once there is substitutability and the value of the transaction exceeds a given limit, procurement rules mandate tenders.
Heeding its own assessment that content-based (rather than service-based) academic publishing is a conglomerate of monopolies, the EC opted for a tender when it became a customer with academic publishing needs. The result of this tender is “Open Research Europe”, a platform where EU-funded researchers publish for free. Clearly, procurement rules would demand that all institutions follow the example of the EC, stop their negotiations with publishers, and hold analogous tenders instead.
One may argue that authors ought to be able to choose their publication venue, and I agree, of course: there should not be any restrictions on the choice of publication venue. However, arguably, this does not necessarily entail that the public purse must reimburse authors for even their most extravagant publication choices if reasonable substitutes exist.
An analogy may help explain the argument: In many areas of science, transportation is needed for small groups of students and faculty to do field research. According to procurement rules, a tender would be organized and the award may, for instance, go to a company that sells electric vans, seating seven passengers and a driver, and offers a ten-year warranty. Many would probably agree that the cost of some tens of thousands $/€ for each van is reasonable, given the functionality of the vans and their climate-friendly propulsion. Faculty, however, claim that because these vans do not carry enough prestige, each van must instead be replaced with eight 1930s Rolls-Royce Phantom IIs.
Without such prestige, the faculty argue, they cannot work and would risk their careers and funding. Arguments that these ancient vehicles are unreliable, unaffordable and dysfunctional are brushed away by emphasizing that their academic freedom allows them to drive whatever vehicle they want to their field work. Moreover, they argue, the price of around one million is “very attractive” because of the prestige the money buys them.
With this analogy, it becomes clear why and how tenders protect the public interest against any individual interests. In this analogy, it is likely also clear that academic freedom does not and should not trump all other considerations. In this respect, I would consider the analogy very fitting and have always argued for such a balance of public and researcher interests: academic freedom does not automatically exempt academics from procurement rules.
Therefore, ten experts advocate a ban on all negotiations with publishers and, instead, propose policies ensuring that all publication services for public academic institutions must be awarded by tender, analogous to the example set by Open Research Europe and to how all other, non-digital infrastructure contracts are awarded.
tl;dr: Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal.
Some common responses to the proposal to replace the now more than 35,000 peer-reviewed scholarly journals with more modern solutions are “Why do you want to get rid of peer review?” or “How should we know what to read without journals?”.
Both, of course, imply insufficient engagement with the proposal and its consequences. As for the first question regarding peer-review, there is currently very little evidence as to the cost-effectiveness of peer-review, and most comparisons between un-reviewed ‘preprint’ manuscripts and the published articles after peer-review show few substantial differences. Given the huge costs associated with peer-review, one would expect peer-review effectiveness to increase dramatically by modernizing the way it is organized and deployed. As this conclusion and expectation is not new, the literature is brimming with suggestions for improving peer-review, just waiting to be tested. The proposal to replace journals specifically includes a more effective way to leverage the power of peer-review.
The second question implies two implicit assumptions, namely that the prestige inherent in journal rank carries a useful signal and that this signal is actually being commonly used when making decisions about which portion of the scholarly literature to read. Let’s see if there is evidence supporting these assumptions.
There is some evidence in the literature (review of a subsection) that there indeed is a signal conveying more ‘novel’ or ‘breakthrough’ discoveries in the more prestigious journals. However, this literature also emphasizes that the signal-to-noise ratio is very poor: it contains many false positives (articles that turn out to be not particularly novel or ground-breaking) and false negatives (many novel and ground-breaking discoveries are first published in lower-ranking journals). Unfortunately, because of the nature of journal publishing, the full extent to which the false negatives contribute to the noise in the signal cannot be known, as it is commonly not known which submissions have been rejected by any given journal (more about that later). Adding insult to injury, this already noisy signal is degraded even further by the fact that its reliability is no higher (in fact, slightly lower) than in average journals. This means that published articles in prestigious journals, even those that are novel or signify a breakthrough, more often turn out to be irreproducible or contain more errors than articles in average journals. Finally, this weak signal is bought at a price that exceeds the cost of producing such a scholarly article by about a factor of ten. Thus, in conclusion, the prestige signal is noisy, the science it flags is unreliable, and the money it draws from the scholarly community exceeds the cost of just producing the articles by almost an order of magnitude.
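To make the signal-to-noise framing concrete, one can treat “published in a prestigious journal” as a binary classifier for “breakthrough discovery” and quantify it with precision and recall. The counts below are entirely made-up illustrative numbers, since, as noted, the real false-negative counts are unknowable while rejection data stay private:

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision: what fraction of flagged articles are real breakthroughs.
    Recall: what fraction of real breakthroughs get flagged at all."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Hypothetical counts for illustration only.
p, r = precision_recall(true_pos=10, false_pos=90, false_neg=40)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.10, recall=0.20
```

Even such a crude sketch shows why the two error types must be measured separately: a filter can look selective while still missing most breakthroughs published elsewhere.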
It may thus be of little surprise that one can find evidence that this noisy signal, which also tends to flag unreliable science, does not seem to be used as commonly by readers as is sometimes claimed. Unfortunately, there is no direct evidence as to how readers use the scholarly literature, in part, again, because our literature is fragmented into 35,000 different journals. Therefore, currently, we are forced to restrict ourselves to citations as a proxy for reading: one can only cite what one has also read. Using the signal of journal rank (as routinely and reliably measured by impact factor), Lozano et al. predicted subsequent citations to published articles. In other words, the authors tested the assumption that journal rank signals to readers what they should read and, consequently, cite. When looking at it historically (figure below), this predictive power of journal rank starts surprisingly low, yet statistically significant, owing to the authors analyzing millions of articles.
As the serials crisis starts to hit in the 1960s/70s and libraries are forced to use the impact factor to cancel subscriptions to low-ranking journals, this predictive power, not surprisingly, starts to increase, if ever so slightly: you really can only cite what you can read. The advent of keyword searches and online access then abolishes this modest increase and today, the predictive power of journal rank on citations/reading appears to be as low as it ever was. Given that citations to articles in prestigious journals are often used by authors to signal the importance of their topic, even these low numbers are probably overestimating the true effect of journal rank on reading decisions. So there seems to be an effect of prestige on citations/reading, but it is very weak, indicating that either only very few readers are using it or each reader is not applying it very stringently. In other words, the low predictive value of journal rank on future citations can be considered a reflection of users realizing that the publication venue contains little useful information when making choices about what to read.
Noticing an apparent disconnect between these citation data and the fact that journal rank is partly based on citations, Pedro Beltrao pointed to a different study in which high-impact-factor journals, on average, tend to be enriched in papers that will be highly cited in the future, comparing the slice of the 1% most highly cited papers. However, these results only superficially seem to contradict the data presented by Lozano et al. In fact, I would argue that the 1% study actually supports the interpretation that journal rank is only very rarely used as a signal of what to read or cite: because there are more “novel” or “breakthrough” papers in these journals (as detailed above), people will cite these rare 1% papers more often. But that effect hardly changes the probability of the remaining 99% of papers being cited. These papers are ‘regular’ papers that could have been published anywhere. This evidence thus suggests that readers decide what they cite not based on containers, but based on content: the 1% of articles in prestigious journals that are actually novel and ground-breaking are cited more, and the other 99% are cited just as much as articles in any other journal. In fact, this insight is corroborated when comparing citation distributions between journals.
All of this evidence supports the interpretation that yes, there is a signal in journal rank, but it mainly comes from a very low percentage of the papers (which, in turn, are less reliable than average) and hardly translates to the vast majority of the papers in prestigious journals at all: the effect is almost gone when you include all papers. Therefore, arguably, the little effect that remains and can be detected is likely not due to the selection of the editors or reviewers, but due to the selection of submission venue by the authors: they tend to send their most novel and ground-breaking work to the most prestigious journals. The fact that what we think is our most novel and ground-breaking work is too often also our worst work in other respects probably explains why the reliability of publications in prestigious journals is so low: not even the professional editors and their prestigious reviewers are capable of weeding out the unreliability.
Taken together, despite the best efforts of the professional editors and best reviewers the planet has to offer, the input material that prestigious journals have to deal with appears to be the dominant factor behind any ‘novelty’ signal in the stream of publications coming from these journals. Looking at all articles, the effect of all this expensive editorial and reviewer work amounts to probably not much more than a slightly biased random selection, dominated largely by the input and probably only to a very small degree by the filter properties. In this perspective, editors and reviewers appear helplessly overtaxed, tasked with a job that is humanly impossible to perform correctly in the antiquated way it is organized now.
Unfortunately, given the current implementation of the prestige signal by antiquated journals, it remains impossible to access the input material and test the interpretation above. If this interpretation of the evidence were correct, much would stand to be gained by replacing the current implementation of the ‘prestige signal’ with a more modern way to organize it.
How could a more modern system support a ‘prestige signal’ that would actually deserve the moniker? Obviously, if journals were to be replaced by a modern information infrastructure, only our imagination is the limit for which filters the scholarly community may want to implement. Some general ideas may help guide that brainstorming process: If the use of such ‘prestige’ not only in career advancement and funding, but also in the defense of the current system is anything to go by, there should be massive demand for a prestige signal that is worth its price. Today, this prestige arises from selectivity based on expertise (Nature’s slogan always was “the world’s best science”). This entails an expert-based filter that selects only very few (‘the best’) out of the roughly 3 million peer-reviewed articles being published each year. Importantly, there is no a priori need to objectively specify and determine the criteria for this filter in advance. In a scenario after all journals had been replaced by a modern infrastructure for text, data and code, such services (maybe multiple services, competing for our subscriptions?) need only record not just the articles they selected (as now), but also those they explicitly did not select, in addition to the rest that wasn’t even considered. Users (or the services themselves, or both) would then be able to compute track records of such services according to criteria that are important to them.
Mimicking current implementations, e.g., the number of citations could be used to determine which service selected the most highly cited articles, how many it missed and which it falsely didn’t even consider. But why stop at bare citations? A modern infrastructure allows for plenty of different markers of scholarly quality. One could just as well use an (already existing) citation typology to differentiate between different types of citations, one could count replications, media mentions, anything, really, to derive track records by which these services may be compared. Given the plentiful demand indicated by the fervent supporters of prestigious journals, services would compete with each other using their track records for the subscriptions of individual and institutional users, providing for innovation at competitive prices, just like any other service market. Such an efficient, competitive marketplace of services, however, can only ever arise if the current monopoly journals are replaced with a system that allows for such a market to be designed. If demand were not as high as expected, but such a signal nevertheless desired by some, a smaller, more basic implementation could be arranged on a non-profit, open-source basis, funded by the vast savings that replacing journals would entail. One may also opt to hold competitions for such services, awarding prizes to the service that best serves the needs of the scholarly community. The possibilities are endless – but only once the scholarly community finds a way to put itself into a position where it has any power over the implementation of its infrastructure.
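As a minimal sketch of how such track records might be computed once selections and explicit rejections are both recorded: the service name, article IDs and numbers below are all hypothetical, and a real implementation could plug in citation counts, citation types, replications or any other quality marker in place of the simple "later highly cited" set used here:

```python
from dataclasses import dataclass, field

@dataclass
class FilterService:
    """A hypothetical 'prestige' service that records which articles it
    selected AND which it explicitly rejected (unlike today's journals)."""
    name: str
    selected: set = field(default_factory=set)
    rejected: set = field(default_factory=set)

def track_record(service, highly_cited):
    """Score a service against any quality marker, here: the set of
    article IDs that later turned out to be highly cited."""
    hits = len(service.selected & highly_cited)    # correctly picked
    misses = len(service.rejected & highly_cited)  # explicitly turned away
    precision = hits / len(service.selected) if service.selected else 0.0
    return {"hits": hits, "misses": misses, "precision": precision}

# Hypothetical data for illustration only.
svc = FilterService("AcmeHighlights",
                    selected={"a1", "a2", "a3", "a4"},
                    rejected={"a5", "a6"})
later_highly_cited = {"a1", "a5", "a7"}
print(track_record(svc, later_highly_cited))  # {'hits': 1, 'misses': 1, 'precision': 0.25}
```

The key design point is the `rejected` set: recording explicit non-selections is exactly what today's journals do not do, and it is what would make the misses, and hence an honest track record, computable at all.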
Perhaps most importantly, such a modern implementation of the prestige filter offers the promise of actual consequences arising from the strategies these prestige services employ. Today, publishing articles that later turn out to be flawed has virtually no tangible consequences for the already prestigious journals, just as a less prestigious journal can hardly increase its prestige by publishing stellar work. With power comes responsibility and our proposal would allow the scholarly community to hold such services to account, a feat virtually impossible to reach today.