Time and time again, academic publishers have managed to create the impression that publishing incurs costs high enough to justify the outrageous prices they charge, be that US$11M p.a. for an Elsevier Big Deal subscription or an article processing charge (APC) of US$5,200 for a Nature Communications article. This week, yet another academic publisher, SpringerNature, reassured its readers that it has huge costs that necessitate the prices it charges. This time, the publisher repeated its testimony from 2004 that it has “high internal costs” amounting to €10,000–30,000 per published article:
Springer Nature estimates that it costs, on average, €10,000–30,000 to publish an article in one of its Nature-branded journals
However, in their 2004 testimony, where they state figures in a similar range, they explain how they arrive at these numbers:
The £30,000 figure was arrived at simply by dividing the annual income of Nature (£30 million) by the number of research papers published (1,000).
This means that what the publisher is referring to isn’t its cost of publishing at all; it is the price it charges the public for all of its services.
It is well established that the cost of making an article public, with all the bells and whistles that come with an academic article, is between US$/€200–500. This is the item one would reasonably call “publication costs”. Because these costs are so low, this item cannot be the main reason for the price of a typical Nature-branded article. SpringerNature performs additional services, some of which are somewhat related to the publication process, others not so much.
For instance, journals such as Nature or Science reject about 92% of all submitted articles, and someone needs to read and reject all of them. Such “selectivity” is explicitly mentioned as a reason for the high prices. It is important to keep in mind that this expensive selectivity fails to accomplish any increase in quality and is thus completely ineffective. The entire practice is very reminiscent of how potatoes were introduced in France. Nevertheless, the salaries of the employees who reject all these manuscripts are a cost item, effective or not. As this item only concerns the articles that are not published, it sounds rather absurd to lump it in with “publication costs”, even though it is sort of related (non-publication costs? rejection costs?).
Another cost item is the paywall used to prevent non-subscribers from accessing the articles. Such paywalls can be very expensive: the New York Times, for instance, is reported to have spent anywhere between US$25–55M on its paywall. This cost item, I would argue, is even less related to publishing.
Finally, there are cost items that are completely and rather uncontroversially unrelated to publishing, such as salaries for management, executives or government relations, as well as other costs such as journalism and news services, the latter explicitly mentioned in the recent article.
All of these cost items together make up the ~€10,000–30,000 that are currently being paid for an article in the SpringerNature stable, and there is no reason to doubt this price tag. Importantly, peer review is not a cost item, as the reviewers are paid via their academic salaries and not by the publisher. Authors are not paid by the publisher either, so they are not a cost item. The person organizing the peer review is usually one of the people rejecting all those other manuscripts, so 92% of their salary is already covered by the rejection costs.
If the scholarly community accepts this price as reasonable, it needs to be prepared to explain to the tax-payer why it is justified to use public funds to pay a private company such as SpringerNature less than ~€500 to publish a scholarly article in one of their journals, and then an additional ~€29,500 for cost items such as ineffectively rejecting articles, making sure the ‘published’ articles remain difficult to access for most ordinary tax-payers, and the salaries of the company’s executives and lobbyists.
Since cOAlition S is asking for recommendations from the community for the implementation of their Plan S, I have also chipped in. In their feedback form, they ask two questions, to which I have answered with the replies below. With more than 700 such recommendations already posted, I am not deluding myself that anybody is going to read mine, so, for the record, here are my answers (with links added that I haven’t added in the form):
Is there anything unclear or are there any issues that have not been addressed by the guidance document?
The document is very clear and I support the principles behind it. The only major issue left unaddressed is the real threat of universal APC-based OA as a potential outcome. This unintended consequence is particularly pernicious because it would merely change the accessibility of the literature (currently not even a major issue, hence the many Big Deal cancellations world-wide), leaving all other factors untouched. A consequence of universal APC-based OA is that monetary inequity would be added to a scholarly infrastructure that is already rife with replication issues, other inequities and a dearth of digital functionalities. Moreover, the available evidence suggests that authors’ publishing strategies weigh prestige and other factors more heavily than cost, which explains the observation of already rising APCs. A price cap is de facto unenforceable, as authors will pay any price above the cap if they deem the cost worth the benefit. Here in Germany, it has become routine over the last decade to pay any APC above the €2,000 cap imposed by the DFG from other sources; hence, APCs have risen unimpeded in Germany as well over the last ten years. A switch away from journal-based evaluations, as intended by DORA, would change authors’ publication strategies only once hardly any evaluations were conducted by journal rank any more, a time point decades in the future, given the current ubiquitous use of journal rank despite decades of arguments against the practice. Thus, the currently available evidence suggests that a switch to universal APC-based OA, all else remaining equal, would likely lead to a massive deterioration of the current status quo, in particular at the expense of the most vulnerable scholars and to the benefit of the already successful players. Therefore, rather than pushing access to only the literature (no longer a major problem) at all costs, universal APC-based OA needs to be avoided at all costs.
A minor issue is that Plan S does not address any research output other than text-based narratives. Why is, e.g., research data only mentioned in passing, and code/software even explicitly relegated to “external” repositories? Data and code are not second-class research objects.
Are there other mechanisms or requirements funders should consider to foster full and immediate Open Access of research outputs?
Individual mandates prior to Plan S (e.g., Liège, NIH, etc.) have proven effective, especially when leveraged across large numbers of researchers, where they can have a noticeable impact on the accessibility of research publications. Widespread adoption of these policy instruments is also a clear sign of a broader consensus about what good, modern scholarship entails. However, so far, these mandates have not only failed to cover research outputs other than scholarly publications; some of them have also proven difficult to enforce or contained incentives for APC-based OA (see above). A small change to routine proceedings at most funding agencies today could solve these problems, prevent unintended consequences and complement Plan S. In support of Plan S, this small change has been called “Plan I” (for infrastructure). The routine proceedings that would need amending or expanding are the infrastructure requirements the agencies place on recipient institutions. Specific infrastructure requirements are often in place and enforced for, e.g., applications concerning particular (mostly expensive) equipment. General infrastructure requirements (e.g., data repositories, long-term archiving, etc.) are often in place for all grant applications, but are more rarely enforced. Finally, most funding agencies already only consider applications from accredited institutions, which have passed some basic level of infrastructure scrutiny. The required amendment or expansion would merely extend the enforcement of the infrastructure requirements to all applications, and would need to be specific with regard to the type of infrastructure required for all research outputs, i.e., narratives (often text), data and code/software. Thus, Plan I requires institutions to provide grant recipients with the infrastructure they need to offer full and immediate Open Access to all of their research outputs (and hence to comply with the Plan S principles, not just the implementation).
Here is an abbreviated list of Plan I advantages:
(publisher) services become substitutable
permanently low costs due to actual competition
no author-facing charges
desired journal functionalities can be copied
if subscription funds are used for implementation, the demise of journals will accelerate journal-independent evaluations
cost-neutral solutions for data/code
no individual mandates required that might violate the sense of academic freedom
technically easy implementation of modern digital properties for all research objects
modern sorting, filtering and discovery tools replace the 17th-century editorial/journal system
implementation of social technology that serves the scholarly community
sustainable long-term archiving that becomes catastrophe-proof with distributed technology
permanent, legal, public access to all research objects, with licensing under the control of the scholarly community.
Over the last ten years, scientific funding agencies across the globe have implemented policies that mandate certain behaviors from their grant recipients. For instance, the NIH OA policy mandates that research articles describing research the NIH funded must be available via PubMed Central within 12 months of publication. Other funders, and also some institutions, have implemented various policies with similar mandates.
In principle, such mandates are great, not only because they demonstrate the intention of the mandating organization to put the interest of the public over the interests of authors and publishers. They can also be quite effective, to some extent, as the NIH mandate or the one from the University of Liège have shown.
At the same time, such individual mandates are suboptimal for a variety of reasons, e.g.:
In general, mandates are evidence that the system is not working as intended. After all, the point of a mandate is to force people to behave in a way they otherwise would not. Mandates are thus no more than stop-gap measures for a badly designed system, instead of measures designed to eliminate the underlying systemic reasons for the undesired behavior.
Funder mandates also seem designed to counteract an unintended consequence of competitive grant awards: competitive behavior. To be awarded research grants, what counts are publications, both many and in the right journals. So researchers will make sure no competitor gets any inside information too early and will try to close off as much of their research for as long as possible, including text, data and code. Mandates are designed to counteract this competitive behavior, which means that funders incentivize one behavior on the one hand and punish it with a mandate on the other. This is not what one would call clever design.
Depending on the range of behaviors they are intended to control, mandates are also notoriously difficult and tedious to monitor and enforce. For instance, if the mandate concerns depositing a copy of a publication in a repository, manual checks would have to be performed for each grant recipient; this is the reason the NIH has introduced automatic deposition in PMC. If re-use licenses are mandated, they also need to be tested for compliance. If only certain types of journals qualify for compliance, some 30,000 journals need to be vetted – or at least those where grant recipients have published. Caps on article processing charges (APCs) are essentially impossible to enforce, as no funder has jurisdiction over what private companies can ask for their products, nor the possibility to legally monitor the bank accounts of grant recipients for payments above mandated spending caps. Here in Germany, our funder, the DFG, has had an APC cap in place for more than ten years now, and grant recipients simply pay any amount exceeding the cap from other sources.
In countries such as Germany, where academic freedom is written into the constitution, such individual mandates are considered an infringement of this basic right. There currently is a lawsuit in Germany, brought by several law professors against their university for mandating the deposit of a copy of all articles in the university’s repository. In such countries, the mandate solution is highly likely to fail.
Mandates, as the name implies, are a form of coercion, forcing people to behave in ways they would not otherwise behave. Besides the bureaucratic effort needed to monitor and enforce compliance, mandates are bound to be met with resistance by those coerced into performing additional work that takes time away from work seen as more pressing or important. There may thus be resistance to both the implementation and the enforcement of mandates that appear too coercive, reducing their effectiveness.
For about as long as the individual mandates have existed, if not longer, funders have also provided guidelines for the kind of infrastructure institutions should provide grant recipients with. In contrast to individual mandates, these guidelines have not been enforced at all. For instance, the DFG endorses the European Charter for Access to Research Infrastructures and suggests (in more than one document) that institutions provide DFG grant recipients with research infrastructure that includes, e.g., data repositories for access and long-term archiving. To my knowledge, such repositories are far from standard at German institutions. In addition, the DFG is part of an ongoing, nation-wide initiative to strengthen digital infrastructures for text, data and code. As an example, within this initiative, we have created guidelines for how research institutions should support the creation and use of scientific code and software. However, to this day, there is no mechanism in place to certify the compliance of funded institutions with these documents.
In the light of these aspects, would it not be wise to enforce these guidelines to the extent that using these research infrastructures would save researchers effort and make them compliant with the individual mandates at the same time? In other words, could funders not save a lot of time and energy by requiring institutions to provide research infrastructure that enables their grant recipients to effortlessly become compliant with individual mandates? Such institutional ‘mandates’ would in fact make the desired behavior also the most time- and effort-saving behavior, perhaps rendering individual mandates redundant.
Instead of monitoring individual grant recipients or journals or articles, funders would only have to implement, e.g., a certification procedure. Only applications from certified institutions would qualify for research grants. Such strict requirements are rather commonplace as, e.g., in many countries only accredited institutions qualify. Moreover, on top of such general requirements, there can be very specific infrastructure requirements for certain projects, such as a core facility for certain high-throughput experiments. In this case, the specifications can even extend to certain research and technical staff and whether or not the core facility needs permanent staffing or temporary positions. Thus, it seems, such a certification procedure would be a rather small step for funders already set up to monitor institutions for their infrastructure capabilities.
If groups of funders, such as cOAlition S, coordinated their technical requirements as they have been coordinating their individual mandates, the resulting infrastructure requirements would include FAIR principles, which would lead to a decentralized, interoperable infrastructure under the governance of the scientific community. As this infrastructure is intended to replace current subscription publishing with a platform that integrates our text-based narratives with our data and code, it would be straightforward for the funders to suggest that an obvious source of funds for the required infrastructure would be subscriptions. As most scholarly articles are available without subscriptions anyway and implementing the infrastructure is much cheaper, on average, than subscriptions, the implementation should be possible without disruption and with considerable cost reductions for the institutions. If an institution considers their library to be the traditional place where the output of scholars is curated, made accessible and archived, then there would not even have to be a redirection of funds from library subscriptions to different infrastructure units – the money would stay within the libraries. But of course, institutions would in principle remain free to source the funds any way they see fit.
Libraries themselves would not only see a massive upgrade, as they would now be one of the most central infrastructure units within each institution; they would also rid themselves of the loathsome negotiations with the parasitic publishers, a task which, librarians tell me, no librarian loves. Through their media expertise and their experience with direct user contact, libraries would also be ideally placed to handle the implementation of the infrastructure and the training of users.
Faculty would then enjoy never having to worry about their data or their code ever again, as their institutions would now have an infrastructure that automatically takes care of these outputs. Inasmuch as institutions were to cancel subscriptions, there would also be no alternative, free or paid, to publishing via the infrastructure provided by the institutions, as the cash-strapped publishers would have to close down their journals. Moreover, the integration of authoring systems with scientific data and code makes drafting manuscripts much easier, and publication/submission becomes a single click, such that any faculty member who values their time will use this system simply because it is superior to the antiquated way we publish today. Faculty as readers will also use this system, as it comes with modern, customizable sorting, filtering and discovery tools, vastly surpassing any filtering the ancient journals could ever accomplish.
Taken together, such a certification process would be only a small step for funders already inclined to push harder to make the research they fund accessible; it would save institutions a lot of money every year, be welcomed by libraries, and save time for faculty, who would not have to be forced to use this conveniently invisible infrastructure.
Open standards underlying the infrastructure ensure a lively market of service providers, as the standards make the services truly substitutable: if an institution is not satisfied with the service of company A, it can choose company B for the next contract, ensuring sufficient competition to keep prices down permanently. For this reason, objections to such a certification process can only come from one group of stakeholders: the legacy publishers, who, faced with actual competition, will no longer be able to enjoy their huge profit margins, while all other stakeholders enjoy a much improved situation all around.
The second poster is on Monday afternoon, Nov 5, poster number 407.23, board UU1, entitled “CRISPR/Cas9-based genome editing of the FoxP locus in Drosophila”. For this poster, Ottavia Palazzo created several fly lines in which the FoxP gene locus was modified by, for instance, inserting a GAL4 reporter in place of important parts of the gene, creating loss-of-function alleles. Ottavia has created a range of useful constitutive and conditional manipulations, and the first characterizations of the constitutive lines are presented on this poster. Postdoc Anders Eriksson and intern Klara Krmpotic performed some of the behavioral tests, and the monoclonal antibodies are being generated in the lab of Diana Pauly with the help of her graduate student Nicole Schäfer. Bachelor student Julia Dobbert helped with some of the molecular work, and postdoc Matthias Raß taught and supervised all of Ottavia’s and Julia’s molecular work.
On the occasion of the first “BigDataDay” at our university, I have summarized, in the poster below, our two main efforts to automate the publication of our tiny raw data.
On the left is our project automating Buridan data deposition at FigShare using the ROpenSci plugin, and the consequence of just sending the links to the data and the evaluation code to a publisher, instead of pixel-based figures, when submitting a paper. Most of this work was done by postdoc Julien Colomb several years ago, when I was still in Berlin.
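For illustration, here is a minimal sketch of what such scripted deposition could look like. Note that the original automation used R and the rOpenSci tooling, so this Python version is merely an illustrative assumption; the endpoint and fields follow the public Figshare v2 REST API, and the token and metadata values are placeholders:

```python
# Hypothetical sketch of scripted data deposition; endpoint and fields follow
# the public Figshare v2 REST API, all values are placeholders.
import requests

TOKEN = "..."  # personal access token from the Figshare account settings
API = "https://api.figshare.com/v2"

# Create a (private) dataset article to hold the raw Buridan data.
response = requests.post(
    f"{API}/account/articles",
    headers={"Authorization": f"token {TOKEN}"},
    json={"title": "Buridan raw data", "defined_type": "dataset"},
)
response.raise_for_status()

# The returned location is the kind of link that goes to the publisher upon
# submission, instead of a pixel-based figure.
print(response.json()["location"])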
On the right is our ongoing project of automating flight-simulator data deposition with our library. We designed a YAML/XML meta-data format loosely based on the Frictionless Data standard. Our evaluation script reads a YAML file that contains the experimental design (i.e., which raw data file belongs to which experimental group) as well as formalized commands for the kinds of statistical tests and graphs to be generated. From this information, each experiment (i.e., each XML file) is evaluated, and a quality-control HTML document is written that covers numerous aspects of the raw data to ensure the quality of the data in each experiment. The same information from the YAML file is used to compute an evaluation HTML document with group-wise evaluations. All the raw data and evaluation files are linked with each other, and the XML files link not only to the repository with the evaluation script, but also to the repository with the software that collected the data and to the data model explaining the variables in the raw data files. Ideally, by dragging and dropping figures with statistics into a manuscript, published scholarly articles would link to all of the files generated for the publication. A client-side Python script is called upon user login to compare the local project folder with the folder on the library’s publication server for synchronization.
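To make the YAML-driven evaluation step concrete, here is a minimal, hypothetical sketch; the YAML keys, file names and the dummy parser are illustrative assumptions, not our actual format:

```python
# Minimal sketch of a YAML-driven evaluation, with assumed keys and dummy data.
import random
import yaml                # PyYAML
from scipy import stats

# Assumed design-file structure: 'groups' maps group names to per-experiment
# XML files, 'tests' names the formalized statistical evaluations to run.
DESIGN = yaml.safe_load("""
groups:
  control:      [exp001.xml, exp002.xml, exp003.xml]
  experimental: [exp004.xml, exp005.xml, exp006.xml]
tests:
  - {type: wilcoxon, variable: performance_index, groups: [control, experimental]}
""")

def load_variable(xml_file, variable):
    """Stand-in for the real parser, which would read the per-experiment XML
    file, follow its link to the data model and extract the named variable."""
    return [random.gauss(0, 1) for _ in range(20)]  # dummy values

for test in DESIGN["tests"]:
    a, b = (
        [v for f in DESIGN["groups"][g] for v in load_variable(f, test["variable"])]
        for g in test["groups"]
    )
    if test["type"] == "wilcoxon":
        statistic, p = stats.ranksums(a, b)
        print(f"{test['variable']}: statistic={statistic:.2f}, p={p:.3f}")
```

The quality-control and evaluation HTML documents would then be rendered from the same parsed design, so a single YAML file drives the entire evaluation.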
The recent publication of the “Ten Principles of Plan S” has sparked numerous discussions, among which academic freedom has been a recurring theme. The cause for these discussions is the insistence of the funders supporting Plan S that their grant recipients publish only in certain venues and under certain liberal licensing schemes.
Germany is likely among the countries with the strongest protection of academic freedom, as article five of our constitution explicitly guarantees it. Historically, this has always included free choice of publishing venue. As modern internet technology keeps encroaching on academic traditions, there is now a lawsuit pending at the constitutional court of Germany over whether open access mandates, requiring scholars to deposit a copy of their published article in an institutional repository, violate academic freedom. The lawsuit was started by several law professors with the support of the DHV (the main organization representing academic interests in Germany) and the publishers’ organization “Börsenverein”. It will be interesting to see whether the traditional and widespread mandates on publishing in certain venues (by sanction of unemployment) will also be brought before the court.
I have asked the law professors behind the lawsuit for their opinion about the current infringements on academic freedom by employers, but did not receive a reply. An independent question to a practicing lawyer specialized in constitutional law was answered to the effect that the current requirements are, in principle, infringements on academic freedom equivalent to open access mandates, if either is considered an infringement at all. However, the lawyer advised waiting until the current ruling has been published and noted that a suit could only be brought by affected parties. In Switzerland, a law professor recently confirmed the German lawyer’s conclusion that funder mandates and employer requirements can be considered equivalent. In his case, the conclusion was that neither infringes on academic freedom.
Thus, even in Germany, a country with arguably one of the strongest constitutional protections of academic freedom, it is far from certain whether any or all requirements regarding publication venue constitute an infringement of these constitutional rights. Ongoing legal proceedings will help clarify this question. As a non-lawyer, I would tend to argue that if open access mandates are considered violations of academic freedom, then the requirements to publish in certain journals must also fall. Conversely, if current practice is no infringement, neither are open access mandates.
For the main argument of this post, however, let’s assume both funder and employer mandates were considered an infringement of academic freedom, i.e., the German constitutional court bans any and all policies that push academics into publishing in certain places, whether funder or employer requirements. Would such a strong interpretation of academic freedom automatically entail that the tax-payer has to fund every possible publication venue an academic might choose?
Let’s imagine some amusing hypothetical scenarios. They are not meant to be realistic, but to exemplify the difficulty of reconciling individual freedoms with the responsible spending of public funds:
A group of ecologists makes an exciting discovery about local wildlife and they decide to exercise their academic freedom and publish their discovery by buying billboard space across the region, to alert the general public of the precious finding.
A group of biomedical researchers finds a novel drug target for a deadly disease and they decide to exercise their academic freedom and publish their discovery by publishing it in double page advertisements in major newspapers to make sure every drug maker in the world receives enough information to develop the cure.
A group of social psychologists discovers that cluttered environments promote racist language and theft. They decide to exercise their academic freedom and publish their discovery in a prestigious subscription journal. The collective price of the subscriptions to this journal averages out to about 100 times the technical publishing costs of the article. On top of the exorbitant price tag, the journal’s paywall and policies limit access both to the research and to the data upon which the publication rests.
A group of geneticists discovers a new mechanism of inheritance and they decide to publish their results in a prestigious journal. The journal recently flipped from the subscription model to author-pays, ‘gold’ open access. Because their chosen journal is highly selective, which is the basis of its prestige, the author-side fee is 200 times that of the technical costs of publishing the article.
Obviously, all four of these scenarios are ‘batshit crazy’, and nobody in their right mind would even try to defend any of them against tax-payer (or general accounting office) scrutiny, or try to align them with university spending rules. And yet, scenario number three is current reality, and number four would soon become reality if Plan S and other such funder policies supporting ‘gold’ open access became standard practice (see here for more reasons why this aspect could lead to severe unintended consequences).
Arguing from a strong notion of academic freedom, would one therefore, to be consistent, have to require that all scenarios be funded by the tax-payer, or none? If all should be funded, where should the funds come from to pay for even the most outrageously extravagant venues academics might choose? If none ought to be funded, on what rules are these decisions to be based?
Squaring constitutional rights with public spending is not an easy task. Since I am no legal scholar by any stretch of the imagination, I would tend to argue along the lines of the common notion that my freedom to swing my arms ends at your nose (a saying I learned from Timothy Gowers). Publicly funded institutions commonly have to obey strict spending rules. This has a long tradition, as this document from 1942 shows:
The awarding of contracts by municipal and other public corporations is of vital importance to all of us, as citizens and taxpayers. Careless and inefficient standards and procedures for awarding these important community commitments have increased unnecessarily the tax burdens of the public. To secure a standard by which the awarding of public contracts can be made efficiently and economically, and with fairness to both the community and the bidders, the constitutions of some states, and the statutes regulating municipal and public corporations provide for the award of public contracts to the lowest responsible bidder.
Who would argue that academic freedom should exempt academics from such spending rules? On the contrary, shouldn’t these spending rules require public institutions to find the most cost-effective way to fulfill their mission, regardless of what venue academics might prefer to publish in? This latter consequence would entail that buying subscription access to publicly funded scholarly works does not qualify as a cost worth spending public money on. How can institutions escape such violation of their spending rules while simultaneously allowing their faculty to exercise their academic freedom?
Here, I suggest that the current, rational, modern resolution of the conflict between academic freedom and spending rules is to provide academics with a cost-effective publishing infrastructure and with the freedom to decide whether they want to use it or not. The infrastructure would be maintained by the institutions and either serviced by them or by bidding contractors, like any other infrastructure. Scholars have the choice to either use this infrastructure at no cost to them or find funds to choose any other venue. Given the current abysmal state of publishing functionality, together with the extinction of existing journals without subscription funding, a quite rapid shift in publishing culture would be expected.
As current subscription spending is roughly ten times what is needed to keep publishing going at current levels, one would not expect much of a disruption from obeying spending rules also in academic publishing. On the contrary: 90% of current funds would, if these spending levels were sustained, help improve the current publishing user experience and help implement a modern infrastructure that services not only our text, but also our data and code.
It’s now been 24 years since Stevan Harnad sparked the open access movement by suggesting in his “subversive proposal” in 1994 that scholars ought to just publish their scholarly articles on the internet:
If every esoteric author in the world this very day established a globally accessible local ftp archive for every piece of esoteric writing he did from this day forward, the long-heralded transition from paper publication to purely electronic publication (of esoteric research) would follow suit almost immediately.
Since then, we have been waiting for the behavior of scholars to change, such that all our works indeed become accessible. This is what has become known as the “culture shift” in academia, without which no actual change in our practice can happen. However, no such change can be seen, not even after all these years. Instead, open access mandates and other policies have been developed to force scholars to perform certain behaviors they wouldn’t perform otherwise. Even in fields where such deposition of articles has become common, the authors still adhere to toll-access publishing, not for reading or scholarly communication, but for career advancement – an obscenely expensive and perverse outsourcing practice.
Why does such behavioral change take so long? Many of Stevan’s colleagues at the time have since retired, and a large section of the scholarly workforce has been replaced with a new generation – one would think, if anything, a more internet-savvy one than the previous.
In this post, I will try to make the argument that our mistake was to expect behavior to change when the reasons for the behavior have not changed. As a behavioral neuroscientist, I have learned that, all else being equal and depending on time-scales, among the best predictors of future behavior is past behavior. Thus, if we analyze why scholars behave the way they do with regards to open scholarship, we may be more likely to affect that behavior.
Why isn’t everybody using preprint servers? What keeps people from posting their data and code on any of the proliferating repositories? What is the reason funders feel they need to use mandates to get scholars to comply with open science ideals? Why are the non-activist, regular scholars either lethargic or outright hostile?
In the last decade, it seemed as if the answer to these questions was “because of the reward system!” or “because incentives are missing!” – as if scholars only ever do anything if they are rewarded or incentivized for it. I think the answer lies elsewhere. It can be articulated as two main reasons:
1) They do not care and hence do not know: scholars care about their scholarship and they shouldn’t have to care about such questions. These questions are exactly what infrastructure should be taking care of, not scholars.
2) They have good reasons to close their scholarship: lack of time, fear of competition, privacy concerns, etc.
I’m simplifying, of course, but I would nevertheless tend to argue that, together, 1) and 2) explain most of scholars’ behavior with regard to open scholarship. Those who care, know and do not have good reasons to be closed are people like open scholarship activists, e.g., yours truly. The other 99.5% are the ones who resist “culture change” for reason 1), reason 2) or both. I’m rather skeptical anything can make either 1) or 2) go away any time soon, let alone both.
Hence, rather than fighting 1) and 2), as we have been for two decades now, I suggest using them in our favor.
A recent poll in our biology/medicine department exemplifies how this might work: when polled about which software they currently use to prepare images (microscope images, gel pictures and such) for publication, the majority of department members answered “PowerPoint”. Now, I assume that most everybody on here would understand that PowerPoint is not the, ahem, ideal, most professional software for this kind of work. 🙂 On the contrary, submission in PPTX format is explicitly discouraged by most publishers. This means that the majority of people in our department use a tool that is not the most professional one for the task at hand, and whose format is discouraged for submission. What funder mandates could be in place to encourage such odd behavior? Which tenure committee rewards compliant over superior tool use? Where is the academic incentive system that pushes scholars to choose PowerPoint over the better-suited alternatives?
Obviously, there are no mandates or tenure committees incentivizing the use of such suboptimal tools. Scholars are doing this entirely of their own accord. Why on earth would educated people do something like that? The answer is straightforward: for the same reasons 1) and 2) above! Most faculty don’t care and hence don’t know: they use what comes on their computers, pre-installed by the university. Or they have a good reason to use this tool: it’s good enough for them and they don’t have the money for Photoshop, or would rather spend the money on experiments than on software. Or they find ImageJ too hard to use, as they are already familiar with the ubiquitous PowerPoint and can’t be bothered to switch. Or installing new software is just too much of a hassle. Et cetera.
With this example in mind, how do you get scholars to choose open publishing alternatives over legacy publishers? How do you get them to use open evaluation procedures over impact factor? How do you get them to save their data to a repository, rather than on their thumb drive? You provide them, automatically, free of charge and ready to use, with the tools you want them to employ, with the default settings (i.e., open) you prefer. The large majority who doesn’t care and hence doesn’t know will just use what’s convenient, quick and free, so they can focus on what matters most to them: their scholarship. Those who have good reasons to make their work closed will balance these against the potential negative consequences (e.g., more time and effort, potential suspicions if everything else is open, etc.) and be able to make their work as closed as it needs to be for them. Of course, ideally, such tools come with their own reasons why one would want to use them, such as increased efficiency over legacy tools or new, more and better functionalities. Conversely, equally obviously, you stop providing scholars with anything you don’t want them to use, such as subscription journals. Or impact factors. Or typewriters.
Since subscriptions globally run at about US$10bn every year, and the technology for a scholarly commons can be had off the shelf, the kind of modern infrastructure that would get scholars to change their behavior can be bought with nothing more than the funds saved by subscription cancellations. As such an infrastructure would provide scholars with a superior toolset, it would also add ‘efficiency’ and ‘functionality’ to reason 2) as reasons why scholars would use this new, open infrastructure.
Notwithstanding the barrage of criticisms and warnings from every corner of the scholarly community, various initiatives, mainly in the Netherlands, Finland, Germany, France and the UK, continue their efforts for a smooth transition from subscriptions to open access without any further disruptions. The underlying idea is to shift the subscription funds to article processing charges (APCs). While most share the sentiment that subscriptions need to go and the saved funds should be used for publishing, rather than reading, this particular approach has drawn widespread criticism.
The main point of contention is whether this approach is sustainable in the long run. There seem to be unanimous expectations that prices will drop with such a transition. These hopes are mainly based on current estimates of APC prices around 40% below current subscription costs (calculated as cost per article). One may argue that a mere drop of 40% is still a gift to legacy publishers when actual publishing costs are more like 90% below current subscription prices, but this shall not be the main point here.
The main question is: even if publishers eventually agree to a 40% drop in revenue (which seems more likely than not at this point, despite public publisher resistance), how can we keep them from increasing prices back to old levels? The initiatives mentioned above are notably silent on how this could be achieved. One hears of rigid price caps and sincere promises of definitely not paying “outrageous prices”, but so far no clear strategy has been made public for what is going to happen if a publisher raises prices above inflation, or even by, say, 40% or 70%. I haven’t even heard of such a strategy being discussed.
As I see it, there are two main positions these initiatives are taking: for one, there are price caps above which libraries are not supposed to pay APCs. The second amounts to simply walking away from any contract renewal if the prices become too high. Both cases entail that faculty will have to foot at least some of the bill to publish.
At our institution, we have price caps for our APC fund, and people here simply pay either from their budget or out of their own pocket if the APC exceeds this cap. Given that APCs scale with journal prestige (Nature Communications being a particularly egregious example, with APCs that already exceed average subscription costs per article), it is precisely the most sought-after journals, where scholars are most desperate to publish, that will be affected by this policy:
[Figure: APCs scale with IF, data from two different studies]
Phrased differently: in the richest institutions, faculty will be able to publish in the most prestigious journals while in the poorer ones, they will have to pay extra for the same glory – or just not be able to publish there at all. In this “APC-capped world”, the rich will stay in science, the poor will have to drop out.
Irrespective of any price caps, with a decades-long publisher track record of double-digit price increases year over year (subscriptions or APCs), every library, then (“serials crisis”) as now, will be faced with budgetary constraints that force them to contemplate dropping the Big Deal for a given publisher. How likely have libraries been to drop subscriptions in such circumstances? Not very, it seems. Despite constant supra-inflation price hikes, libraries have cut other acquisitions, explored additional revenue streams and enlarged their subscription budgets over several decades. Faculty demand, it is said, is paramount, and hence all publisher demands, no matter how outrageous, have historically been met, to this day, when publisher profits have swelled to 40% and higher. Thus, the fact that faculty need to read has made it virtually impossible for libraries to walk away from subscription negotiations, allowing publishers to essentially charge what they want and get away with it.
How will this dynamic change once libraries cease to pay for read-access and pay instead for write-access? Without a subscription, every article behind a paywall is just a few extra clicks or some wait time further away than with a subscription. Nevertheless, resistance to subscription cancellations is as high as ever, with some librarians even fearing for their jobs should they cancel subscriptions perceived as vital for some of their faculty.
Having lived through several subscription cuts at various institutions, I find this scenario wildly unrealistic, but not being a librarian, I have to take such statements at face value. Apparently, if libraries were to cancel read-access through their institutions, forcing faculty to add a few clicks to their reading habits, some librarians might risk their jobs. If that is indeed the case, what is a realistic scenario for APC cancellations? What might happen to librarians if, instead of asking their faculty to spend a few additional clicks on essential articles, they blocked their faculty from publishing by no longer paying APCs? Given the ‘publish or perish’ culture we live in, it is straightforward to assume that should this happen, one very realistic scenario is that of faculty marching on the library with torches, setting fire to the building and tarring and feathering all inside who could not escape.
Obviously, if libraries have extended hyperinflationary subscription Big Deals out of fear of faculty reprisals for decades, they will do so with even more fervor and conviction for Open Access Big Deals, as such cancellations would be orders of magnitude worse for their faculty than subscription cancellations. Here is a brief comparison of the subscription scenario with the Big OA Deal scenario:
Thus, the hope of transitioning away from subscriptions without major disruptions is illusory. In fact, the unintended consequences of such well-meaning efforts will likely be worse than what we have today.
We are looking for a permanent, full-time technician, arguably the most important position in our laboratory. The main perks that come with the position are that it is permanent and that we are a small group of very enthusiastic colleagues where there is always something different going on. For those so inclined, we also offer the possibility to conduct their own research projects, to whatever extent the candidate feels comfortable with. There are also no fixed start or end times for our working days: as long as it is daytime for the flies, the candidate is free to schedule their work hours (40.1h per week) for maximum work-life balance. Compensation follows German rules, in this case TVL-E7.
The routine tasks of the position are flexible and limited: maintenance of the Drosophila stock collection, preparation of media, ordering of consumables, preparing flies for behavioral experiments and perhaps some histology or molecular biology every now and then. Support of experimentation is variable and dependent on current projects. This is the area where the most flexibility arises for the successful candidate. Some support of student courses in terms of preparing materials and other technical assistance is expected.
The successful candidate will have a BTA/MTA or an equivalent degree, experience in laboratory logistics, and ideally also in insect/Drosophila handling and breeding. Additional experience in IT, molecular biology or other research areas is advantageous but not required.
While it is helpful to speak German, it is not required. English is the language of our laboratory, and it would be difficult to contribute to our work without speaking it at least at a conversational level.
In his fantastic Peters Memorial Lecture, on the occasion of receiving CNI’s Paul Evan Peters award, Herbert Van de Sompel of Los Alamos National Laboratory described my calls to drop subscriptions as “radical” and “extremist” (starting at about minute 58):
Regardless of what Herbert called my views, this is a must-see presentation in which Herbert essentially presents the technology and standards behind the functionalities I have been asking for and have been trying to get implemented for the last decade or so. Apparently, where we differ is only that I actually want to use the functionalities and concepts he describes in his presentation and, consequently, I am naive or idealistic enough to think of ways to get there. If this makes me a radical, so be it: radix is Latin for ‘root’ and I try to tackle the root of our problems.
Right before he talks about me, he also mentions David Lewis’ 2.5% Commitment, which I also support. In Cameron Neylon’s critique of Lewis’ approach, one can find an important realization that bears quoting and repeating, as it points to one of the main obstacles and explains why Herbert thinks we will never have the tools he describes in his presentation. Cameron writes:
That in turn is the problem in the scholarly communication space. Such shared identities and notions of consent do not exist. The somewhat unproductive argument over whether it is the libraries responsibility to cut subscriptions or academics responsibility to ask them to illustrates this. It is actually a shared responsibility, but one that is not supported by sense of shared identity and purpose, certainly not of shared governance. And notably success stories in cutting subscriptions all feature serious efforts to form and strengthen shared identity and purpose within an institution before taking action.
Cameron very astutely dissects one of the main sociological issues holding us back: scholars do not share a common identity any more, just as librarians and faculty do not, and just as different scholarly institutions do not share an identity with each other. So pernicious an effect have the neoliberal mantras of “competition” and “every man for himself” had on scholarship that it has all but completely disintegrated into either warring factions or competing careerists. University rankings provide a clear metaphor for scholarly institutions as players in a competition for whatever the neoliberal ideologues want them to compete for: funds, human resources or prestige (a.k.a. the scholarly fetish “excellence”). If you talk to current university presidents, deans or provosts, or read what they have written, it seems as if most of them have completely absorbed the neoliberal Kool-Aid and made themselves the defenders of individuality, competition and external ‘incentives’, with the underlying assumption that without those concepts, everybody in academia would just sit in their comfy chairs and collect tax funds, twiddling their thumbs. Apparently, carrots and sticks are the only way to squeeze excellence out of otherwise lazy, selfish and parasitic scholars. Ironically, election data suggest that a large section of these scholarly politicians, if they are representative of their academic peers at large, may go on to vote for left-of-center parties or candidates who vow to combat exactly the neoliberal policies they so ardently defend in their day jobs.
Be that as it may, apparently even for an advocate and expert like Herbert, asking scholars to cooperate in order to achieve a greater, public good has now become sufficient grounds to label someone who strives for cooperation a “radical” or an “extremist”. If he is indeed correct, and in 2018 asking scholars to behave cooperatively rather than competitively is something so exotic and outrageous, then scholarship has deserved the state it currently is in.
These thoughts have reminded me of an old cartoon I’ve been showing in many of my presentations. Now, I’m posting a disambiguated version of the cartoon (sorry, I can’t provide a source for the cartoon; I created it from a photograph I was once sent) that I hope explains in an entertaining way why dropping all subscriptions and buying Herbert’s solutions with the money instead isn’t extreme at all:
All scholars, and those working to support scholars, share a common identity. Cameron is spot on in that all too few realize that we all strive for better scholarship, for more knowledge. Acquiring knowledge for its own sake is one of the very few behaviors that humans do not share with other animals, and all scholars share a particular enthusiasm for knowledge. In fact, the German word for scholarship is “Wissenschaft”, literally translated as “knowledge creation”. In this argument, it doesn’t matter if scholar A is at institution X and librarian B is at institution Y – they are all scholars.
I must assume (not having been there) that this sense of communalism (to use Merton’s term) and shared identity (to use Cameron’s term) was much more prevalent in the early 1990s, when institutions invested in routers, cables, computers and other hardware (and time!) for something nobody knew what it could do: the WWW. I often wonder what faculty would have said around 1992 (the first time I had an email address), had a computing center employee asked them: “Wouldn’t you like a new service, let’s call it ‘email’, by which your students could reach you 24/7?”. I would tend to believe that if that had been the mindset of infrastructure experts at the time, we would not have any internet today.
Instead, infrastructure experts at the time embraced the new technology, were competent enough to realize which standards worked and would be sustainable long into the future, and started spending some serious money – regardless of whether faculty expressed any interest in using any of this technology. In contrast, today we stand to save money by adopting the standards Herbert talks about, and yet thinking about how to practically achieve adoption of such common standards is grounds for being labeled an extremist. How dare I suggest implementing modern technology without asking faculty first! Today, we have similarly competent experts like Herbert, but they seem to despair, expecting this modern technology never to arrive for scholarship, instead of doing what their predecessors did: embrace the new technology and the potential it brings, and implement it. What a difference 25 years make: the common good was sufficient cause for spending money in the 1990s, whereas today it is seen as ‘extremist’ just to try and save money while promoting the common good.
Today, librarians and other infrastructure experts dare not implement modern technology for fear of reprisals: after all, faculty are no longer colleagues who share a common identity; they are customers, and librarians are service providers in this corporation called ‘university’ only for dusty historical reasons. Clearly, single institutions cannot act without risking league-table standings or the competitiveness of their labor force. Everyone is busy chasing prestige in an absurd, artificial competition where “excellence” is the only thing that counts, but cannot itself be counted. Some of Monty Python’s most absurd sketches appear rational in comparison.
When done competently, dropping subscriptions today no longer risks anybody’s livelihood or league standing. Thanks to a growing set of tools, journal articles remain accessible during the transition period. The old adage “everybody who needs access has access”, once used to resist open access campaigning, has finally become true – without subscriptions. We just need to take advantage of the new circumstances. After the transition, nobody needs access to journals that no longer exist, so the enabling properties of this toolset are decisive here; that the toolset itself does not comprise a solution becomes irrelevant: we have the solutions, as Herbert so eloquently explains. What we need are enabling technologies, and we now have those, too. Most journals won’t survive being cut off from all funding.
And yet, faculty continue to chase journal spots as vehicles for their discoveries, from which they hope to harvest sufficient prestige just to keep going. Without removing this source of prestige, faculty and students/postdocs have little choice but to reject the better vehicles we could now offer. This is the main reason why 25 years of campaigning for scholarly infrastructure reform have barely brought scholarship to embrace the web of 1995 (in the words of Jon Tennant). Journals, the square wheels, are the main physical obstacles to the technology Herbert describes in his presentation. They need to go. That is a rational solution that targets the root of the problem. If scholars can find the Higgs boson, I’m confident they can find other sources of prestige once journals have ceased to exist – should they decide that chasing prestige is a functionality they wish to replicate.
Coincidentally, journal subscriptions also usurp most of the funds required for implementing Herbert’s solutions – the round wheels. Canceling subscriptions hence serves two main purposes: removing the main obstacle for scholars using modern information technology and freeing up funds to implement said technology: removing the square wheels and replacing them with round wheels. Subscription journals are the keystone in the current scholarly communication arch: remove them and it all falls apart. Any journal-like functionality that scholars value is easily recreated with modern technology, but with new functionalities and few, if any, of the current disadvantages and unintended consequences.
Finally, with scholars so busy chasing excellence, chances are slim to none that they will ever ask for round wheels, as so many librarians I speak to seem to hope they would.