Over the last ten years, scientific funding agencies across the globe have implemented policies that require specific behavior from their grant recipients. For instance, the NIH Public Access Policy mandates that articles describing research the NIH funded be made available via PubMed Central within 12 months of publication. Other funders, and some institutions as well, have implemented policies with similar mandates.
In principle, such mandates are welcome, not only because they demonstrate the mandating organization's intention to put the interest of the public over that of authors and publishers, but also because they can be quite effective, as the NIH mandate and the one from the University of Liège have shown.
At the same time, such individual mandates are suboptimal for a variety of reasons, e.g.:
- In general, mandates are evidence that the system is not working as intended. After all, mandates are designed to force people to behave in a way they otherwise would not. Mandates are thus no more than stop-gap measures for a badly designed system, rather than measures that eliminate the underlying systemic reasons for the undesired behavior.
- Funder mandates also seem designed to counteract an unintended consequence of competitive grant awards: competitive behavior. To be awarded research grants, what counts is publications: many of them, and in the right journals. Researchers will therefore make sure no competitor gets any inside information too early, and will try to keep as much of their research closed off for as long as possible, including text, data and code. Mandates are meant to counteract this competitive behavior, which means that funders incentivize a behavior with one hand and punish it with a mandate with the other. This is not what one would call clever design.
- Depending on the range of behaviors they are intended to control, mandates are also notoriously difficult and tedious to monitor and enforce. If the mandate concerns depositing a copy of a publication in a repository, manual checks would have to be performed for each grant recipient; this is why the NIH has introduced automatic deposition in PMC. If re-use licenses are mandated, they also need to be checked for compliance. If only certain types of journals qualify, some 30,000 journals need to be vetted, or at least those in which grant recipients have published. Caps on article processing charges (APCs) are essentially impossible to enforce, as no funder has jurisdiction over what private companies may charge for their products, nor the legal means to monitor grant recipients' bank accounts for payments above mandated spending caps. Here in Germany, our funder, the DFG, has had an APC cap in place for more than ten years now, and grant recipients simply pay any amount exceeding the cap from other sources.
- In countries such as Germany, where academic freedom is written into the constitution, such individual mandates are considered an infringement of this basic right. There is currently a lawsuit in Germany, brought by several law professors against their university for mandating the deposit of a copy of all articles in the university’s repository. In such countries, the mandate solution is highly likely to fail.
- Mandates, as the name implies, are a form of coercion, forcing people to behave in ways they otherwise would not. Besides the bureaucratic effort needed to monitor and enforce compliance, mandates are bound to be met with resistance by those coerced into performing additional work that takes time away from work seen as more pressing or important. There may thus be resistance to both the implementation and the enforcement of mandates that appear too coercive, reducing their effectiveness.
For about as long as the individual mandates have existed, if not longer, funders have also provided guidelines for the kind of infrastructure institutions should provide their grant recipients with. In contrast to individual mandates, these guidelines have not been enforced at all. For instance, the DFG endorses the European Charter for Access to Research Infrastructures and suggests (in more than one document) that institutions provide DFG grant recipients with research infrastructure that includes, e.g., data repositories for access and long-term archiving. To my knowledge, such repositories are far from standard at German institutions. In addition, the DFG is part of an ongoing, nationwide initiative to strengthen digital infrastructures for text, data and code. Within this initiative, for example, we have created guidelines for how research institutions should support the creation and use of scientific code and software. However, to this day, there is no mechanism in place to certify the compliance of funded institutions with these documents.
In light of this, would it not be wise to enforce these guidelines to the point that using the resulting research infrastructure would save researchers effort and make them compliant with the individual mandates at the same time? In other words, could funders not save a lot of time and energy by requiring institutions to provide research infrastructure that enables their grant recipients to become compliant effortlessly? Such institutional ‘mandates’ would make the desired behavior also the most time- and effort-saving behavior, perhaps rendering individual mandates redundant.
Instead of monitoring individual grant recipients, journals or articles, funders would only have to implement, e.g., a certification procedure: only applications from certified institutions would qualify for research grants. Such strict requirements are rather commonplace; in many countries, for instance, only accredited institutions qualify for funding at all. Moreover, on top of such general requirements, there can be very specific infrastructure requirements for certain projects, such as a core facility for certain high-throughput experiments. In such cases, the specifications can even extend to particular research and technical staff, and to whether the core facility needs permanent or temporary staffing. A certification procedure would thus seem a rather small step for funders already set up to monitor institutions for their infrastructure capabilities.
If groups of funders, such as cOAlition S, coordinated their technical requirements as they have been coordinating their individual mandates, the resulting infrastructure requirements would include the FAIR principles, leading to a decentralized, interoperable infrastructure under the governance of the scientific community. As this infrastructure is intended to replace current subscription publishing with a platform that integrates our text-based narratives with our data and code, it would be straightforward for funders to suggest subscription budgets as an obvious source of funds. Since most scholarly articles are available without subscriptions anyway, and implementing the infrastructure is, on average, much cheaper than subscriptions, the transition should be possible without disruption and with considerable cost reductions for institutions. If an institution considers its library the traditional place where the output of scholars is curated, made accessible and archived, there would not even have to be a redirection of funds from library subscriptions to different infrastructure units: the money would stay within the libraries. But of course, institutions would in principle remain free to source the funds any way they see fit.
Libraries themselves would not only see a massive upgrade, becoming one of the most central infrastructure units within each institution; they would also rid themselves of the loathsome negotiations with the parasitic publishers, a task which, librarians tell me, nobody loves. Through their media expertise and their experience with direct user contact, libraries would also be ideally placed to handle the implementation of the infrastructure and the training of users.
Faculty would never have to worry about their data or their code again, as their institutions would now have an infrastructure that automatically takes care of these outputs. Inasmuch as institutions were to cancel subscriptions, there would also be no alternative venue, free or paid, in which to publish other than the infrastructure provided by the institutions, as the cash-strapped publishers would have to close down their journals. Moreover, the integration of authoring systems with scientific data and code makes drafting manuscripts much easier, and publication/submission becomes a single click, such that any faculty member who values their time will use this system simply because it is superior to the antiquated way we publish today. Faculty as readers will also use this system, as it comes with a modern, customizable sorting, filtering and discovery system, vastly surpassing any filtering the ancient journals could ever accomplish.
Taken together, such a certification process would be only a small step for funders already inclined to push harder to make the research they fund accessible; it would save institutions a lot of money every year, be welcomed by libraries, and save time for faculty, who would not have to be forced to use this conveniently invisible infrastructure.
Open standards underlying the infrastructure would ensure a lively market of service providers, as the standards make the services truly substitutable: if an institution is not satisfied with the service of company A, it can choose company B for the next contract, ensuring sufficient competition to keep prices down permanently. For this reason, objections to such a certification process can only come from one group of stakeholders: the legacy publishers, who, faced with actual competition, would no longer be able to enjoy their huge profit margins, while all other stakeholders enjoy a much improved situation.
There’s a problem with shifting the incentives to the institutions: the university doesn’t necessarily care about its own researchers getting external funding, or cares only in principle. More often than not, at least in Italy, researchers who get external funding fight a lonely battle (or at any rate feel they do). Other people in their institution may even have incentives to see them fail.
Researchers can’t freely move their research group anywhere they want, so you’d need to introduce the possibility of reaching the same “certification” by adhering to some other entity, say an interuniversity research centre or a library consortium, which would then impose its own requirements. But then again, what’s the difference? You’re still forcing someone to accept legal obligations, or relying on someone else to impose them. It doesn’t feel like joining an exclusive club where you get support and are nudged to change your work and your way of doing it (to the point that you might also share more data and move towards open science as well).
For simple open access goals, the main obstacle is copyright. Just make sure that the funder holds (total, non-exclusive) copyright on the publication, and then it can automate its archival and distribution. Achieving this is legally simple: have the author sign their copyright over to the funder entirely before they receive the money, just like any novelist would (so very legally tested: my money for your copyright), with the understanding that the funder will distribute the work under a free license like CC-BY. Any subsequent racketeering contract signed by the author would then be null and void. To make sure the publisher doesn’t claim their own copyright on the published version, you may want to use CC-BY-SA instead and have a policy that the funder only uses public licenses.
Then again, is this really more politically palatable than a clear mandate on the author? The author gives away their copyright as they would anyway with the legacy publishers, and they get someone else to handle the distribution and legal enforcement, as they currently do with their employer or publisher. But you’re still creating a single point of failure. Clearly it would be easier to just change the law so that any eligible entity can handle the archival, and to give up on pointless lawsuits, which can only make things worse for authors overall.
Thank you very much! The point about institutions not supporting their faculty’s research is an interesting one. If that were the case, should they not also have no subscription budget? Shouldn’t textbooks be sufficient for teaching?
In general, there is the problem of how to be inclusive towards non-affiliated scholars (and scholars at institutions without support are de facto unaffiliated with regard to the infrastructure). In these cases, I think, a second affiliation would be an option, allowing the user to use the infrastructure. I admit this is an unsolved problem. However, it is likely a problem already now as well.
The overarching, main goal can no longer just be access to papers. Ever since Unpaywall, Sci-Hub, R4R, etc., access is not really a pressing problem any more. Larger problems, IMHO, are, e.g., reproducibility, functionality and affordability, to name only three. All three can be addressed effectively (though maybe not completely solved, I’m not sure) by an open infrastructure that happens to provide OA along the way. Conversely, providing only OA would leave the most pressing concerns unaddressed. In fact, addressing OA in an isolated way may end up making things worse. I therefore think we need to solve the other three problems first, as doing so will provide OA as a side effect.
I’m not entirely sure I can completely follow your other arguments. Personally, I find forcing adult individuals to do something they intrinsically do not want to do particularly distasteful if there is a way to design the system they work within such that they will want to perform this behavior of their own volition. Thus, requiring institutions to form a system within which individuals will want to perform the behaviors we’d like to see seems more attractive to me than keeping the non-conducive system and forcing the individuals. Did that make sense at all?
Sure, the intention is very clear and I share it.
The goal is for research institutions (broadly defined) to provide better support for all researchers to do the right thing by default. If you don’t have direct power over them, you resort to indirect inducement via their employees. Changing the requirements of mandates can make them more or less effective but doesn’t change the dynamics. You’re using carrot and stick to change behaviour.
I just suspect that if you buy into the “you can’t force people” argument, there’s no end to it. Just reject it. Taxes on cigarettes to reduce their consumption are widely accepted, although it took a few decades to get there. You don’t get to oppose them on the grounds of “it’s my freedom of cultural expression”.
Ok, I see where you are going, I think. In the best of all worlds, institutions would do what they did with TCP/IP, HTTP and HTML in the late 80s and early 90s: “hey, this is cool, this will likely improve a lot of things around here, let’s invest in this new technology” – and we had the internet (I’m simplifying slightly, of course 🙂
Today, institutions go “oh, our rankings are down, let’s invest in two FTEs to massage our data to make us look better”. Given this radical shift from substance to show over the last 30 years, I only see carrots and sticks working for short-term change.
Obviously, the long-term goal would need to be a shift away from show, back to substance!