The question in the title is serious: of the ~US$10 billion we collectively pay publishers annually, world-wide, to hide publicly funded research behind paywalls, we already know that only about US$200–800 million goes towards actual costs. The rest goes towards profits (~3–4 billion) and paywalls/other inefficiencies (~5 billion). What do we get for overpaying for such services by up to 98%? We get a literature that lacks essentially every basic functionality we’ve come to expect from any digital object:
- Limited access
- No global search
- No functional hyperlinks
- No data visualization
- No submission standards
- (Almost) no statistics
- No text/data-mining
- No effective way to sort, filter and discover
- No scientific impact analysis
- Lousy peer-review
- No networking feature
Moreover, inasmuch as we use the literature (i.e., productivity metrics and/or journal rank) to help us select scientists for promotion and funding, we select the candidates publishing the least reliable science.
Taken together: we pay 10 billion for something we could have for 200 million, and what we get in return is a completely antiquated, dysfunctional literature that tricks us into selecting the wrong people. If that isn’t enough to hit the emergency brakes, what is?
We may not be able to buy paradise with 10 billion annually, but with such a low bar, it’s easy to get something that is at least not equally abysmal. The kind of modern technology we could buy would probably solve most of the pressing issues with our literature, cover all our needs in terms of data, and make sure we can cite and reuse all scientific code in a version-controlled manner – and still leave a few billion to play around with every year.
With the fruits of our labor firmly in our own control, we would have a flourishing market of services: whenever our digital infrastructure lacked the functionalities we expect, or became too expensive, we could either switch service providers or hire our own experts – without losing our content.
As an added benefit, cutting off the steady stream of obscene amounts of money to a parasitic industry whose interests are orthogonal to scholarship would prevent further buyouts of scholarly ventures such as Mendeley or SSRN, and with them the disappearance of valuable scholarly data into the dark underbelly of international corporations.
One reason often brought up against canceling subscriptions is that faculty would complain about the resulting lack of access. However, the already published literature can in principle (although substantial technical hurdles still need to be overcome) be accessed via a clever combination of services such as automated, single-click inter-library loan (ILL), LOCKSS and Portico. Moreover, some libraries have seen substantial cost savings by canceling subscriptions and instead paying for individual article downloads. Finally, institutional repositories as well as preprint archives can be leveraged whenever the publisher version isn’t available. With such an effort, most users would likely experience no more than a few hiccups – and since they are already used to patchy access, it wouldn’t look vastly different from what they experience now. It is thus technically feasible to cancel all subscriptions in a way that most users probably wouldn’t even notice. Add an information campaign alerting users that while no major disruption is anticipated during the transition, some minor problems may arise, and everyone will support this two-decades-overdue modernization.
Another reason often provided is that the international cooperation between institutions required for such a system-wide cancellation to be effective would be impossible to accomplish. That is a problem less easily dismissed than the supposed lack of access to the literature. After all, some governments explicitly don’t want their institutions to cooperate; they want them to compete, and even to develop “world-class competitiveness” (page 17).
In this regard, I recently listened to a podcast interview with Benjamin Peters, author of “How Not to Network a Nation”. His description of the failure to develop the internet in the Soviet Union (compared to the successful developments in the West) reads like an account of how not to make open access a reality:
> the American ARPANET took shape thanks to well-managed state subsidies and collaborative research environments and the Soviet network projects stumbled because of unregulated competition among self-interested institutions, bureaucrats, and others.
We need the same collaborative spirit if institutions are to abandon subscriptions – the spirit with which they once cooperated to spend money on drawing cables between institutions, even though they were separated by borders and even oceans. If in the 1980s our institutions collaborated across nations and continents to spend money on a research infrastructure nobody yet knew they needed, can’t they collaborate now to save money being wasted on an obviously outdated infrastructure? Has neoliberal worship of competition poisoned our academic institutions to such a degree that within 25 years they went from cooperating even if it meant spending money, to never cooperating even if it means saving money? I refuse to believe that, even though that’s what some try to tell me.
Instead of trying to tell scholars to behave ethically in spite of the personal downsides, maybe we ought to tell institutions that our infrastructure is outdated and that we need the functionalities everybody but academics is already enjoying. We need to get the same mechanisms going that got our universities to invest in networking hardware and cables despite a functioning telephone and postal system. Canceling subscriptions doesn’t mean losing access, so nobody can tell me that canceling subscriptions is more difficult than installing compatible networking hardware across the globe. I now pay my phone bills out of my own budget, while my Skype calls are provided. Maybe rather than trying to convince scholars to choose the ethical over the advantageous, it would be more effective to ask our institutions to provide us with modern technology – and have those who still want to use the legacy version pay for it themselves?
Framing our issue as an ethical one (“the public needs access to publicly funded research!”) may work, but it is a slow, ineffective and uncertain approach. Framing it as merely a technical modernization strikes me as quick, straightforward and effective.
UPDATE (May 23, 2016):
Some people have asked for evidence that canceling subscriptions and instead relying on individual downloads can save money. Besides hearsay from several libraries, here is a piece of evidence which should be quite convincing that this can work for some publisher/library combinations:
So if libraries were to cooperate in identifying more such opportunities, and to cleverly combine this with LOCKSS and/or Portico as well as smart (single-click) ILL between libraries whose Big Deals have already run out and those which are still running, then, given enough participants, almost all of the already published literature ought to be accessible. It may not be trivial, but it is definitely feasible, and the technical problems are not the main obstacle – it’s the collaboration that needs to be established. Moreover, this only needs to work for a relatively short time, until most of the journals have run dry of funding and ceased to exist.
Triggered by online discussions, a few hypothetical use cases:
- A user requests an older document from a journal their institution no longer subscribes to, but which was accessible in the past. The link resolver checks whether the document can be served via LOCKSS or Portico (it would be odd if LOCKSS/Portico could not serve content that was already purchased!). If not, the resolver extracts the article metadata (importantly, the DOI, for use with DOAI) and screens institutional repositories and places such as PubMed Central, arXiv/bioRxiv or ResearchGate for the document. If all that fails, the link resolver checks for ILL availability with an institution whose Big Deal has not yet expired. If none of these services can serve the document, the user is asked whether they want to send a copy request to the author (single click) or download the individual article and pay the fee (faculty get informed about their usage/costs!).
- A user requests a brand-new article from a journal their institution does not subscribe to. The link resolver extracts the article metadata (importantly, the DOI, for use with DOAI) and checks for availability in repositories. If unavailable, it checks for ILL. If both fail, the user is asked whether they want to send a copy request to the author (single click) or download the individual article and pay the fee (faculty get informed about their usage/costs!).
- A user requests an older document from a journal their institution never subscribed to. The link resolver extracts the article metadata (importantly, the DOI, for use with DOAI) and checks all relevant repositories, then ILL. If both fail, the user is asked whether they want to send a copy request to the author (single click) or download the individual article and pay the fee (faculty get informed about their usage/costs!).
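The fallback chain shared by all three use cases can be sketched in a few lines. This is a hypothetical illustration, not a real link-resolver API: the service lookups (LOCKSS/Portico, repositories, ILL) are stubbed here as functions that return a URL or nothing, and all names and the example DOI are made up.

```python
from typing import Callable, Optional

# A lookup stands in for one access route (LOCKSS/Portico, repositories,
# ILL, ...); it takes a DOI and returns a URL, or None if it cannot serve
# the document. None of these names correspond to an actual API.
Lookup = Callable[[str], Optional[str]]

def resolve(doi: str, sources: list[tuple[str, Lookup]]) -> tuple[str, Optional[str]]:
    """Try each access route in order; return (route, url) on the first hit.

    If no route serves the document, fall back to asking the user
    (copy request to the author, or a paid individual download).
    """
    for route, lookup in sources:
        url = lookup(doi)
        if url is not None:
            return route, url
    return "ask_user", None

# Example: only the repository lookup succeeds, and only for one made-up DOI.
sources = [
    ("lockss_portico", lambda doi: None),
    ("repositories", lambda doi: f"https://repo.example.org/{doi}"
                     if doi == "10.1234/demo" else None),
    ("ill", lambda doi: None),
]

print(resolve("10.1234/demo", sources))   # ('repositories', 'https://repo.example.org/10.1234/demo')
print(resolve("10.5678/other", sources))  # ('ask_user', None)
```

The point of the ordered list is that each cheaper or faster route masks the more expensive ones below it, so the paid individual download (or the copy request to the author) is only ever reached when everything else has failed.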