bjoern.brembs.blog

The blog of neurobiologist Björn Brembs

Nov22

Prioritizing academic publishers

In: science politics • Tags: competition, lobbyism, markets, procurement rules, substitutability

The debate over how publishers spend the large “non-publication costs” (Fig. 1) that they incur and that academic libraries, for the most part, are funding has been going on for some time now. Above and beyond the cost items we discuss in our paper on publication costs, it has been established that investments in surveillance technology are also part of the publisher spending that academic libraries finance. In an informal brief produced by the German GFF, the legal expert who authored it describes a situation with clear indications that one cost item we discuss in the paper has been particularly detrimental for the scholarly community and the taxpayer: lobbyism.

Fig. 1: More than half of an average scholarly article price goes towards non-publication costs. Source. The question is: what are the academic publishers spending this revenue on?

Even before Franz Ingelfinger coined his eponymous rule that every scholarly article may only be published in a single journal, scholarly journals had been a collection of monopolies. Every article is available from only a single source, so there is no market and no competition. This lack of substitutability has not only been the main reason for exempting scholarly journal subscriptions from procurement rules (the “single/sole source exemption”), but is of course also behind the supra-inflationary price rises to the detriment of the taxpayers funding public research and teaching institutions (Fig. 2).

Fig. 2. Supra-inflationary price increases for academic journal subscriptions

When the late Jon Tennant and I filed our formal complaint with the European Commission in 2018, in which we detailed how scholarly journal publishing was not a market but a collection of small monopolies, we had no idea that the EC was already well aware of that fact and saw nothing wrong with it. In fact, their reply at the time surprised us: it indicated that the EC concurred with our description of scholarly journals as collections of monopolies, but saw levers for regulation/mitigation elsewhere.

Today, I have been privy to the informal brief written by a legal expert of the GFF mentioned above. It cites two prior EC decisions from 2003 and 2015 in which the EC had already acknowledged the lack of a genuine market due to the lack of substitutability (the reply to our complaint is thus just one in a long list of documents acknowledging the lack of competition in scholarly publishing). In these documents, the EC writes, for instance:

In particular, from a demand-side point of view, it is rare that two different publications can be viewed as perfect substitutes, as there are differences in the coverage, comprehensiveness and content provided. Therefore, in terms of functional interchangeability, two different publications could hardly be regarded as substitutable by the end-users, the readers. On that basis, the Commission found that consumers will rarely substitute one publication for another following a change in their relative prices

and

Publications for different academic subjects are clearly not substitutable from the reader’s point of view. Even within a given discipline, there may be little demand side substitution from the point of view of the individual academic between different publications.

and

In this case, a strict demand approach would lead to the definition of a multitude of relevant markets of imprecise boundaries and small dimensions.

“Small dimensions” of course means that every article would constitute its own market. The term “demand-side” here covers both the readers, i.e., academics, and the institutions paying for the journals, i.e., libraries, while “supply-side” refers to the publishers:

The results of the market investigation in the present case confirmed the relevance of supply-side considerations for the definition of the relevant product market in the academic publishing sector. Publications in different academic subjects are indeed not substitutable from the readers’ perspective. However, many academic publishers appear to be active across most of the possible segmentations of the market, and offer publications covering several disciplines

The quote here is an example of how the EC is well aware of the conflicting interests between readers and libraries on the one hand (demand-side) and publishers (supply-side) on the other, while at the same time expressing a clear prioritization (“confirmed the relevance”) of the interests of the supply-side over the interests of the demand-side. The dysfunctionality of the current situation for readers and libraries is understood, acknowledged and dismissed by the EC as “not relevant” – very similar to the reply we received for our formal complaint. In this particular quote, a fig-leaf is offered by stating that the big publishers cover many scholarly fields, leading to each library having contracts with several publishers, giving the superficial impression that there would be several suppliers in a “supply-side” market. The sentence just prior, however, makes it clear that this is, in fact, not really a genuine market, but one that exists only on paper, solely for regulatory purposes.

The GFF legal expert expressed his assessment that courts would be unlikely to challenge these EC market definitions as the definitions are seen as the lesser of two evils: explicit confirmation of the publisher monopolies would entail the loss of the market and with it loss of regulatory power. In other words, if the publishers were not operating within a market, they would also not be subject to regulatory market oversight. The consequence would be completely unregulated mergers between the most dominant players, which is seen as worse than the status quo.

His assessment is not changed by the “single source exemption” from procurement rules: as procurement rules and anti-trust regulations fall under separate jurisdictions, the fact that spending rules treat academic publishers as monopolists while anti-trust regulators do not may be a logical contradiction, but it is not a legal one.

Thus, there are multiple reasons why the EC chose to merely acknowledge the interests of the scholarly community and instead define markets from the perspective of publishers. At this point, it is impossible to tell how much each reason has influenced the authors of these documents. It may not even be all that important to find this out in order to inform the scholarly community of the implications of this legal assessment. As I see it, there are two main lessons to be drawn from this legal analysis:

  1. If the scholarly community strives to achieve a genuine publishing market with competition, we have to design it ourselves. Politicians or other decision-makers outside of scholarship will not perform this task for us; there is not going to be any help from outside. In the absence of a clear market alternative, regulators prefer a group of relatively small monopolists over a few large ones or even a single monopolist. The scholarly community is the only actor that can provide this missing alternative.
  2. Among the “non-publication costs” is a sizeable chunk of money that is being used in various legislatures around the globe to convince legislators and regulatory bodies that the interests of the academic publishing corporations outweigh the interests of the taxpayer. This constitutes yet another example of how the scholarly community provides funds to publishers which then get used against the interests of the scholarly community.

From these lessons follows a clear strategic way forward: if the scholarly community wants to escape from the parasitic relationship with these corporations, it needs to create a genuine market alternative, such that regulators need not fear the complete loss of market regulation. Luckily, there now are ways to create such a market, and it is fully within the powers of the scholarly community to do so on its own. For this market to become a market of service providers, the scholarly content needs to come back under the control of the scholarly community. Once this market exists, the scholarly community must lobby the relevant bodies to ensure both that “demand-side” interests receive at least the same “relevance” as “supply-side” interests and that the “single source exemption” from procurement rules is annulled for scholarly publishers.

Posted on November 22, 2021 at 16:29 2 Comments
Oct08

The trinity of failures

In: science politics • Tags: infrastructure, journals, open science, publishing, replacement
More and more experts are calling for the broken and destructive academic journal system to be replaced with modern solutions. This post summarizes why and how this task can now be accomplished. It was first published in German on the blog of journalist Jan-Martin Wiarda.

Front cover of the now-vanished Australasian Journal of Bone & Joint Medicine. Source: scan from the-scientist.com website

SOMETIMES PAPER IS USEFUL: The gritty scan above from The Scientist’s website is one of the few traces left of the Australasian Journal of Bone & Joint Medicine. The journal was part of a bundle of fake journals that the academic publisher Elsevier offered as peer-reviewed journals, but which were actually paid advertisements – including for the drug Vioxx from the pharmaceutical company Merck, which was withdrawn from the market after unreported deaths surfaced and were litigated. 

The nine fake journals are just one example of decades of exploitation of science by academic publishers: from their attempts to either criminalize Open Access via the Research Works Act or discredit it with PR, to fighting green open access, to going behind the backs of negotiating partners, or, currently, surveilling scientists by installing tracking technologies on their publishing platforms – as well as constructing a new Silk Road of Science Communication through collaboration with Chinese authorities. At a time when the political culture (and tribal) wars have spilled over into science and trust in the honesty and reliability of research has become more important than ever, the scholarly publication system through which science enters the public sphere is becoming more and more dubious.

Perhaps not entirely coincidentally, for the biggest publishers, their publications have played only a relatively minor role for about ten years now. Long-term pricing schemes have provided them with steady revenues that exceed publication costs by around a factor of ten. This windfall has been filling their war chests for numerous acquisitions that have enabled the publishers to rebrand themselves as data analytics businesses, analogous to the large Internet companies. The purchased tools now span the entire research life cycle: databases, electronic laboratory notebooks, analysis tools, authoring systems and bibliometrics are linked with the publishing platforms to create a “live trap for researchers” from which data can be continuously extracted and monetized.

Ireland’s Science Foundation, for example, has just signed Elsevier to analyze its future direction, and no one asked where the “vast array of data” that the company brought with it actually came from.

A trinity of failures: reliability, affordability, functionality

This development leads to a trinity of failures in the academic publication infrastructures:

  • The systemic, pernicious incentives of publish-or-perish fuel the replication crisis in many disciplines: publications in the most renowned journals promote careers, but at the same time the most unreliable science is published there.

  • The commercialization of Open Access turns the long-standing serials crisis into an article crisis: costs keep rising inexorably if only the direction of payment is reversed from reading to publishing while the system otherwise remains under the regime of the monopolists (which have long since outgrown the status of mere publishers) and continues to generate profits free from competition.

  • The focus of the ex-publishers on user data, of researchers on publications, and of institutions on cash flow and rankings leads to a functionality crisis in which some of the most basic digital functionalities remain out of reach for research objects.

We can now switch at any time

Precisely because the scientific journals are at the center of this trinity of failures, experts have been calling for a modernization of information infrastructures in science for at least 15 years. In fact, the first calls for radical reform can be traced all the way back to the late 1990s. Several possible alternatives are now available, such that we could switch at any time.

Out of these alternatives, let’s pick the publication platform “Open Research Europe” (ORE) as just one example. Researchers funded by the European Union publish in ORE free of charge and open access. ORE is a platform owned by the EU and not by a publisher. The EU can replace the publication service provider with a different one if it is not satisfied with the provider. This creates real competition for publication services, something that is impossible in the reputation-based journal system. ORE is part of the “Open Research Central” (ORC) service, in which publication platforms such as ORE are aggregated into a common literature corpus, so that all institutions can, in principle, find their place there.

The academic publishers were already a platform economy long before the well-known Internet companies adapted this principle in order to secure their dominant market positions. The legacy publishers, too, abuse their unassailable position in the scientific system, which links publications with the collection of prestige metrics. Long before services like ORC were developed, it was therefore clear that a journal replacement would have to make a clean break with this logic of the platform economy: it locks in scientific communication in a similar way to how WhatsApp and other messengers lock in private communication – even though, before these messengers, there were already protocols and standards for e-mail that enabled switching between service providers. The recent Facebook downtime is another reminder that protocols and standards are superior to platforms.

Consequently, the traditional journals must be replaced by a decentralized, resilient, evolvable network in which everything is connected by open standards and protocols; that much has been clear for at least two decades now. Such a network allows for seamless switching from one provider to another under the control of the scientific community and would allow the journal article, as the only scientific output that “counts”, to fade into the background. Instead, the focus would then be on the interwoven web of text, data and code, which would provide a much better orientation function for scientific knowledge than the journals have ever been able to do. The concept behind services like ORC is aimed precisely at developing such a decentralized information infrastructure.

It can work without slaughter

Academic journals are far from such an infrastructure and are therefore either considered already dead, or one would like to ensure that they will be soon: “Slaying the journals” is the slogan of one initiative, and in view of the painful experiences with journals over the decades, this version of “eat the rich” will certainly sound attractive to a number of scholars.

Not invoking any kind of slaughter, we have recently posted our own detailed proposal for a solution that contains two novel approaches for tackling the trinity of failures:

  • First, we propose that research funders expand the existing minimum requirements for the infrastructure of recipient institutions, namely to include the decentralized information infrastructures mentioned above. In that way, the journal alternatives become the staple of good (and ultimately more prestigious) scientific practice. The decades-long standstill in digital infrastructures at academic institutions shows that these modernization incentives are clearly needed in order to phase out funding of the overpriced and outdated journals and instead invest in modern technologies.
  • Second, we suggest establishing open standards and open source norms that guarantee an efficiently functioning market and prevent further monopolies on scientific tools or output. This is elementary, because so far the solutions developed within the scientific community have been bought up by the incumbent players, so that no alternative ecosystems can develop. Such open standards, like the FAIR principles, already exist and promise to ensure scientific quality and good practice. In order to expand, secure and enforce these standards and norms, we propose setting up a standardization body like the World Wide Web Consortium (W3C), under the governance of the scientific community, to enable the development of open scientific infrastructures that support the entire research process.

In the long-standing discussion about a replacement for academic journals, opposition has traditionally been voiced by pointing to journal prestige, the guidance it offers, and the claim that this need could not be met without journals. However, this argument misses two essential aspects. First, the reputation of the journals lacks any empirical basis and therefore does not represent a valid source for evaluations. To combat the wide-spread misuse of journal prestige, initiatives have sprung up such as the San Francisco Declaration on Research Assessment, to which both the German funder DFG and the European Research Council are now signatories. Second, we know from the data that the ex-publishers sell that there is, of course, an endless number of quantitative and qualitative evaluation options that are based entirely on the researchers’ daily interactions with their digital research objects. However, these fine-grained data are not in the hands of science, but in the possession of the ex-publishers, who, as the chairwoman of the German Council for Information Infrastructures, Petra Gehring, recently wrote, “consider the entire intellectual cycle of publicly funded and hence free research, as their future product”. Soon, such algorithmic employment decisions may make the misuse of journal rank seem benign in comparison.

Will scholarship keep repeating the same mistakes?

The Irish example can serve to illustrate Gehring’s tough statement: “The globalized struggle for profitable markets that these new forms of value creation offer, was not recognized.” It would be downright absurd for publicly funded science to now repeat with research and user data the same mistake it made with publications: first the taxpayer pays for the production and then again when science buys back its own product. If the current system were to be continued, scholars would again be at the mercy of the corporations, and neither could the funding agencies ensure that society benefits from the knowledge and the potential for added value in the data.

The current debate about digital sovereignty makes no sense without a decisive redirection of funds away from journals and towards open infrastructures and their development into a basis for building reputation. After 30 years of stagnation, this redirection would finally put an end to the trinity of failures in the academic publication system.

I am grateful to Renke Siems who drafted the first version of the German article and who insisted on more diplomatic language than I usually use.

Posted on October 8, 2021 at 13:56 Comments Off on The trinity of failures
Sep23

Algorithmic employment decisions in academia?

In: science politics • Tags: employment, publishers, surveillance, tracking

According to a recent study, employee surveillance is rampant in today’s corporate work environment. This study documents how, often under the pretense of cybersecurity or risk analysis (sort of like academic publishers, actually), companies analyze the behavioral data they collect from their employees to help them make “evidence-led”, i.e., algorithmic, employment decisions. Some of the tools are used to assign risk scores to employees and to categorize them into risk groups.


One company in the ‘workplace analytics’ business is selling a product called ‘OccupEye’ that tracks employees with movement sensors mounted under their desks. They have partnered with network giant Cisco, which uses WiFi routers to derive movement patterns within rooms and buildings. Besides movement detection and text analysis, mouse-clicks and keystrokes (also as suggested by academic publishers) can be analyzed and incorporated into the algorithms as well, in order to, e.g., classify employees as “Low Performer”, “Good Performer” or “High Performer”, as Zalando is doing (p. 135 in the study).

Surely, academia would never use performance metrics for their hire and fire decisions? OK, bad joke.

Those academics who lament that not being allowed to use journal rank and other metrics to evaluate other researchers would deprive them of objective means to distinguish between applicants may very soon have all their wishes fulfilled and then some.

Ever since we calculated that only a very small percentage of what our libraries are paying publishers goes towards publishing costs and profits, many (including us) have wondered where the rest of the money is going. Interestingly, Elsevier’s Paul Abrahams, who commented in the discussion, did not mention where their non-publishing costs accrue, but stated that Elsevier’s overall rejection rate was 77%. Using current market rates for publishing services and assuming that Elsevier does not spend more on publishing than current market rates, I calculated that publishing an average article likely costs Elsevier US$574.74 (our scenario B), i.e., very close to the estimated average per-article publishing costs for the industry.


If Elsevier’s revenue per article also were to clock in at the industry average of around US$4,000, then each article would provide Elsevier with about US$1,200 in profits, given their profit margins of just above 30%. If our industry-average calculations really were to match the numbers of industry giant Elsevier that well, one may wonder what Elsevier is doing with the remaining roughly US$2,200 per article that goes neither towards publishing nor towards profits.
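
For those who want to see the arithmetic spelled out, here is a minimal back-of-the-envelope sketch using the rounded figures quoted above (these are the estimates from this post, not audited numbers):

```python
# Back-of-the-envelope sketch of the per-article accounting discussed above.
# All figures are the rounded estimates quoted in this post.

revenue_per_article = 4000.00   # assumed industry-average revenue per article (US$)
publishing_cost     = 574.74    # estimated publishing cost per article (scenario B)
profit_margin       = 0.30      # profit margin of "just above 30%"

profit_per_article = revenue_per_article * profit_margin                      # ~US$1,200
unexplained_spend  = revenue_per_article - publishing_cost - profit_per_article

print(f"profit per article:      ${profit_per_article:,.0f}")
print(f"unexplained per article: ${unexplained_spend:,.0f}")                  # ~US$2,200
```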

Of course, we can only guess, but for clues, one may look at Elsevier’s tweet from yesterday:

https://twitter.com/ElsevierConnect/status/1440632982990049297

Elsevier states that “knowledge and analytics” is what they do today – publishing is only mentioned as their “roots” in the past, not something that is relevant today or, let alone, in the future. The link Elsevier promotes in that tweet points to a page on their own site where it also says: “Elsevier has really positioned itself as a data science company” and Elsevier adds right below it: “Evidence-led decisions”.


What sort of “data analytics” is used for what kind of “evidence-led decisions”, you may ask? I’m getting to that.

The acquisitions of Elsevier parent company RELX confirm this transition to “data analytics” and “evidence-led decisions”:

source: https://knowledgegap.org/index.php/sub-projects/rent-seeking-and-financialization-of-the-academic-publishing-industry/preliminary-findings/

For 20 years now, RELX and Elsevier have invested in academic data analytics. But what kind of data are they analyzing, and for which “evidence-led decisions”? Funny you should ask: they are analyzing your data, of course! Here is a graph that shows what Elsevier has been buying over these last two decades while you (like me) may have been too busy fighting for #openaccess to pay attention (the grey box denotes plain publishing):

source: https://knowledgegap.org/index.php/sub-projects/rent-seeking-and-financialization-of-the-academic-publishing-industry/preliminary-findings/

Whenever you interact with any of these tools, your user data is being collected and analyzed by Elsevier. After all, this is their core business, as they themselves state. Elsevier’s parent RELX is one of the largest data brokers on the planet and owns, among others, Lexis Nexis. This means that RELX can now combine your professional data with your private data and sell it. For instance, Lexis Nexis is selling their collected data and analytics to law enforcement agencies that are not allowed to collect such data themselves. Why shouldn’t Elsevier/RELX sell these data right back to academic institutions from where at least the professional part of these data has been harvested?

That may sound dystopian to some, but if you are a dean or a provost or a member of a search or tenure committee, the above may actually sound quite alluring. In that case, you may perhaps feel relieved that very soon it will become possible to license products that allow you to use not just publication records, but real-time professional and private user data, to automate the classification of applicants and help you make “evidence-led” employment or funding decisions. No more need to invite anybody for interviews or be surprised by the political positions of future faculty. Moreover, you can rest assured that the acquisition, development and implementation of these tools were sponsored by the taxpayer money your institution has been overspending on publishers in the past. Obviously, for you and people like you, this was money well spent! This nice fact ensures that, just like with the publications of the last decades, your institution will be paying twice for such useful tools: once for their acquisition/implementation and then again when your institution actually licenses them.

If you happen to find these possibilities enticing, all you need to do is – nothing: just let your institution keep overpaying publishers and soon the rebate offers for faculty surveillance tools will start flowing in. After all, every institution in the Netherlands has already licensed such a tool in exchange for open access publishing with, you guessed it, Elsevier. So all you need to do is sit still and wait for big brother to swing by your revered academic institution and say ‘hello’ to you as well. And just in case you think only Elsevier is using our payments to expand their vertical integration, think again.

This post is probably best closed by quoting the Science Foundation Ireland who, after the Netherlands, have also partnered with Elsevier for similar reasons, it seems:

We have partnered with Elsevier, which is probably best known as a scientific publisher. They have access to a vast array of data, and this will help us to establish where Ireland is good and nearly good in emergent and convergent technologies. It will also help us decide on the actions we need to take to make us really good. For example, we might see a certain field where we are nearly good at present and find that we need to recruit top talent or run new funding calls to support it.

Posted on September 23, 2021 at 11:45 2 Comments
Jul19

What sinister time machine?

In: science politics • Tags: infrastructure, time travel, transformative agreements

A recent OASPA guest post reminded me of something I have been wondering about for several years now. What sinister time travel device is keeping some sections of the scholarly ecosystem from leaving the past and coming back to the present? The guest post merely stands in for many international examples of what most of the discussions about “Transformative Agreements” revolve around, and only serves as the latest prompt for me to finally jot these observations down. There are three things these discussions on transformative agreements have in common, and they crop up time and again:

  1. They all seem to treat access to the literature as if it still was a problem. Before ~2013 or so, it still was a major problem. It’s 2021 now and access is not such a big deal any more.
  2. They mostly seem to implicitly assume that scholarly articles must be expensive. That may have been the case for subscription articles. It’s 2021 now and one can actually look up prices online and find out that we’ve been overpaying by a factor of around ten.
  3. These discussions also universally seem to tacitly endorse the perspective that the journals they want to ‘transition’ are somehow actually worth transitioning. That may have been a quaint but admirable position before the so-called ‘replication crisis’ emerged, but today it is 2021 and we know our journals are a major contributing factor to unreliability.

So, to me, it appears some pernicious time machine must be keeping them from entering the present. Which bold superheroine can travel back in time and bring them home into the present?

If participants in these discussions could be brought to acknowledge that 2021 is actually a thing that is happening right now, certainly they would be headed in a completely different direction, one would hope?

Posted on July 19, 2021 at 13:46 2 Comments
May12

Minimizing the collective action problem

In: science politics • Tags: collective action, funders, funding agencies, infrastructure, mandates

Most academics would agree that the way scholarship is done today is, in the broadest, most general terms, in dire need of modernization. Problems abound, from counter-productive incentives, inefficiencies and lack of reproducibility, to an overemphasis on competition at the expense of cooperation, or a technically antiquated digital infrastructure that charges too much and provides only few useful functionalities. Many a scholar has expressed the issue in terms of a social dilemma: the proximate interests of each individual actor, be it institution or individual, are not aligned with the distal interests of the collective knowledge creation process. In other words, collective action is needed, where multiple actors act collectively in order to mitigate the potential detrimental effects for each individual actor. This insight has been articulated for various aspects of scholarly reform, be it our literature, research quality, or scholarship in general. Collective action solutions can take various forms, from a simple aspiration such as “if only everybody would do X” to a contractual agreement among individuals to form a collective with a specific purpose, goals and means or strategies to achieve them. They all have in common that a sufficient number of actors needs to act more or less simultaneously in order to overcome the proximate obstacles.

Most of the numbers to express the size of this collective action problem are not so difficult to obtain. The main actors in this problem are of course individual researchers, but also the institutions at which they work and here in particular their libraries which hold the purse strings for much of the digital infrastructure that is in need of reform. Finally, research funding agencies (funders) not only provide key incentives for researchers but also decide which institutions are eligible for funding.

Citing OECD statistics, the 2018 STM report lists about 7.1 million full-time equivalent researchers globally. Given that many of these will be part-time employees and that sites like ResearchGate list about 17 million users, adding at least 50% to these 7 million, for a total of about 11 million currently active individual researchers, is probably required for a lower-bound estimate.

The number of universities is likely a reasonable approximation for the number of libraries. Webometrics lists about 30,000 universities globally.

The number of funding agencies is less easy to estimate. There is no list of funding agencies. The organization of research funding varies between countries such that research funds come both directly from the research institution and from funders that are often national, but also international in reach. Funders can be governmental or philanthropic. With about 200 countries in the world and each country having at least one, possibly more, funding agencies, 400-500 funding agencies with at least roughly comparable influence over institutions and individual researchers is probably a reasonable estimate.

Taken together, we are looking at more than ten million researchers and 30,000 institutions, but not even 1,000 funding agencies. These numbers mainly denote relative sizes, as the absolute size of the collective only marks the end point of the collective action problem. The number required to start collective action in a way that leads to pervasive reform is not known. Given the massive disparities between research-intensive countries and the rest of the world, whatever this theoretical fraction of the collective joining the “early movers” may be, it will likely be further reduced if the “early movers” were largely composed of actors from the leading research nations. Thus, if one considered only the most ‘influential’ 10% of all actors as needing to be involved for a snowball effect of change to sweep the globe, the numbers change to one million researchers, three thousand institutions or just 40 funders. Even at such dramatically reduced sizes, both individual researchers and their institutions appear as unreasonably large targets for any meaningful short- to mid-term impact. Funders, however, not only represent a manageable size for a collective action problem, they ought also to have an intrinsic interest in their money being spent with the interests of the public in mind, and not necessarily those of the individual researchers as long as these remain at odds with those of the public that provides the funds.
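
To make these relative sizes concrete, here is a tiny sketch of the 10% arithmetic from the paragraph above (the actor counts are the rough estimates used in this post, and the 10% threshold is purely hypothetical):

```python
# Rough actor counts from this post (estimates, not precise figures).
actors = {
    "researchers":  11_000_000,
    "institutions":     30_000,
    "funders":             450,   # mid-point of the 400-500 estimate
}

# Hypothetical threshold: only the most 'influential' 10% of each group.
influential_fraction = 0.10

for group, total in actors.items():
    needed = int(total * influential_fraction)
    # ~1,100,000 researchers, ~3,000 institutions, ~45 funders
    # (the post rounds the funder figure down to "just 40").
    print(f"{group:>12}: {total:>11,} total -> {needed:>9,} needed")
```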

All of these numbers corroborate a well-known realization, flippantly formulated below:

What could funders do in order to initiate collective reform? Clearly, the mission of funding agencies is to fund research projects, not infrastructure. Just spending their money on infrastructure instead of research projects would be contrary to their mission and would further deteriorate funding rates that are already precipitously low in many places. If anything, funding agencies should increase their funding rates, not push them even further below already unhealthy levels. Funding agencies do, however, have a range of orthogonal options at their disposal to shape research policies at institutions. For instance, funders set very specific eligibility criteria that institutions need to fulfill in order for their members to receive grant funding. If these eligibility criteria were to exclude institutions which fund or otherwise support counter-productive incentives, irreproducible science or wasteful and dysfunctional infrastructure, these institutions would be incentivized to become eligible again. Funding agencies could thus use their eligibility criteria to reward desired institutional policies and penalize counter-productive ones. Historically, such eligibility criteria have been used very effectively to ensure minimum infrastructure standards. Today, like seemingly everything else in this domain, these criteria urgently need an overhaul. Some funders are already updating their policies in this regard, such as the DFG, the Wellcome Trust or the Templeton World Charity Foundation.

The question remains whether other funding agencies are aware of their power and willing to initiate the procedures required for modernizing their eligibility criteria. The formation of initiatives such as “Plan S” shows that the general understanding of the need for modernization, together with a willingness for collective action, is present in some of these funders. The general feasibility of such a “Plan I” (for infrastructure) has also already been acknowledged. With understanding, general willingness and feasibility established, it remains to be seen whether funding agencies consider the current threats to publicly funded science serious and urgent enough to warrant the effort of improving their eligibility criteria.

Thus, institutions need to modernize their infrastructure such that researchers are enabled to modernize their scholarship. They have now had more than 30 years for this modernization and neither of them has acted. At this point it is fair to assume, barring some major catastrophe forcing their hands, that such modernization is not going to magically appear within the next three decades, either. Funders, therefore, are in a position to incentivize this long overdue modernization, not by mandating 11 million individuals, but by mandating institutions to finally implement it – a modernization which institutions and hence researchers have been too complacent or too reticent to tackle.

If we are faced with a collective action problem, as I would tend to agree, and the size of the collective is the major determinant for effective problem solving, then it is a short step to realize that funders are in a uniquely suited position to start solving this collective action problem. Conversely, then, it is only legitimate to question the motives of those who seek to make the collective action problem unnecessarily difficult by advocating targeting individual researchers or institutions. What could possibly be the benefit of making the collective action problem numerically more difficult to solve?

Posted on May 12, 2021 at 18:02 4 Comments
Mar10

Scholarly publishing in three cartoons

In: science politics • Tags: cartoon, humor, infrastructure, publishing

The academic journal publishing system sure feels all too often a bit like a sinking boat:

  • we have a reproducibility leak
  • an affordability leak
  • a functionality leak
  • a data leak
  • a code leak
  • an interoperability leak
  • a discoverability leak
  • a peer-review leak
  • a long-term preservation leak
  • a link rot leak
  • an evaluation/assessment leak
  • a data visualization leak
  • …
  • …
  • …
  • and even a tiny access leak that still remains after 30 years of trying to fix it.

While some people are trying to keep the boat afloat, most are just happy if they stay dry. Without taking the boat on land to fix it, the leaks just keep springing up, making it harder and harder to bail the water out.

https://i.imgur.com/POng9wL.jpeg

A platform solution (i.e., what people called a “Global Open Archive” in 2010 or what is now often called a “Next Generation Repository”, one current implementation of which could be, e.g., Open Research Central) that would address all of these problems would feel a little bit like a fighter jet compared to that sinking boat:

https://www.militaryimages.net/media/eurofighter-typhoon-cutaway.103882/full?d=1521516966

Alas, there are many academics who say that we can convert our sinking boat into the jet just by bailing faster, tackling one leak at a time, with some plywood, duct tape, only the wet staff and without ever needing to take the boat out of the water. They may be correct, but I, for one, am not wasting my time on such odds.

Making matters worse, most academics are precariously employed (or their dependent co-workers are), leaving the leak-fixing and jet-building to the privileged few:

https://pbs.twimg.com/media/DTrAG6XW4AUrDTe.jpg

Right now, it looks as if the boat needs to sink completely before we will ever start working on the jet.

Posted on March 10, 2021 at 10:22 2 Comments
Jan06

Can funders mandate institutions?

In: science politics • Tags: funders, mandates, Plan I

For a few years now I have been arguing that, in order to accomplish change in scholarly infrastructure, it is likely an inefficient plan for funding agencies to mandate the least powerful players in the game, the authors (i.e., their grant recipients). The legacy publishing system still exists because institutions pay for its components, the publishers. As for-profit corporations, as long as the money is flowing, publishers could not care less what is on the pages they publish, making author behavior all but irrelevant. Moreover, so-called Big Deal packages (both for subscriptions and for open-access publishing) cement the status of publishers, irrespective of what portfolio they are currently offering.

In this situation, the only way in which forcing authors to publish in certain ways makes any sense at all is politically: there is little authors can do other than comply, as they have no power and need the funding agencies in order to be able to do their research. So from this perspective, authors are the weakest link and the ones with the least capability to raise any effective objections other than venting online. As we have seen from previous mandates, most notably the NIH open access mandate from 2008, the amount of change to be gained from such mandates is rather modest: some percentage of articles may become accessible to the non-subscribing public at some point, while the rest of the problems we have accumulated remain firmly cemented in place. As described before, access to the literature currently ranks among the least worrisome problems we have in scholarship.

This means that while in 2008 the increase in access to a portion of the medical literature brought about by the NIH mandate may have been a significant step forward, today, more than a decade later and with no major paywall obstacles to speak of any more, one may wonder whether a marginal improvement on a nearly negligible problem is worth all the effort currently put into mandates such as those of, e.g., Plan S.

Especially when looking at the kind of change that could be brought about by shifting the massive funds institutions pay to publishers (about ten times the actual publishing costs), it is straightforward to ask whether Plan S is more Symbolpolitik (a German term for symbolic politics that merely shows off its attempts at making policy rather than achieving actual change) than Realpolitik.

When confronted with this question on Twitter, one of the main designers of Plan S, President of Science Europe Marc Schiltz, first tried to deny that funding agencies even have the capability of mandating institutions to do anything:

I then explained that, at least in Germany, the DFG, for instance, mandates that institutions follow good scientific practice and that the DFG guidelines for good scientific practice have to be incorporated into any institution’s policies in a legally binding manner if their applications are to be considered for funding. Above and beyond such general mandates, there can also be specific mandates, such as those for Next Generation Sequencing machines, where only researchers whose institutions have the specifically required infrastructure can apply. I had talked to a program director at the NSF, who confirmed that similar requirements were in place at the NSF and NIH in the US as well.

Confronted with such a falsification of his initial claim, Marc Schiltz then retreated to the position that not all funding agencies are like the DFG, NIH or NSF:

And then asserted that he couldn’t even think of a way in which funding agencies could prevent institutions from paying for legacy publishing infrastructure, even if funding agencies had the power to mandate institutions to invest in modern infrastructure:

Of course, once the general concept of funding agencies deciding which institutions are worthy of getting their money is established, it is technically easy to come up with ways in which funding agencies could also influence institutional decisions about funding for legacy systems.

For instance, it would be technically easy (but practically more or less difficult depending on the funder) to influence institutional decisions with regard to subscription payments. Ulrich’s serials directory currently lists over 30k peer-reviewed scholarly journals. Funders could cap how many subscriptions an institution may hold before it becomes ineligible for funding. Or, in places where overhead payments are under the control of the funder, they could start with a softer approach, e.g., cut all overhead payments to institutions that do not report the number of their journal subscriptions, or whose subscriptions exceed, say, 10k journals. Institutions with fewer than 10k subscriptions but more than 5k get 50% of the regular overhead, those with more than 500 but fewer than 5k get 75% of the overhead, those with fewer than 500 get 90%, and only those without any subscriptions get the full overhead payment. A toy sketch of such a tiered rule is shown below.
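
The following sketch is purely illustrative: the thresholds and percentages are just the example numbers from the paragraph above, not any actual funder policy.

```python
# Toy sketch of the tiered overhead rule described above. Thresholds and
# percentages are the illustrative numbers from this post, not a real policy.

def overhead_fraction(subscriptions_reported: bool, n_subscriptions: int) -> float:
    """Return the fraction of the regular overhead an institution would receive."""
    if not subscriptions_reported or n_subscriptions > 10_000:
        return 0.0    # no report, or more than 10k subscriptions: no overhead
    if n_subscriptions > 5_000:
        return 0.5    # between 5k and 10k subscriptions
    if n_subscriptions >= 500:
        return 0.75   # between 500 and 5k subscriptions
    if n_subscriptions > 0:
        return 0.9    # fewer than 500 subscriptions
    return 1.0        # no subscriptions at all: full overhead

print(overhead_fraction(True, 7_200))   # 0.5
print(overhead_fraction(True, 0))       # 1.0
```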

APC payments can be treated analogously, of course, either by looking at the volume, i.e., the number of articles, or at the amount spent. Likewise, institutions that do not report any such figures in a verifiable manner become ineligible for funding.

A step down from that, it would be easier for funding agencies to ensure that institutions invest in modern digital infrastructure by making institutions ineligible for funding if they do not have, e.g., a platform where text/narrative, data and code are taken care of and interoperate. Most, if not all, funders already have an eligibility requirement for “basic infrastructure”, so they would only need to update that and specify what qualifies as “basic infrastructure” (surely, this would be different today than, say, 30 years ago). User uptake of this basic infrastructure is then ensured the Liège way, i.e., by funders simply ignoring any research output that is not on such a platform when assessing grant proposals. Such a procedure has proven very effective for over a decade now at all the institutions that have implemented one. Centralized access to such a decentralized scholarly library could be established via technologies such as Open Research Central, or analogous implementations.

Easier still, funders could decide that only researchers at institutions that are DORA signatories are eligible for funding. Not at a DORA institution? Too bad, don’t even bother to apply. That would be a truly massive blow to the pernicious power of journal rank! This is also one of the practically easiest options, as some funding agencies have already implemented such mandates. Funders such as Wellcome or the Templeton World Charity Foundation already require their recipient institutions to sign DORA or an equivalent:

“All Wellcome-funded organisations must also publicly […] sign the San Francisco Declaration on Research Assessment, Leiden Manifesto or equivalent.”
“We expect [Templeton World Charity Foundation] Grantees that are research institutions to have a statement of commitment to implementing the DORA principles on their website – this should be prominent and accessible”

Why are the remaining funding agencies not following suit? Why do most funding agencies still fund institutions that have not signed DORA?

By now, it is becoming quite clear that actual change in infrastructure reform requires neither new technology nor additional funds, and it takes neither the mind of an Einstein nor the imagination of a Picasso to come up with ways to mandate change where recalcitrant institutions have proven, time and again over the last three decades, that voluntary change is out of the question. Of course, funding agencies may claim that it is time-consuming, practically difficult or a lot of work to implement such mandates on institutions. However, what do such excuses mean other than that these funders simply do not consider the trifecta of the main problems scholarship is facing today (reliability, affordability and functionality) important enough to make this extra effort?

In order to convince Marc Schiltz and perhaps cOAlition S (and, in consequence, other funders world-wide) that it is indeed in their power to make change happen without oppressing the least powerful, my task is now to compile a list of cOAlition S members that differentiates between those that spend their funds on any institution, no strings attached, and those funders that have implemented eligibility criteria such as those of the DFG, NSF and NIH. This list is currently under construction (thanks for any help!). So far, it looks as if the different cOAlition S funders are not nearly as different as Marc Schiltz claimed. For most funding agencies I could quickly find a page detailing the eligibility criteria or requirements. Any help in completing this effort is appreciated.

If the results do not change dramatically from what I could find, it appears that the only thing these funding organizations need to tweak is their eligibility criteria/requirements (e.g., by making their “basic infrastructure” requirements more specific), in order to accomplish change much more effectively (affecting all participating institutions at once) than with mandates on individual grant recipients. Why did they not go that route in the first place? What are they waiting for now?

UPDATE:

In something that must be exceedingly rare on Twitter, Marc Schiltz finally conceded that mandating institutions would be a feasible route and also confirmed that “soft” requirements such as only funding DORA signatory institutions would be an easier first target than, e.g., mandating institutions to stop payments to legacy publishers:

Posted on January 6, 2021 at 12:29 Comments Off on Can funders mandate institutions?
Dec09

High APCs are a feature, not a bug

In: science politics • Tags: APCs, nature, open access, publishers

There has been some outrage at the announcement that Nature is following through with their 2004 declaration of charging ~10k ($/€) in article processing charges (APCs). However, not only have these charges been 16 years in the making but the original declaration was made not on some obscure blog, but at a UK parliamentary inquiry. So nobody could rightfully claim that we couldn’t have seen this development coming from miles away.

In fact, already more than 10 years ago, such high APCs were very much welcomed, as people thought they could bring about change in scholarly publishing. Some examples, starting with Peter Suber in 2009:

As soon as we shift costs from the reader side to the author side, then, we create market pressure to keep them low enough to attract rather than deter authors.  […] precisely because high prices in an OA world would exclude authors, and not merely readers, there is a natural, market-based check on excessive prices.  BTW, I’m not saying that these market forces will keep prices within reach of all authors

Cameron Neylon, 2010:

I have heard figures of around £25,000 given as the level of author charge that would be required to sustain Cell, Nature, or Science as Open Access APC supported journals. This is usually followed by a statement to the effect “so they can’t possibly go OA because authors would never pay that much”.
[…]
If authors were forced to make a choice between the cost of publishing in these top journals versus putting that money back into their research they would choose the latter. If the customer actually had to make the choice to pay the true costs of publishing in these journals, they wouldn’t.
[…]
Subscription charges as a business model have allowed an appallingly wasteful situation to continue unchecked because authors can pretend that there is no difference in cost to where they publish, they accept that premium offerings are value for money because they don’t have to pay for them. Make them make the choice between publishing in a “top” journal vs a “quality” journal and getting another few months of postdoc time and the equation changes radically.
[…]
We need a market where the true costs are a factor in the choices of where, or indeed whether, to formally publish scholarly work.

Mike Taylor, 2012:

the bottom line is that paying at publication time is a sensible approach. It gives us what we want (freedom to use research), and provides publishers with a realistic revenue stream that, unlike subscriptions, is subject to market forces.

Stephen Pinfield, 2013:

But Gold OA is not like [subscription]. It has the potential to reintroduce genuine competition into the journal market with authors sensitive to price making choices about where they place their articles. If journals put APCs up, authors can go elsewhere and the adjustments can happen quickly.
[…]
Gold OA, on the other hand should make price changes clearer –and customers will be able to respond accordingly.

Danny Kingsley, 2014:

However the increase in the payment of APCs has awoken many researchers’ awareness of the costs of publication. An author’s reaction of surprise to the request for a US$3000 [approx. AU$3180] APC when they are contacted by a publisher provides an opportunity for the library to discuss the costs associated with publication. There is an argument that as payment for publication at an article level becomes more prevalent, it gives the researcher an opportunity to determine value for money and in some arguments this means that scholarly publishing would be a more functional market

People were so convinced that authors would be price sensitive that it was even mentioned as a problem if someone else were to pay the APCs on the authors’ behalf. Kingsley:

This is then one of the disadvantages of having centralised management of APCs – it once again quarantines the researcher from the cost of publication

Pinfield:

But there is a danger with many of the processes now being established by universities to pay for APCs on behalf of authors. These systems, which will allow payment to be made centrally often with block pre-payments to publishers, will certainly save the time of authors and therefore ought to be pursued, but they do run the risk of once again separating researchers from the realities of price in a way that could recreate some of the systemic failures of the subscription market. They need to be handled with caution.

Hindsight is always 2020. Little did people know back then that authors, when faced with a choice, would actually tend to pay the most expensive APCs they could afford, because such is the nature (pardon) of a prestige market. It is therefore not surprising to us now that rich institutions hail the 10k price tag for Nature APCs as “very attractive” and “not a problem”.

The above were just the few examples I could readily find with the help of Richard Poynder, Danny Kingsley, Cameron Neylon and Mike Taylor. The sentiments expressed there were pretty much mainstream and discussed widely even before 2009 (and you can sense that in the referenced sources). Thus, the idea that high APCs are a feature and not a bug, thought to bring competition into the market, was a driving force for gold OA for a long time. Even today, you can still find people claiming that “If we switch from subscription publishing to pay-to-publish open access, [publisher profit] margins are likely to drop to 10%-20%.” (Lenny Teytelman as late as 2019). We now see that this view was spectacularly wrong. People will pay no matter what when their livelihoods are at stake, even if it costs them their last shirt (as the German saying goes). We ought to think hard about what the consequences of having been so catastrophically wrong now have to be.

This is how Nature‘s high APCs came about. Many thought it was a good thing and kept pushing for them until Nature gave in.

Update: One of the quoted authors, Mike Taylor, has just confirmed that he still thinks 10k APCs are good and that the outrage people feel at their livelihoods being put in jeopardy is what will drive change.

Posted on December 9, 2020 at 10:52 5 Comments
Nov30

Are Nature’s APCs ‘outrageous’ or ‘very attractive’?

In: science politics • Tags: APCs, nature, open access, publishers

Last week, there was a lot of outrage at the announcement of Nature’s new pricing options for their open access articles. People took to Twitter to voice their, ahem, concern. Some examples:

There are many more that all express their outrage at the gall of Nature to charge their authors these sums; even Forbes interviewed some of them. At the same time, the people who have been paying publishers these sums for decades find that the ~10k per paper is actually “very attractive” and “not a problem”.

So which is it? Are Nature’s APCs ‘outrageous’ or are the prices ‘very attractive and not a problem’?

What is clear is that these charges are definitely not a surprise. Already back in 2004, in a Parliamentary inquiry in the UK, Nature provided testimony that they would have to charge 10-30k for a Nature paper, given their revenues at the time (i.e., their subscription and advertising income). While most people back then scoffed at the numbers and expected that no author would ever pay such fees, Nature got to work and invented a whole host of ‘lesser’ journals (i.e., Nature XYZ, as in “Nature Physics”, “Nature Genetics” and so on), which would serve several purposes at once: they increase revenue; as hand-me-down journals they keep desperate authors attached to the Nature brand; and as less selective journals they would bring down the average cost per article across the brand once it needed to go open access.
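
To see how this dilution works, here is a back-of-envelope sketch in Python. The article counts and per-article costs are invented for illustration (only the 30k figure echoes the 2004 testimony above): averaging a small number of expensive flagship articles together with a much larger volume of cheaper articles in the ‘lesser’ journals pulls the brand-wide average far below the flagship’s own per-article cost.

# Back-of-envelope illustration of cost dilution across a journal "brand".
# All numbers are invented for illustration; they are not Nature's actual
# costs or publication volumes.

flagship = {"articles": 800, "cost_per_article": 30_000}   # highly selective flagship
lesser = {"articles": 8_000, "cost_per_article": 5_000}    # less selective 'lesser' journals

total_cost = (flagship["articles"] * flagship["cost_per_article"]
              + lesser["articles"] * lesser["cost_per_article"])
total_articles = flagship["articles"] + lesser["articles"]

print(f"brand-average cost per article: {total_cost / total_articles:,.0f}")
# prints roughly 7,273: far below the flagship's 30,000 and in the vicinity of a ~10k APC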

So this year, after open access advocates, funders and the now also pandemic-stricken public had kept demanding open access for 16 years after that warning, Nature was finally ready to deliver. Due to the dilution of their costs by way of the ‘lesser’ journals, they managed to keep their APCs close to the lower bound of their 2004 estimate, despite 16 years of inflation. Given that libraries have been paying these kinds of sums for Nature journals for decades, this price tag then really is a bargain, all things considered.

Given this analysis, all the online outrage strikes me as unwarranted. While I of course agree that it should never have come this far (we ought to have realized where this was headed back in 2004!), crying foul now comes about 16 years too late. We have had plenty of time to prepare for this, more than enough time to change course or think of alternatives to the legacy publishers. And yet, nearly everybody kept pushing in the same direction anyway, when we could have known in 2004 that this was not going to end well. The people warning of such not-quite-so-unintended consequences were few and far between.

Having only gotten interested in these topics around 2007 or so myself, it took me until 2012 to understand that this kind of APC-OA was not sustainable and, indeed, would stand to make everything worse just in order to worship the sacred open access cow.

If you were even later to the party and are outraged now, direct your outrage not to Nature, who are only following external pressures, but to those who exert said pressures, such as open access advocates pushing for APC-OA and funders mandating authors to publish in such journals.

Posted on November 30, 2020 at 10:30 5 Comments
Oct26

Is the SNSI the new PRISM?

In: science politics • Tags: PRISM, publishers, sci-hub, SNSI

Just before Christmas 2019, the Washington Post reported, based on “people familiar with the matter”, that the US Justice Department was investigating the Sci-Hub founder Alexandra Elbakyan for potentially “working with Russian intelligence to steal U.S. military secrets from defense contractors”. Besides this highly unusual connection, the article also reiterated unsubstantiated allegations (circulated mainly by publishers) that access to scholarly journals was obtained via her ‘hacking’ skills. The article also cited a “former senior U.S. intelligence official” as saying he believed Elbakyan was working with the Russian military intelligence service GRU. Apparently, the investigation had been ongoing since 2014, but now, in 2020, there is still no publicly available evidence as to what it has found.

And yet, despite the lack of evidence, on the very next day after the Washington Post story, Elsevier was all too happy to see their oft-repeated but little-believed claims of Sci-Hub being dangerous vindicated, exclaiming “This represents a threat to academic institutions”. After winning a lawsuit that never yielded any of the millions in damages it sought, Elsevier finally had external support to bolster their claims that Sci-Hub was not only a threat to their bottom line, but to research integrity!

Less than two months later, Nick Fowler, chief academic officer at Elsevier, announced the new Scholarly Networks Security Initiative (SNSI) under the title “Working together to protect from cyber attacks”. Fowler was assisted by SpringerNature chief publishing officer Steven Inchcoombe; both introduced themselves as co-chairs of SNSI. One aspect mentioned in the article was that “Awareness of the damage Sci-Hub is inflicting on institutions and academia needs to be increased.” The idea is that publishers and institutional libraries work together to fight a common enemy.

This public relations aspect of the SNSI is what needs to receive special attention. On the face of it, Sci-Hub is an enabling technology: before Sci-Hub, scholars needed subscriptions to access the scholarly literature; now, subscriptions have become optional. In many countries, this has led to new initiatives and consortia finally toughening their stance in library-publisher negotiations. What in the previous three decades was a walk in the park followed by ever-climbing profit margins now stands to be a tough negotiation. Sci-Hub has thus had opposite effects on libraries and publishers: while libraries need not fear lapses of access as much as previously, allowing them to be bolder in their negotiations, publishers wonder why anybody should pay for their offerings at all if their customers can have all the scholarly content of the world for free.

When it turned out that the lawsuits against Elbakyan would neither lead to any damages being paid nor have a deterring effect on libraries or their patrons, and when initiatives asking for a tougher stance against publishers garnered more and more support, publishers devised a new strategy. They would try to paint Sci-Hub not only as a threat to themselves, but also to the libraries. Rumors began to spread, claiming that Sci-Hub had obtained the login credentials with which it populates its database not through donations but through phishing attacks. As there was no evidence, there was little uptake or discussion. With Sci-Hub around since 2011 and noticeable consequences for library behavior starting around 2012/13, one may assume that by 2017/18, when the phishing rumors failed to gain traction, publishers must have been fairly frustrated that their usual power over academics seemed to be on the decline for the first time in decades.

Perhaps the feeling of frustration was similar around 2005, when the Open Access movement, invigorated by the Budapest (2001) and Berlin (2003) declarations, continued to gather steam. Also then, the publishers’ attempts at painting Open Access to scholarly works as a threat to research integrity failed to rouse support and slow the momentum of the OA movement. Just before the launch of the NIH OA mandate in the US, the Association of American Publishers (AAP) decided they needed something that would really get their message across and in 2006 started the Partnership for Research Integrity in Science and Medicine (PRISM) Coalition*. They hired “the pit bull of public relations”, Eric Dezenhall, to create a smear campaign that would strive to equate public access with junk science. In particular with regard to OA mandates, part of the plan was for publishers to partner with anti-science organizations, which shared their anti-government sentiment. The aim was to bring institutions on board to save research integrity together with the publishers.

Thus, in both instances, public access to scholarly works (whether via OA or Sci-Hub) posed only a threat to publishers and in both instances, the publishers sought to paint themselves as chiefly concerned not about their bottom line but about “research integrity”. Compare the statement on the PRISM website:

The Partnership for Research Integrity in Science and Medicine (PRISM) was formed to advocate for policies that ensure the quality, integrity, and economic viability of peer-reviewed journals.

with statements on the SNSI site:

Scholarly Networks Security Initiative (SNSI) brings together publishers and institutions to solve cyber-challenges threatening the integrity of the scientific record, scholarly systems and the safety of personal data.

This past week, these public relations efforts were dialed up a notch or ten. At an SNSI webinar entitled “Cybersecurity Landscape – Protecting the Scholarly Infrastructure”, hosted by two Elsevier employees, one of the presenters suggested that institutions “develop or subsidize a low cost proxy or a plug-in to existing proxies” in order to collect user data. That user data, it was explained, could be analyzed with an “Analysis Engine” to track biometric data (e.g., typing speed) or suspicious behavior (e.g., a pharmacology student being suspiciously interested in astrophysics). The angle towards Sci-Hub was confirmed by the next speaker, an ex-FBI agent and security analyst.
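
To make concrete what such an “Analysis Engine” might look like, here is a minimal, purely hypothetical sketch in Python. It is not based on any published SNSI code; the field names, thresholds and scoring logic are my own assumptions, meant only to illustrate the kind of per-user profiling the presentation described: flagging a mismatch between a user’s department and the subject of the articles they download, and flagging bulk downloading.

# Purely hypothetical sketch of the kind of proxy-log "analysis engine"
# described in the SNSI webinar. None of the field names or thresholds come
# from SNSI material; they are assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class ProxyLogEntry:
    user_id: str          # pseudonymous ID assigned by the institutional proxy
    department: str       # e.g. "pharmacology"
    article_subject: str  # subject classification of the downloaded article
    downloads_last_hour: int

def suspicion_score(entry: ProxyLogEntry) -> float:
    """Assign a crude 'suspicion' score to a single proxy log entry."""
    score = 0.0
    # Flag subject mismatch (the webinar's example: a pharmacology student
    # reading astrophysics papers).
    if entry.article_subject != entry.department:
        score += 0.5
    # Flag bulk downloading, a pattern often attributed to credential abuse.
    if entry.downloads_last_hour > 50:
        score += 0.5
    return score

if __name__ == "__main__":
    entry = ProxyLogEntry("u123", "pharmacology", "astrophysics", 80)
    print(f"suspicion score: {suspicion_score(entry):.1f}")  # prints 1.0

Even this trivial example makes clear that what is being proposed amounts to behavioral profiling of library patrons, not network security.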

Considering the track record of academic publishers, this reeks strongly of a PR attempt to ‘soften the target’, i.e., to make installing publisher spyware on university servers sound less outrageous than it actually is. After the PRISM debacle, the publishers now seem to have learned from their PR mistakes. This time, there is no ‘pit bull’ around. This time, there is only a strange article in a major newspaper and a shady initiative for which it is hard to find out who founded it, who is running it and who funds it.

SNSI is an apparent PR project aimed at compromising, not strengthening, network security at research institutions. However, unlike with PRISM, this time the PR effort may pay off.


* It has to be noted that one of the AAP publishers in the PRISM Coalition was Elsevier, who had so much disdain for research integrity that they had published nine fake journals from 2000 until 2005. In other words, in one year they stopped publishing their fake journals; in the next, they joined a PR campaign in which research integrity is the central tenet.

Posted on October 26, 2020 at 16:58 10 Comments