bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Algorithmic employment decisions in academia?

In: science politics • Tags: employment, publishers, surveillance, tracking

According to a recent study, employee surveillance is rampant in today’s corporate work environment. This study documents how, often under the pretense of cybersecurity or risk analysis (sort of like academic publishers, actually), companies analyze the behavioral data they collect from their employees to help them make “evidence-led”, i.e., algorithmic employment decisions. Some of the tools are used to assign risk scores to employees and to categorize them into risk groups:

Image

A company in the ‘Workplace Analytics’ business is selling a product called ‘OccupEye’ that tracks employees with movement sensors mounted under their desks. It has partnered with network giant Cisco, whose WiFi routers are used to derive movement patterns within rooms and buildings. Besides movement detection and text analysis, mouse clicks and keystrokes (as also suggested by academic publishers) can be analyzed and incorporated into the algorithms, in order to, e.g., classify employees as “Low Performer”, “Good Performer” or “High Performer”, as Zalando is doing (p. 135 in the study).

Surely, academia would never use performance metrics for their hire and fire decisions? OK, bad joke.

Those academics who lament that being barred from using journal rank and other metrics to evaluate researchers deprives them of objective means to distinguish between applicants may very soon have all their wishes fulfilled, and then some.

Ever since we calculated that only a very small percentage of what our libraries are paying publishers goes towards publishing costs and profits, many (including us) have wondered where the rest of the money is going. Interestingly, Elsevier’s Paul Abrahams, who commented in the discussion, did not mention where their non-publishing costs accrue, but stated that the overall rejection rate at Elsevier was 77%. Using current market rates for publishing services and assuming that Elsevier does not spend more on publishing than those rates, I calculated that publishing an average article likely costs Elsevier US$574.74 (our scenario B), i.e., very close to the estimated average per-article publishing costs for the industry:

Image

If Elsevier’s revenue per article also clocks in at the industry average of around US$4,000, then each article provides Elsevier with about US$1,200 in profits, given their profit margins of just above 30%. If our industry-average calculations really match the numbers of industry giant Elsevier that well, one may wonder what Elsevier is doing with the remaining US$2,200 per article that goes neither towards publishing nor towards profits.
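The arithmetic behind that question can be sketched in a few lines. Note that all figures here are the industry-average estimates quoted above, not Elsevier’s actual accounts:

```python
# Back-of-the-envelope split of Elsevier's per-article revenue,
# using the industry-average figures quoted above (estimates,
# not Elsevier's actual books).
revenue_per_article = 4_000.00   # industry-average revenue per article (US$)
profit_margin = 0.30             # "just above 30%"
publishing_cost = 574.74         # our scenario B estimate (US$)

profit = revenue_per_article * profit_margin
unexplained = revenue_per_article - profit - publishing_cost

print(f"profit per article:      US${profit:,.2f}")       # US$1,200.00
print(f"unaccounted per article: US${unexplained:,.2f}")  # US$2,225.26
```

About US$2,200 per article, then, is accounted for by neither publishing nor profit.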

Of course, we can only guess, but for clues, one may look at Elsevier’s tweet from yesterday:

https://twitter.com/ElsevierConnect/status/1440632982990049297

Elsevier states that “knowledge and analytics” is what they do today – publishing is mentioned only as their “roots” in the past, not as something that is relevant today, let alone in the future. The link Elsevier promotes in that tweet points to a page on their own site which says: “Elsevier has really positioned itself as a data science company”, and right below it Elsevier adds: “Evidence-led decisions”.

Image

What sort of “data analytics” is used for what kind of “evidence-led decisions”, you may ask? I’m getting to that.

The acquisitions of Elsevier parent company RELX confirm this transition to “data analytics” and “evidence-led decisions”:

Image
source: https://knowledgegap.org/index.php/sub-projects/rent-seeking-and-financialization-of-the-academic-publishing-industry/preliminary-findings/

For 20 years now, RELX and Elsevier have invested in academic data analytics. But what kind of data are they analyzing, and for which “evidence-led decisions”? Funny you should ask: they are analyzing your data, of course! Here is a graph showing what Elsevier has been buying over these last two decades, while you (like me) may have been too busy fighting for #openaccess to pay attention (the grey box denotes plain publishing):

Image
source: https://knowledgegap.org/index.php/sub-projects/rent-seeking-and-financialization-of-the-academic-publishing-industry/preliminary-findings/

Whenever you interact with any of these tools, your user data is collected and analyzed by Elsevier. After all, this is their core business, as they themselves state. Elsevier’s parent RELX is one of the largest data brokers on the planet and owns, among others, LexisNexis. This means that RELX can now combine your professional data with your private data and sell it. For instance, LexisNexis sells its collected data and analytics to law enforcement agencies that are not allowed to collect such data themselves. Why shouldn’t Elsevier/RELX sell these data right back to the academic institutions from which at least the professional part of these data has been harvested?

That may sound dystopian to some, but if you are a dean or a provost or a member of a search or tenure committee, the above may actually sound quite alluring. In that case, you may feel relieved that it will very soon be possible to license products that allow you to use not just publication records, but real-time professional and private user data to automate the classification of applicants and help you make “evidence-led” employment or funding decisions. No more need to invite anybody for interviews or be surprised by the political positions of future faculty. Moreover, you can rest assured that the acquisition, development and implementation of these tools was sponsored by the tax-payer money your institution has been overspending on publishers in the past. Obviously, for you and people like you, this was money well spent! It also ensures that, just as with the publications of the last decades, your institution will be paying twice for such useful tools: once for their acquisition/implementation and then again when your institution actually licenses them.

If you happen to find these possibilities enticing, all you need to do is – nothing: just let your institution keep overpaying publishers and soon the rebate offers for faculty surveillance tools will start flowing in. After all, every institution in the Netherlands has already licensed such a tool in exchange for open access publishing with, you guessed it, Elsevier. So all you need to do is sit still and wait for big brother to swing by your revered academic institution and say ‘hello’ to you as well. And just in case you think only Elsevier is using our payments to expand their vertical integration, think again:

This post is probably best closed by quoting the Science Foundation Ireland who, after the Netherlands, have also partnered with Elsevier for similar reasons, it seems:

We have partnered with Elsevier, which is probably best known as a scientific publisher. They have access to a vast array of data, and this will help us to establish where Ireland is good and nearly good in emergent and convergent technologies. It will also help us decide on the actions we need to take to make us really good. For example, we might see a certain field where we are nearly good at present and find that we need to recruit top talent or run new funding calls to support it.

Posted on September 23, 2021 at 11:45 2 Comments

What sinister time machine?

In: science politics • Tags: infrastructure, time travel, transformative agreements

A recent OASPA guest post reminded me of something I have been wondering about for several years now. What sinister time travel device is keeping some sections of the scholarly ecosystem from leaving the past and coming back to the present? The guest post stands in for many international examples of what most discussions about “Transformative Agreements” revolve around, and it merely serves as the latest prompt for me to finally jot these observations down. Three things crop up time and again in these discussions on transformative agreements:

  1. They all seem to treat access to the literature as if it were still a problem. Before ~2013 or so, it was indeed a major problem. It’s 2021 now, and access is not such a big deal any more.
  2. They mostly seem to implicitly assume that scholarly articles must be expensive. That may have been the case for subscription articles. It’s 2021 now and one can actually look up prices online and find out that we’ve been overpaying by a factor of around ten.
  3. These discussions also universally seem to tacitly endorse the perspective that the journals they want to ‘transition’ are somehow actually worth transitioning. That may have been a quaint but admirable position before the so-called ‘replication crisis’ emerged, but today it is 2021 and we know our journals are a major contributing factor to unreliability.

So, to me, it appears some pernicious time machine must be keeping them from entering the present. Which bold superheroine can travel back in time and bring them home into the present?

If participants in these discussions could be brought to acknowledge that 2021 is actually a thing that is happening right now, certainly they would be headed in a completely different direction, one would hope?

Posted on July 19, 2021 at 13:46 2 Comments

Minimizing the collective action problem

In: science politics • Tags: collective action, funders, funding agencies, infrastructure, mandates

Most academics would agree that the way scholarship is done today is, in the broadest, most general terms, in dire need of modernization. Problems abound, from counter-productive incentives, inefficiencies and lack of reproducibility, to an overemphasis on competition at the expense of cooperation, or a technically antiquated digital infrastructure that charges too much and provides only a few useful functionalities. Many a scholar has expressed the issue in terms of a social dilemma: the proximate interests of each individual actor, be it an institution or an individual, are not aligned with the distal interests of the collective knowledge creation process. In other words, collective action is needed, where multiple actors act together in order to mitigate the potential detrimental effects for each individual actor. This insight has been articulated for various aspects of scholarly reform, be it our literature, research quality, or scholarship in general. Collective action solutions can have various implementations, from a simple aspiration such as “if only everybody would do X” to a contractual agreement among individuals to form a collective with a specific purpose, goals, and means or strategies to achieve them. What they all have in common is that a sufficient number of actors need to act closely enough in time to overcome the proximal obstacles.

Most of the numbers needed to express the size of this collective action problem are not so difficult to obtain. The main actors in this problem are of course individual researchers, but also the institutions at which they work, and here in particular their libraries, which hold the purse strings for much of the digital infrastructure that is in need of reform. Finally, research funding agencies (funders) not only provide key incentives for researchers but also decide which institutions are eligible for funding.

Citing OECD statistics, the 2018 STM report lists about 7.1 million full-time-equivalent researchers globally. Given that many of these will be part-time employees, and that sites like ResearchGate list about 17 million users, adding at least 50% to these 7 million, for a total of about 11 million currently active individual researchers, is probably a reasonable lower-bound estimate.

The number of universities is likely a reasonable approximation for the number of libraries. Webometrics lists about 30,000 universities globally.

The number of funding agencies is less easy to estimate. There is no comprehensive list of funding agencies, and the organization of research funding varies between countries, such that research funds come both directly from the research institution and from funders that are often national, but sometimes international, in reach. Funders can be governmental or philanthropic. With about 200 countries in the world and each country having at least one, possibly more, funding agencies, 400-500 funding agencies with at least roughly comparable influence over institutions and individual researchers is probably a reasonable estimate.

Taken together, we are looking at more than ten million researchers, 30,000 institutions, but not even 1,000 funding agencies. These numbers only denote relative sizes, as only the upper bound of the collective action problem is given by the absolute size of the collective; the number of actors required to start collective action in a way that leads to pervasive reform is not known. Given the massive disparities between research-intensive countries and the rest of the world, this theoretical fraction of the collective joining the “early movers” will likely be further reduced if the “early movers” were largely composed of actors from leading research nations. Thus, if one considered only the most ‘influential’ 10% of all actors as needing to be involved for a snowball effect of change to sweep the globe, the numbers change to one million researchers, three thousand institutions, or just 40 funders. Even at such dramatically reduced sizes, both individual researchers and their institutions appear as unreasonably large targets for any meaningful short- to mid-term impact. Funders, however, not only represent a manageable size for a collective action problem; they also ought to have an intrinsic interest in seeing their money spent with the interest of the public in mind, rather than that of individual researchers, wherever the two remain at odds.
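The 10% arithmetic above can be made explicit in a few lines (the population figures are the rough estimates from this post, nothing more):

```python
# Rough sizes of the three actor groups (estimates from the text),
# and the 10% "early mover" fraction for each.
actors = {
    "researchers": 10_000_000,   # "more than ten million"
    "institutions": 30_000,      # universities as a proxy for libraries
    "funders": 400,              # lower end of the 400-500 estimate
}
early_movers = {name: round(n * 0.10) for name, n in actors.items()}
print(early_movers)
# {'researchers': 1000000, 'institutions': 3000, 'funders': 40}
```

Whatever the true threshold fraction turns out to be, the three orders of magnitude separating funders from researchers remain.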

All of these numbers corroborate a well-known realization, flippantly formulated below:

What could funders do in order to initiate collective reform? Clearly, the mission of funding agencies is to fund research projects, not infrastructure. Just spending their money on infrastructure instead of research projects would be contrary to their mission and would further depress funding rates that are already precipitously low in many places. If anything, funding agencies should increase their funding rates, not decrease them even further below already unhealthy levels. Funding agencies do, however, have a range of orthogonal options at their disposal to shape research policies at institutions. For instance, funders set very specific eligibility criteria that institutions need to fulfill in order for their members to receive grant funding. If these eligibility criteria were to exclude institutions which fund or otherwise support counter-productive incentives, irreproducible science or wasteful and dysfunctional infrastructure, these institutions would be incentivized to become eligible again. Funding agencies could thus use their eligibility criteria to reward desired institutional policies and penalize counter-productive ones. Historically, such eligibility criteria have been used very effectively to ensure minimum infrastructure standards. Today, like seemingly everything in this domain, these criteria urgently need an overhaul. Some funders are already updating their policies in this regard, such as the DFG, the Wellcome Trust or the Templeton World Charity Foundation.

The question remains whether other funding agencies are aware of their power and willing to initiate the procedures required for modernizing their eligibility criteria? The formation of initiatives such as “Plan S” shows that the general understanding of the need for modernization together with a willingness for collective action is present in some of these funders. The general feasibility of such a “Plan I” (for infrastructure) was also already acknowledged. With understanding, general willingness and feasibility established, it remains to be seen if funding agencies consider the current threats to publicly funded science serious and urgent enough to warrant the effort to improve their eligibility criteria.

Thus, institutions need to modernize their infrastructure such that researchers are enabled to modernize their scholarship. Both have now had more than 30 years for this modernization, and neither has acted. At this point it is fair to assume that, barring some major catastrophe forcing their hands, such modernization is not going to magically appear within the next three decades, either. Funders, therefore, are in a position to incentivize this long overdue modernization, not by mandating 11 million individuals, but by mandating institutions to finally implement modernizations which institutions, and hence researchers, have been too complacent or too reticent to tackle.

If we are faced with a collective action problem, as I would tend to agree, and the size of the collective is the major determinant of how effectively it can be solved, then it is a short step to realize that funders are uniquely positioned to start solving it. Conversely, then, it is only legitimate to question the motives of those who seek to make the collective action problem unnecessarily difficult by advocating to target individual researchers or institutions. What could possibly be the benefit of making the collective action problem numerically more difficult to solve?

Posted on May 12, 2021 at 18:02 4 Comments

Scholarly publishing in three cartoons

In: science politics • Tags: cartoon, humor, infrastructure, publishing

The academic journal publishing system sure feels all too often a bit like a sinking boat:

  • we have a reproducibility leak
  • an affordability leak
  • a functionality leak
  • a data leak
  • a code leak
  • an interoperability leak
  • a discoverability leak
  • a peer-review leak
  • a long-term preservation leak
  • a link rot leak
  • an evaluation/assessment leak
  • a data visualization leak
  • …
  • …
  • …
  • and even a tiny access leak still remains even after 30 years of trying to fix it.

While some people are trying to keep the boat afloat, most are just happy if they stay dry. Without taking the boat on land to fix it, the leaks just keep springing up, making it harder and harder to bail the water out.

https://i.imgur.com/POng9wL.jpeg

A platform solution (i.e., what people have called a “Global Open Archive” in 2010 or what is now often called a “Next Generation Repository” and where one current implementation could be, e.g., Open Research Central) that would address all of these problems, would feel a little bit like a fighter jet, compared to that sinking boat:

https://www.militaryimages.net/media/eurofighter-typhoon-cutaway.103882/full?d=1521516966

Alas, there are many academics who say that we can convert our sinking boat into the jet just by bailing faster, tackling one leak at a time, with some plywood, duct tape, only the wet staff and without ever needing to take the boat out of the water. They may be correct, but I, for one, am not wasting my time on such odds.

Making matters worse, most academics are precariously employed (or their dependent co-workers are), leaving the leak-fixing and jet-building to the privileged few:

https://pbs.twimg.com/media/DTrAG6XW4AUrDTe.jpg

Right now, it looks as if the boat needs to sink completely before we will ever start working on the jet.

Posted on March 10, 2021 at 10:22 2 Comments

Can funders mandate institutions?

In: science politics • Tags: funders, mandates, Plan I

For a few years now I have been arguing that, in order to accomplish change in scholarly infrastructure, it is likely an inefficient plan for funding agencies to mandate the least powerful players in the game: authors (i.e., their grant recipients). The legacy publishing system still exists because institutions pay for its components, the publishers. As for-profit corporations, as long as the money is flowing, publishers could not care any less what is on the pages they publish, making author behavior all but irrelevant. Moreover, so-called Big Deal packages (both for subscriptions and for open-access publishing) cement the status of publishers, irrespective of the portfolio they are currently offering.

In this situation, the only way in which forcing authors to publish in certain ways makes any sense at all is politically: there is little authors can do other than to comply, as they have no power and need the funding agencies in order to be able to do their research. So from this perspective, authors are the weakest link and the one with the least capabilities to raise any effective objections other than venting online. As we have seen from such previous mandates, most notably the NIH open access mandate from 2008, the amount of change to be gained from such mandates is rather modest: some percentage of articles may get accessible to the non-subscribing public at some point, while the rest of the problems we have accumulated remain firmly cemented in place. As described before, access to the literature currently ranks among the least worrisome problems we have in scholarship.

This means that while in 2008 the increase in access to a portion of the medical literature brought about by the NIH mandate may have been a significant step forward, today, more than a decade later and with no major paywall obstacles to speak of any more, one may wonder whether a marginal improvement on a nearly negligible problem is worth all the effort currently put into mandates such as those of, e.g., Plan S.

Especially when looking at the kind of change that could be brought about by shifting the massive funds institutions pay publishers (about ten times the actual publishing costs), it is straightforward to ask whether Plan S is more Symbolpolitik (a German term for symbolic politics that is about the appearance of action rather than actual change) than Realpolitik.

When confronted with this question on Twitter, one of the main designers of Plan S, President of Science Europe Marc Schiltz, first tried to deny that funding agencies even have the capability of mandating institutions to do anything:

I then explained that, at least in Germany, the DFG, for instance, mandates that institutions follow good scientific practice and that the DFG guidelines for good scientific practice be incorporated into any institution’s policies in a legally binding manner if their applications are to be considered for funding. Above and beyond such general mandates, there can also be specific mandates, such as those for Next Generation Sequencing machines, where only researchers whose institutions have the specifically required infrastructure can apply. I had talked to a program director at the NSF, who confirmed that similar requirements were in place at the NSF and NIH in the US.

Confronted with such a falsification of his initial claim, Marc Schiltz then retreated to the position that not all funding agencies are like the DFG, NIH or NSF:

And then asserted that he couldn’t even think of a way how funding agencies could prevent institutions from paying for legacy publishing infrastructure, even if funding agencies had the power to mandate institutions to invest in modern infrastructure:

Of course, once the general concept of funding agencies deciding which institutions are worthy of getting their money is established, it is technically easy to come up with ways in which funding agencies could influence also institutional decisions about funding for legacy systems.

For instance, it would be technically easy (but practically more or less difficult, depending on the funder) to influence institutional decisions with regard to subscription payments. Ulrich’s list of serials currently counts over 30k peer-reviewed scholarly journals. Funders could cap how many subscriptions an institution may hold before it becomes ineligible for funding. Or, in places where overhead payments are under the control of the funder, they could start with a softer approach, e.g., cut all overhead payments to institutions which do not report their number of journal subscriptions, or which exceed, say, 10k subscriptions. Institutions with fewer than 10k subscriptions but more than 5k get 50% of the regular overhead, those with more than 500 but fewer than 5k get 75%, those with fewer than 500 get 90%, and only those without any subscriptions get the full overhead payment.
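This hypothetical tiered scheme is simple enough to write down as a rule. The function below is purely illustrative; exact boundary values (500 and 5,000 subscriptions) are not specified above, so this sketch assigns them to the adjacent lower-overhead tier:

```python
def overhead_fraction(subscriptions):
    """Fraction of the regular overhead an institution would receive
    under the hypothetical tiered scheme sketched above.

    `subscriptions` is the reported number of journal subscriptions,
    or None if the institution does not report the figure. Boundary
    values (exactly 500 or 5,000) are unspecified in the text; this
    sketch puts them in the adjacent lower-overhead tier.
    """
    if subscriptions is None or subscriptions >= 10_000:
        return 0.0    # no report, or 10k+ subscriptions: overhead cut entirely
    if subscriptions == 0:
        return 1.0    # no subscriptions at all: full overhead
    if subscriptions < 500:
        return 0.9
    if subscriptions < 5_000:
        return 0.75
    return 0.5        # between 5,000 and 10,000 subscriptions

# A few example institutions:
for subs in (None, 12_000, 7_500, 2_000, 100, 0):
    print(subs, overhead_fraction(subs))
```

The point is not the particular percentages but that such a rule is trivial to specify, audit and enforce.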

APC payments can be treated analogously, of course, either by looking at the volume, i.e., number of articles, or the amount spent. In a similar way, institutions that do not report any such figures in a verifiable way become ineligible for funding.

A step down from that, it would be easier for funding agencies to ensure that institutions invest in modern digital infrastructure by making institutions ineligible for funding if they do not have, e.g., a platform where text/narrative, data and code are taken care of and interoperate. Most, if not all, funders already have an eligibility requirement for “basic infrastructure”, so they would only need to update it and specify what qualifies as “basic infrastructure” (surely this would look different today than, say, 30 years ago). User uptake of this basic infrastructure is then ensured in the Liège way, i.e., by funders simply ignoring any research output that is not on such a platform when assessing grant proposals. Such a procedure has proven very effective for over a decade now at all the institutions that have implemented one. Centralized access to such a decentralized scholarly library could be established via technologies such as Open Research Central, or analogous implementations.

Easier still, funders could decide that only researchers at institutions that are DORA signatories are eligible for funding. Not at a DORA institution? Too bad, don’t even bother to apply. That would be a truly massive blow to the pernicious power of journal rank! This is also one of the practically easiest options, as some funding agencies have already implemented such mandates. Funders such as Wellcome or the Templeton World Charity Foundation already require their recipient institutions to sign DORA or an equivalent:

“All Wellcome-funded organisations must also publicly […] sign the San Francisco Declaration on Research Assessment, Leiden Manifesto or equivalent.”
“We expect [Templeton World Charity Foundation] Grantees that are research institutions to have a statement of commitment to implementing the DORA principles on their website – this should be prominent and accessible”

Why are the remaining funding agencies not following suit? Why do most funding agencies still fund institutions that have not signed DORA?

By now, it is becoming quite clear that actual change in infrastructure reform requires neither new technology nor additional funds, and needs neither the mind of an Einstein nor the imagination of a Picasso to come up with ways to mandate change where recalcitrant institutions have proven, time and again over the last three decades, that voluntary change is out of the question. Of course, funding agencies may claim that it is time-consuming, practically difficult or a lot of work to implement such mandates on institutions. However, what do such excuses mean other than that these funders simply do not consider the trifecta of the main problems scholarship is facing today (reliability, affordability and functionality) important enough to make this extra effort?

In order to convince Marc Schiltz and perhaps cOAlition S (and, in consequence, other funders worldwide) that it is indeed in their power to make change happen without oppressing the least powerful, my task is now to compile a list of cOAlition S members that differentiates between those that spend their funds on any institution, no strings attached, and those that have implemented eligibility criteria such as those of the DFG, NSF and NIH. This list is currently under construction. So far, it looks as if the different cOAlition S funders are not nearly as different as Marc Schiltz claimed: for most funding agencies I could quickly find a page detailing eligibility criteria or requirements. Any help in completing this effort is appreciated.

If the results do not change dramatically from what I could find, it appears that the only thing these funding organizations need to tweak is their eligibility criteria/requirements (e.g., by making their “basic infrastructure” requirements more specific), in order to accomplish change much more effectively (affecting all participating institutions at once) than with mandates on individual grant recipients. Why did they not go that route in the first place? What are they waiting for now?

UPDATE:

In something that must be exceedingly rare on Twitter, Marc Schiltz finally conceded that mandating institutions would be a feasible route and also confirmed that “soft” requirements such as only funding DORA signatory institutions would be an easier first target than, e.g., mandating institutions to stop payments to legacy publishers:

Posted on January 6, 2021 at 12:29 Comments Off on Can funders mandate institutions?

High APCs are a feature, not a bug

In: science politics • Tags: APCs, nature, open access, publishers

There has been some outrage at the announcement that Nature is following through with their 2004 declaration of charging ~10k ($/€) in article processing charges (APCs). However, not only have these charges been 16 years in the making but the original declaration was made not on some obscure blog, but at a UK parliamentary inquiry. So nobody could rightfully claim that we couldn’t have seen this development coming from miles away.

In fact, already more than 10 years ago, such high APCs were very much welcomed, as people thought they could bring about change in scholarly publishing. Some examples, starting with Peter Suber in 2009:

As soon as we shift costs from the reader side to the author side, then, we create market pressure to keep them low enough to attract rather than deter authors.  […] precisely because high prices in an OA world would exclude authors, and not merely readers, there is a natural, market-based check on excessive prices.  BTW, I’m not saying that these market forces will keep prices within reach of all authors

Cameron Neylon, 2010:

I have heard figures of around £25,000 given as the level of author charge that would be required to sustain Cell, Nature, or Science as Open Access APC supported journals. This is usually followed by a statement to the effect “so they can’t possibly go OA because authors would never pay that much”.
[…]
If authors were forced to make a choice between the cost of publishing in these top journals versus putting that money back into their research they would choose the latter. If the customer actually had to make the choice to pay the true costs of publishing in these journals, they wouldn’t.
[…]
Subscription charges as a business model have allowed an appallingly wasteful situation to continue unchecked because authors can pretend that there is no difference in cost to where they publish, they accept that premium offerings are value for money because they don’t have to pay for them. Make them make the choice between publishing in a “top” journal vs a “quality” journal and getting another few months of postdoc time and the equation changes radically.
[…]
We need a market where the true costs are a factor in the choices of where, or indeed whether, to formally publish scholarly work.

Mike Taylor, 2012:

the bottom line is that paying at publication time is a sensible approach. It gives us what we want (freedom to use research), and provides publishers with a realistic revenue stream that, unlike subscriptions, is subject to market forces.

Stephen Pinfield, 2013:

But Gold OA is not like [subscription]. It has the potential to reintroduce genuine competition into the journal market with authors sensitive to price making choices about where they place their articles. If journals put APCs up, authors can go elsewhere and the adjustments can happen quickly.
[…]
Gold OA, on the other hand should make price changes clearer –and customers will be able to respond accordingly.

Danny Kingsley, 2014:

However the increase in the payment of APCs has awoken many researchers’ awareness of the costs of publication. An author’s reaction of surprise to the request for a US$3000 [approx. AU$3180] APC when they are contacted by a publisher provides an opportunity for the library to discuss the costs associated with publication. There is an argument that as payment for publication at an article level becomes more prevalent, it gives the researcher an opportunity to determine value for money and in some arguments this means that scholarly publishing would be a more functional market

So convinced were people that authors would be price-sensitive that it was even considered a problem if someone else paid the APCs on the authors’ behalf. Kingsley:

This is then one of the disadvantages of having centralised management of APCs – it once again quarantines the researcher from the cost of publication

Pinfield:

But there is a danger with many of the processes now being established by universities to pay for APCs on behalf of authors. These systems, which will allow payment to be made centrally often with block pre-payments to publishers, will certainly save the time of authors and therefore ought to be pursued, but they do run the risk of once again separating researchers from the realities of price in a way that could recreate some of the systemic failures of the subscription market.They need to be handled with caution.

Hindsight is always 2020. Little did people know back then that authors, when faced with a choice, would actually tend to pay the most expensive APCs they could afford, because such is the nature (pardon) of a prestige market. It is thus not surprising now that rich institutions hail the 10k price tag for Nature APCs as “very attractive” and “not a problem”.

The above were just the few examples I could readily find with the help of Richard Poynder, Danny Kingsley, Cameron Neylon and Mike Taylor. The sentiments expressed there were pretty much mainstream and were discussed widely even before 2009 (and you can sense that in the referenced sources). Thus, the idea that high APCs are a feature rather than a bug, thought to bring competition into the market, was a driving force behind gold OA for a long time. Even today, you can still find people claiming that “If we switch from subscription publishing to pay-to-publish open access, [publisher profit] margins are likely to drop to 10%-20%.” (Lenny Teytelman as late as 2019). We now see that this view was spectacularly wrong. People will pay no matter what when their livelihoods are at stake, even if it costs them their last shirt (a German saying). We ought to think hard about what the consequences of having been so catastrophically wrong now have to be.

This is how Nature‘s high APCs came about. Many thought it was a good thing and kept pushing for them until Nature gave in.

Update: One of the quoted authors, Mike Taylor, just confirmed that he still thinks 10k APCs are good and that the outrage people feel at their livelihoods being put in jeopardy is what will drive change:

Posted on December 9, 2020 at 10:52 5 Comments
Nov30

Are Nature’s APCs ‘outrageous’ or ‘very attractive’?

In: science politics • Tags: APCs, nature, open access, publishers

Last week, there was a lot of outrage at the announcement of Nature’s new pricing options for their open access articles. People took to Twitter to voice their, ahem, concern. Some examples:

There are many more that all express their outrage at the gall of Nature to charge their authors these sums; even Forbes interviewed some of them. At the same time, the people who have been paying publishers these sums for decades find that the ~10k per paper is actually “very attractive” and “not a problem”.

So which is it? Are Nature’s APCs ‘outrageous’ or are the prices ‘very attractive and not a problem’?

What is clear is that these charges are definitely not a surprise. Already back in 2004, in a Parliamentary inquiry in the UK, Nature provided testimony that they would have to charge 10-30k for a Nature paper, given their revenues at the time (i.e., their subscription and advertising income). While back then most people scoffed at the numbers and expected that no author would ever pay such fees, Nature got to work and invented a whole host of ‘lesser’ journals (i.e., Nature XYZ, as in “Nature Physics”, “Nature Genetics” and so on), which would serve several purposes at once: they increase revenue. As hand-me-down journals they keep desperate authors attached to the Nature brand. As less selective journals, they would bring down average costs per article for the brand total once the brand needed to go open access.

So this year, after open access advocates, funders and the now also pandemic-stricken public had kept demanding open access for 16 years after they had been warned, Nature was finally ready to deliver. Due to the dilution of their costs by way of the ‘lesser’ journals, they managed to keep their APCs close to their lower bounds of 2004, despite 16 years of inflation. Given that libraries have been paying these kinds of funds for Nature journals for decades, this price tag then really is a bargain, all things considered.

Given this analysis, all the online outrage strikes me as unwarranted. While I of course agree that it should never have come to this (we ought to have realized where this was headed back in 2004!), crying foul now comes about 16 years too late. We have had plenty of time to prepare for this, we have had more than enough time to change course or think of alternative ways to the legacy publishers. And yet, nearly everybody kept pushing in the same direction anyway, when we could have known in 2004 that this was not going to end well. The people warning of such not quite so unintended consequences were few and far between.

Having only gotten interested in these topics around 2007 or so myself, it took me until 2012 to understand that this kind of APC-OA was not sustainable and, indeed, would stand to make everything worse just in order to worship the sacred open access cow.

If you were even later to the party and are outraged now, direct your outrage not to Nature, who are only following external pressures, but to those who exert said pressures, such as open access advocates pushing for APC-OA and funders mandating authors to publish in such journals.

Posted on November 30, 2020 at 10:30 5 Comments
Oct26

Is the SNSI the new PRISM?

In: science politics • Tags: PRISM, publishers, sci-hub, SNSI

Just before Christmas 2019, the Washington Post reported, based on “people familiar with the matter”, that the US Justice Department was investigating the Sci-Hub founder Alexandra Elbakyan for potentially “working with Russian intelligence to steal U.S. military secrets from defense contractors”. Besides such a highly unusual connection, the article also reiterated unsubstantiated (but mainly publisher-circulated) allegations that access to scholarly journals was obtained via her ‘hacking’ skills. The article also cited a “former senior U.S. intelligence official” as saying he believed Elbakyan was working with the Russian foreign intelligence service GRU. Apparently, the investigation had been ongoing since 2014, but now, in 2020, there is still no publicly available evidence as to what this investigation has been able to find.

And yet, despite no evidence, on the very next day after the Washington Post story, Elsevier was all too happy to find their oft-repeated but little-believed claims of Sci-Hub being dangerous vindicated and exclaimed “This represents a threat to academic institutions”. Elsevier, after winning a lawsuit that never yielded any of the millions in damages it sought, finally had external support to bolster their claims that Sci-Hub was not only a threat to their bottom line, but to research integrity!

Less than two months later, Nick Fowler, chief academic officer at Elsevier, announced the new Scholarly Networks Security Initiative (SNSI), under the title “Working together to protect from cyber attacks”. Fowler was assisted by SpringerNature chief publishing officer Steven Inchcoombe. Both introduced themselves as co-chairs of SNSI. One aspect mentioned in the article was that “Awareness of the damage Sci-Hub is inflicting on institutions and academia needs to be increased.” The idea being that publishers and institutional libraries work together to fight a common enemy.

This public relations aspect of the SNSI is what needs to receive special attention. On the face of it, Sci-Hub is an enabling technology: before Sci-Hub, scholars needed subscriptions to access the scholarly literature; now, subscriptions have become optional. In many countries, this has led to new initiatives and consortia finally toughening their stance in library-publisher negotiations. What in the previous three decades was a walk in the park, followed by ever climbing profit margins now stands to be a tough negotiation. Sci-Hub thus has had opposite effects on libraries and publishers: while libraries need not fear lapses of access as much as previously, allowing them to be bolder in their negotiations, publishers wonder why anybody should pay for their offerings at all, if their customers can have all the scholarly content of the world for free.

When it turned out that the lawsuits against Elbakyan would neither lead to any damages being paid nor have a deterring effect on libraries or their patrons, and when initiatives asking for a tougher stance against publishers garnered more and more support, publishers devised a new strategy. They would try to paint Sci-Hub not only as a threat to themselves, but also to the libraries. Rumors started spreading unsubstantiated claims that Sci-Hub had obtained the login credentials populating its database not through donations, but through phishing attacks. As there was no evidence, there was little uptake or discussion. With Sci-Hub around since 2011 and noticeable effects on library behavior starting around 2012/13, by 2017/18, when the phishing rumors failed to gain traction, publishers must have been fairly frustrated that their usual power over academics seemed to be on the decline for the first time in decades.

Perhaps the feeling of frustration was similar around 2005, when the Open Access movement, invigorated by the Budapest (2001) and Berlin (2003) declarations, continued to gather steam. Then, too, the publishers’ attempts at painting Open Access to scholarly works as a threat to research integrity failed to rouse support and slow the momentum of the OA movement. Just before the launch of the NIH OA mandate in the US, the Association of American Publishers (AAP) decided they needed something that would really get their message across and in 2006 started the Partnership for Research Integrity in Science and Medicine (PRISM) Coalition*. They hired “the pit bull of public relations”, Eric Dezenhall, to create a smear campaign that would strive to equate public access with junk science. In particular with regard to OA mandates, part of the plan was for publishers to partner with anti-science organizations, which shared their anti-government sentiment. The aim was to bring institutions on board to save research integrity together with the publishers.

Thus, in both instances, public access to scholarly works (whether via OA or Sci-Hub) posed only a threat to publishers and in both instances, the publishers sought to paint themselves as chiefly concerned not about their bottom line but about “research integrity”. Compare the statement on the PRISM website:

The Partnership for Research Integrity in Science and Medicine (PRISM) was formed to advocate for policies that ensure the quality, integrity, and economic viability of peer-reviewed journals.

with statements on the SNSI site:

Scholarly Networks Security Initiative (SNSI) brings together publishers and institutions to solve cyber-challenges threatening the integrity of the scientific record, scholarly systems and the safety of personal data.

This past week, these public relations efforts were dialed up a notch or ten. At an SNSI webinar entitled “Cybersecurity Landscape – Protecting the Scholarly Infrastructure”, hosted by two Elsevier employees, one of the presenters suggested developing or subsidizing “a low cost proxy or a plug-in to existing proxies” in order to collect user data. That user data, it was explained, could be analyzed with an “Analysis Engine” to track biometric data (e.g., typing speed) or suspicious behavior (e.g., a pharmacology student being suspiciously interested in astrophysics). The angle towards Sci-Hub was confirmed by the next speaker, an ex-FBI agent and security analyst.
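To appreciate how low the technical bar for such tracking is: a handful of keystroke timestamps already yields a crude typing-speed biometric. The sketch below is purely illustrative; the function name and data are hypothetical and not taken from any SNSI material.

```python
# Illustrative sketch: deriving a typing-speed "biometric" from
# keystroke timestamps, the kind of signal an "Analysis Engine"
# could trivially compute from proxy-collected user data.

def typing_profile(timestamps):
    """Return mean and variance of inter-keystroke intervals (seconds)."""
    if len(timestamps) < 2:
        return None  # not enough keystrokes to profile
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return {"mean_interval": mean, "variance": var}

# Two users with different typing rhythms produce distinguishable profiles:
fast_typist = typing_profile([0.0, 0.1, 0.2, 0.3, 0.4])
slow_typist = typing_profile([0.0, 0.5, 1.0, 1.5, 2.0])
```

That a few lines of arithmetic suffice here is precisely the point: the hard part of such a scheme is not the analysis, but getting the data-collecting proxy installed on university infrastructure in the first place.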

Considering the track record of academic publishers, this reeks strongly of PR attempts to ‘soften the target’, i.e., to make installing publisher spyware on university servers sound less outrageous than it actually is. After the PRISM debacle, the publishers now seem to have learned from their PR mistakes. This time, there is no ‘pitbull’ around. This time, there is only a strange article in a major newspaper, a shady institute where it appears hard to find out who founded it, who is running it and who funds it.

SNSI is an apparent PR project aimed at compromising, not strengthening, network security at research institutions. However, unlike with PRISM, this time the PR effort may pay off.


* It has to be noted that one of the AAP publishers in the PRISM Coalition was Elsevier, who had so much disdain for research integrity that they had published nine fake journals from 2000 until 2005. In other words, one year they stopped publishing their fake journals, and the next they joined a PR campaign in which research integrity was the central tenet.

Posted on October 26, 2020 at 16:58 10 Comments
Oct13

Come and do research with us!

In: news • Tags: position, postdoc

Trial and error is a successful problem-solving strategy not only in humans but throughout evolution. How do nervous systems generate novel, creative trials and how are errors incorporated into already existing experiences in order to improve future trials? We use a variety of transgenic tools, mathematical analyses, connectomics and behavioral physiology to understand the neurobiology of spontaneous behavior, learning and adaptive behavioral choice.

We offer a fully-funded postdoctoral position to participate in this research field. The successful candidate not only gets to choose their project between mathematical analyses of spontaneous behavior in transgenic animals or connectomics of interacting learning systems, but they also get to practice open science in an international research team in a brand new building with state-of-the-art infrastructure. To top it off, the duration of the position (subject to German time limits) allows the successful applicant to learn how to write their own grant applications, establish their own research group and, if desired, obtain a German ‘habilitation’ degree.

Research questions

Our research has discovered evolutionarily conserved mechanisms for the generation of spontaneous behavior and feedback-based learning mechanisms. Besides trial and error, spontaneous behavior provides organisms with adaptive unpredictability, crucial in competitive situations such as evolution. The first possible project builds on our work analyzing the spontaneous turning behavior of tethered Drosophila and will identify the neurons and their neurophysiological mechanisms mediating behavioral choice.

The second project will identify the connectivity of the mushroom-body output neurons mediating the hierarchical interaction between fact- and skill-learning that regulates habit formation.

Requirements

The successful candidate will have a PhD (or be close to completion) in a relevant field of neuroscience, biology, psychology or physics, coding experience in R, Python, Julia, or equivalent, good written/oral communications skills in English, and also, ideally, practical experience working with Drosophila.

Our lab

Our lab prioritizes inclusion and diversity to achieve excellence in research and to foster an intellectual climate that is welcoming and nurturing. We are based at the University of Regensburg, an equal opportunity employer with over 20,000 students and more than 1500 faculty, in Regensburg, Bavaria, Germany.

Please send your application with your CV and a short reference to one of our publications to my institutional address (bjoern.brembs@ur.de). Applications will be considered until the position is filled, but applications before November 1, 2020 will receive preferential treatment.

P.S.: I was told by a reader that I must add the following statement:

You should mention that Regensburg is an incredibly nice city with high quality of life! Affordable, safe, cultural, civil, great food, and close to other great cities like Prague and Munich.

Posted on October 13, 2020 at 15:46 Comments Off on Come and do research with us!
Oct05

How academic institutions neglect their duty

In: science politics • Tags: infrastructure, mandates, policies

Think, check, submit: who hasn’t heard of this mantra to help researchers navigate the jungle of commercial publishers? Who isn’t under obligation to publish in certain venues, be it because employers ask for a particular set of journals for hiring, tenure or promotion, or because of funders’ open access mandates? Researchers today are stuck between a rock of confusing publishing options and a hard place of often mutually exclusive policy requirements and ethical considerations when all they want is to publish a scholarly article. Seasoned researchers may have developed heuristics from experience to cope with the complexity, but early-career researchers need guidance and support.

In addition to the constraints on publishing the text summaries of their work, researchers are also facing an increasingly complex ecosystem of domain-specific and domain-general databases to make their research data FAIR – findable, accessible, interoperable and re-usable. In part, this is connected to the situation in article publishing, as some journals require data deposition upon publication. For a number of years now, funders have also begun to ask for data management plans upon submission of grant proposals, and their good scientific practice guidelines require researchers to archive and make their data accessible for at least ten years. Many of these guidelines need to be ratified by academic institutions if they want to remain eligible for receiving funds from funding agencies. Researchers now face similarly complex constraints on their research data needs as on article publishing. Consequently, as with article publishing, courses, webinars and workshops are springing up to educate researchers on all the many different options and constraints they face in data management and sharing.

In many cases, data sharing is futile without providing at least some code or software to make the data accessible. In other cases, the code is the scholarship that is being published in the scholarly article. In all of these cases, code sharing is as mandatory as data or text publication. In addition to such practical necessities, there are also mandates and policies requiring archiving of all research works, including code. As with articles and data, researchers now also face the question of where to publish their code: in one of the commercial tools such as BitBucket or GitHub, in one of the many GitLab instances that are mushrooming everywhere now, or in some other venue?

Imagine if there were a similar balkanization of providers and mutually exclusive policies for other services such as, say, email. Researchers would have to identify an email provider that is either compliant with all institutions and funders (unlikely), or use different providers and addresses for different institutions and funders. Imagine professional email correspondence being as balkanized as the current messenger market, with WhatsApp, GroupMe, Signal, Slack, Mattermost, RocketChat, etc. all isolated and non-interoperable. Imagine institutions where researchers would have to dodge similar slings and arrows just to provide their laboratories with electricity, water or gas. Imagine institutions leaving their researchers alone to fend for their own HVAC, furniture, sewage or recyclables. How much research would the researchers at such institutions still be able to do?

There is a reason academic institutions are providing a basic infrastructure for housing, electricity, HVAC, water, etc. for their researchers. Academic institutions have a mission of research and teaching that is best accomplished by allowing their members to focus on the mission, rather than its corollaries. Today, what could be a more basic digital infrastructure than one that takes care of the primary research products – text, data and code? Clearly, such an infrastructure must be considered more basic and mission-driven than email. With this understanding it becomes obvious that the current wild-west situation for our primary research products constitutes a clear dereliction of duty by our academic institutions. Institutions need to provide their researchers with a technologically adequate, affordable infrastructure that a) automates the tasks around text, data and code sharing, b) ensures compliance with the various policies and c) protects the privacy of researchers and their human research subjects/patients. The implementation of such an infrastructure has been overdue for nearly 30 years now.

As the technology for such an infrastructure is available off the shelf, and institutions are spending several times what would be required on legacy publishers, only social obstacles remain to explain why academic institutions keep neglecting their researchers. Given that institutions have now failed for about 30 years to overcome these obstacles, it is straightforward to propose that mandates and policies be put in place to force institutions (and not researchers!) to change their ways and implement such a basic infrastructure.

Posted on October 5, 2020 at 09:40 Comments Off on How academic institutions neglect their duty