bjoern.brembs.blog

The blog of neurobiologist Björn Brembs

Oct17

With CRISPRed FoxP and habit formation to #SfN19

In: own data • Tags: CRISPR, Drosophila, mushroom bodies, operant, poster, SfN

Tomorrow we travel to the annual meeting of the Society for Neuroscience and our diligent scientists have already printed their posters!

Ottavia Palazzo will present her work on genome editing of the FoxP locus of Drosophila, with anatomical and behavioral characterizations of the various transgenic lines she has created. Spoiler: we now know the expression pattern of FoxP generally and of isoform B specifically, and we show that some of the behavioral phenotypes associated with manipulating the gene do not show up when the manipulation happens in the adult stage rather than during development. She will present on Monday afternoon at Z14.

[Poster thumbnail: links to the abstract and a download of the poster]

Anders Eriksson will present the results of his screen for the class of mushroom body output neurons (MBONs) involved in the regulation of habit formation in flies. Learning can take place very quickly, yet habits take a long time (or many repetitions) to form. The process of habit formation is slow because another form of learning, operant world-learning, takes place at the same time and inhibits habit formation via the mushroom bodies. Spoiler: so far it looks as if MBON-2, 15 and 17 regulate habit formation, but this is subject to confirmation in follow-up experiments. He will also present on Monday afternoon, a few steps away from Ottavia, at Z20.

[Poster thumbnail: links to the abstract and a download of the poster]

The QR codes on the posters should get you directly to the PDF download.

Posted on October 17, 2019 at 16:32 Comments Off on With CRISPRed FoxP and habit formation to #SfN19
Oct14

Scholarship has bigger fish to fry than access

In: science politics • Tags: infrastructure, mandates, publishing

Around the globe, there are initiatives and organizations devoted to bringing “Open Access” to the world, i.e., making scholarly research works publicly available free of charge. However, the current debate largely misses the point that human readers (content mining remains a problem) have already been enjoying such public access to the vast majority of scholarly works since about 2013, thanks to several technical developments providing temporary work-arounds for publisher paywalls.

For various reasons, people (including many long-time OA activists) still publicly claim that we need open access, when all we need is a different kind of open access from the one we currently enjoy. The core of the access problem has actually been solved for the last 6-7 years, but (likely) only temporarily.

Of course, this realization dramatically changes the whole issue. For the last 6-7 years, paying for subscriptions has ceased to be necessary for access. One sign of the changing times is the support that initiatives such as DEAL, Bibsam etc. enjoy: two years without subscriptions to Elsevier, and what do you hear out of, e.g., Germany? Crickets! Nothing! Of course, it would be silly to conclude that in these two years nobody in Germany has read any Elsevier articles. The reason for the silence and the continued support for DEAL is that we can now access anything we want without subscriptions. The old adage that “everybody who needs access has access”, wrong prior to 2012 because of subscriptions, is now finally true despite subscriptions! DEAL et al.’s strong negotiating position would not have been possible, or even thinkable, prior to 2012, and the single reason is that subscriptions have been rendered redundant for access.

Not only has the access problem decreased dramatically in size and prevalence; other problems have since surfaced that would loom much larger than access even if there had been no such technical developments.

  1. The reliability of the scientific literature appears to be much lower than expected – and what use is an unreliable literature, accessible to the public? In particular, the publish-or-perish culture centered around journal rank is set to reward unreliable science and punish meticulous scientists, acting as a major socioeconomic driver of what some already call a replication crisis.
  2. Moreover, with the advent of APC-OA, the problem of affordability has come to the fore also for scholars, where before it was largely a libraries’ problem. Publication costs are under 500€ an article, but prices of more than twenty times that (e.g., at Nature-branded journals and others) scare the scholarly community: in the future, will only rich labs/individuals/institutions be able to afford publishing in the prestigious journals without which nobody can survive in academia? Given that subscription costs are largely opaque and subscriptions themselves no longer necessary, there is of course huge resistance to something that is bound to make things worse, and not only from the point of view of authors. Not surprisingly, people have a hard time understanding why such a change is needed.
  3. Finally, while billions are being spent on subscriptions that nobody needs any more, hardly anything is spent on the kind of infrastructures that are crucial for our work: databases, code-sharing sites, etc. Scholarship lacks even the most basic functionalities for some of the most crucial fruits of our labor: text, data and code.

The main issues any modernization of the scholarly infrastructure needs to address today can thus be summarized as the RAF crisis: Reliability, Affordability and Functionality. Approach these with a modern infrastructure solution, and the kind of access to the scholarly literature we currently enjoy will be perpetuated as a side effect.

In Europe, the lack of functionalities has been acknowledged, in particular for research data and now, slowly, also for scientific code and software. In part, the European Open Science Cloud (EOSC) is intended to address these problems. However, what we need is not a piecemeal hodgepodge of stand-alone computational solutions (the direction in which EOSC appears to be headed right now), but a seamless infrastructure into which we can integrate data and code alongside our texts. And this is where scholarly publishing can no longer be seen as a stand-alone problem, but as an integral part of a large-scale infrastructure crisis facing text, data and code, with a core focus on reliability, affordability and functionality.

Taking the above together, it becomes clear that one of the major obstacles to infrastructure reform on the decision-maker side is probably that EOSC on the one hand and DEAL, Plan S and the other initiatives on the other are seen, and act, as if they were addressing separate problems.

With the realization that EOSC, Plan S, DEAL, etc. are actually working on different aspects of the same issue, the problem to be solved is no longer that scholars publish in toll-access journals, but that institutions haven’t come up with a more attractive alternative. If individuals are not to blame, then there is no reason to mandate that they do anything differently. Instead, institutions should be mandated to stop funding journals via subscriptions or APCs and instead invest the money in a modern, more cost-effective infrastructure for text, data and code. Obviously, in this specificity, such a mandate is nearly impossible in most countries. However, there is a mandate that comes very close. It has been dubbed “Plan I” (for infrastructure). In brief, it entails a three-step procedure:

  1. Build on already available standards and guidelines to establish a certification process for a sustainable scholarly infrastructure
  2. Funders require institutional certification before reviewing grant applications
  3. Institutions use subscription funds to implement infrastructure for certification

Many or most funding agencies already have (largely unenforced) infrastructure requirements, so step one is halfway done already. Step two is just the required enforcement step, and step three will follow out of necessity, as few public institutions will have the funds available to implement the certification quickly. If deadlines were short and funders recommended using subscription/APC funds for the implementation, the funds could be shifted rapidly from legacy publishing to service providing.

In fact, this system is already working for some sub-disciplines; it just needs to be expanded. I was able to observe how effective it is at my own university: before considering applications for the next-generation genome sequencing machines needed by our biology and medicine departments, the DFG requires applicants to certify that they work at an institution with a so-called ‘core facility’ to handle the massive amounts of data generated by these machines (the equivalent of point 2 above). The DFG has a very detailed list of requirements for such facilities in terms of hardware and staffing (equivalent to point 1 above). There is now a high-level task force within the two departments to find/shift funds and staff (point 3 above) to create four permanent positions and implement the computational infrastructure before a single line of an application is even written. This example shows that the three points outlined above are already being implemented around the world by many funding agencies and merely have to be expanded to cover all fields of scholarship. It was witnessing this sudden flow of funds and creation of positions (where there is usually a chronic shortage of both!) that prompted the idea for Plan I. Institutions will move heaven and earth to keep research funds flowing. If funders say “jump!”, institutions ask “how high?”. In this case, institutions have both the expertise and the funds (both within their libraries) to quickly and painlessly implement these modern technologies – it should be in the self-interest of any funding agency to help them set the correct priorities.

Such funder requirements would tackle all three main infrastructure problems head on: they would promote the reliability of science by eliminating journals, and with them journal rank, which rewards unreliable science and punishes reliable science. They would approach the affordability problem by introducing open-standards-based competition and substitutability into a largely monopoly-based market. In fact, the European Commission’s Directorate General for Competition has explicitly suggested such measures for initiatives such as EOSC and Plan S. Finally, they would bring many new functionalities not only to our text-based narratives, but also to our audio and visual narratives and, most needed, provide stable and sustainable infrastructure for research data and code.

Oh, and of course, the text-based narratives, interactively combined with our data and code (e.g., via living figures), would be publicly accessible and machine readable for content mining, as an added side-benefit.

Posted on October 14, 2019 at 14:03 2 Comments
Oct02

Is Open Access headed for a cost explosion?

In: science politics • Tags: costs, open access, publishing

By now, it is public knowledge that subscription prices for scholarly journals have been rising beyond inflation for decades (i.e., the serials crisis):

[Figure: the serials crisis – year-over-year percentage increases in library subscription expenditures]

A superficially very similar graph was recently published for APC price increases:

[Figure: APC increases over time. Source: https://doi.org/10.18352/lq.10280]

At first glance, both figures seem to indicate a linear increase in costs over time for both business models. However, the situation is more complicated than that. For one, the Y-axis of the subscription graph indicates percent increase per year, so the scale is not linear when one plots the numbers in actual currency. Moreover, the subscription graph plots total cost to libraries, i.e., the increase in the number of publications of about ~3% year over year is included in this figure. In other words, if one divides the yearly subscription fees by the number of articles published in that year, one arrives at a figure of about US$4-5k per article.

Remarkably, and this is crucially important here, this US$4-5k number has remained fairly constant since the 1990s!

The per-article APCs, in contrast, are not constant; they increase. The number of articles we publish also increases, by about 3% every year. This means that in an APC-OA world, total spending on publishing seems likely to increase exponentially, as both the number of articles and the price per article increase.

In other words, to really compare the two curves above, one needs to run a small model. The output of the model would be the expected total cost to the tax-payer in a subscription world vs. the expected cost in an APC-OA world. The assumptions of the model would be extrapolated growth curves under a subscription-only scenario and an APC-OA-only scenario, with price increases from the past extrapolated into the future, plus an underlying ~3% growth in papers per year (need to check the exact value). Subscription article prices would remain fixed at, say, US$4,500, and the starting APC could be the last value in the APC graph above, say US$1,800. With these assumptions, it seems to me that it should just be a matter of time until the total price of publishing APC-OA overtakes subscription pricing. How soon would that be?

Who could help me plot such a comparison?
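
Something like the following minimal sketch (in Python) is what I have in mind. All numbers are assumptions rather than data: the fixed US$4,500 per-article subscription cost and the US$1,800 starting APC are the values suggested above, the ~3% article growth is the figure I still need to check, the 5% annual APC increase is merely a placeholder for the rate one would extrapolate from the APC graph, and the starting article volume is only an order of magnitude.

```python
# Minimal sketch of the subscription vs. APC-OA comparison described above.
# All inputs are assumptions; the APC growth rate in particular is a placeholder
# that should be replaced by the rate extrapolated from the APC data
# (https://doi.org/10.18352/lq.10280).

SUB_PER_ARTICLE = 4_500     # US$ per article under subscriptions, held constant
APC_START = 1_800           # US$, assumed starting APC
ARTICLE_GROWTH = 0.03       # ~3% more articles published every year
APC_GROWTH = 0.05           # placeholder: assumed annual APC price increase
ARTICLES_START = 2_000_000  # rough global article volume, order of magnitude only

def yearly_totals(years=40):
    """Yield (year, total subscription spend, total APC spend) in US$."""
    articles, apc = ARTICLES_START, APC_START
    for year in range(years + 1):
        yield year, articles * SUB_PER_ARTICLE, articles * apc
        articles *= 1 + ARTICLE_GROWTH
        apc *= 1 + APC_GROWTH

for year, sub_total, apc_total in yearly_totals():
    print(f"year {year:2d}: subscriptions ${sub_total/1e9:6.1f}bn   APC-OA ${apc_total/1e9:6.1f}bn")
    if apc_total >= sub_total:
        print(f"APC-OA spending overtakes subscription spending after ~{year} years")
        break
```

With these placeholder numbers, the crossover comes after roughly two decades. Note that because the per-article subscription price is assumed constant, the crossover year depends only on how fast APCs rise; the ~3% article growth determines how steeply total spending climbs on both curves.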

P.S.: Just to pre-empt – in the 20 years covered by the subscription graph, per-article prices ought to have dropped dramatically, so there is nothing positive in these prices staying constant.

Posted on October 2, 2019 at 23:58 2 Comments
May31

Improved Plan S principles raise hope of more effective policies soon

In: science politics • Tags: infrastructure, Plan I, Plan S

Yesterday, cOAlition S published their updated principles and implementation guidelines for #PlanS, together with the rationale behind the update. This is a very welcome effort and evidence of the increasing awareness among funders of their potential leverage in infrastructure modernization, at a time when institutions appear to have abandoned their faculty completely.

These policies would have been a much-needed initiative about eight years ago, when there was still a problem of access to the scholarly literature, when Unpaywall didn’t exist and Sci-Hub had just launched. Today, we have so many ways to access our literature that these policies seem more like beating a dead horse. From this perspective, Plan S targets the wrong actors (individuals rather than institutions) to achieve a minor goal (Open Access), while our infrastructure rewards unreliable research, costs ten times too much and lacks crucial functionalities. The three components of the scholarly infrastructure emergency (reliability, affordability and functionality; RAF) remain untouched by Plan S, while a (today) minor problem receives more attention than it deserves. In a way, Plan S is more like a band-aid for a small cut on a hand while the large, malignant tumor remains untreated.

It is in the power of cOAlition S funders to set policies that would tackle the RAF tumor/emergency and solve the minor access problem as an added benefit: require institutions to provide modern scholarly infrastructure before any research funds can flow. Given the overall aim and language of Plan S, one can remain optimistic that cOAlition S will deliver such more adequately targeted, modern and hence effective policies soon. Perhaps we will be able to read about a “Plan I” (for infrastructure) in the near future?

Posted on May 31, 2019 at 10:55 Comments Off on Improved Plan S principles raise hope of more effective policies soon
May22

Unpersuadables: When scientists dismiss science for political reasons

In: science politics • Tags: evidence-resistance, questionable research practices, unpersuadables

Scientists are used to vested interests disputing scientific claims. Tobacco corporations have tried to discredit the science linking smoking to lung cancer, creationists keep raising the same long-debunked objections against evolution, climate deniers claim the earth is cooling, anti-vaxxers believe the MMR vaccine causes autism, and homeopaths delude themselves that a little drop of nothing has magical healing powers. No amount of evidence will convince such “unpersuadables”.

What receives less attention, though, is what may be a more severe problem for science, namely the dismissal of science by scientists – unpersuadable scientists.

Documenting this phenomenon is difficult, because the dismissal of science is only rarely uttered in writing or in public. One would like to hope that this is an indication that such behavior is rare. I am aware of only two documented instances. One is the now infamous blog post by Brian Wansink, the other the public statements by decision-makers at a German conference documented elsewhere. Recently, I have witnessed a third instance.

At a dinner table with about ten participants, all academics, the discussion turned to the topic of ‘quality’ academic journals and journal rank. When I referenced the data showing that higher journal rank tends to be associated with lower experimental reliability, several individuals mentioned that they found these results hard to believe. When I asked about data to the contrary that might be the basis for their hesitation, the participants only emphasized that they had no other data, just their “intuition” and “gut feeling”. When I asked what they do when their own experiments yield data that go against their intuition or gut feeling, one professor exclaimed: “I tell the postdoc to do the experiment again, and a third time if need be!”. When I expressed my shock at such practices, the two most senior professors, one of whom was once a university president and both members of a medical faculty, emphatically accused me of being dogmatic for giving primacy to scientific evidence over intuitions or gut feelings.

Recent evidence points towards published scientific results, at least in some fields, being far less reliable than one would expect. If it were common that the reliability of science hinged on postdocs standing up for their data against the gut feeling of the person who will write their letters of recommendation and/or extend their contract, we may have to brace ourselves for more bad news coming from the reproducibility projects being carried out right now.

Wansink was trained in marketing and had no idea about science. His lack of training and his incompetence in science may be an excuse for his behavior. These two individuals, however, graduated from medical school, have decades of research and teaching experience under their belts, and one of them even complained that “most of the authors of the manuscripts I review or edit have no clue about statistics”. Surely these individuals recognize questionable research practices when they see them? Nevertheless, similar to Wansink, they wouldn’t take a “failed” experiment for an answer and, similar to the decision-makers at the meeting in Germany in 2015, they would put their experience before peer-reviewed evidence.

If scientific training doesn’t immunize individuals against unscientific thinking and questionable research practices, how can we select a future cadre of decision-makers in science who do not put themselves before science and who will implement evidence-based policies, instead of power-brokering their way to new positions? There is a recent article on “intellectual humility” – maybe this is the way to go?

P.S.: There are more instances of scientists publicly dismissing evidence that contradicts their own beliefs: Zimbardo and Bargh spring to mind, and I’ll look for others.

Posted on May 22, 2019 at 15:20 3 Comments
May02

New OSTP director caught lying in interview

In: science politics • Tags: Droegemeier, OSTP, politics

With a delay of about two years, US President Donald Trump appointed Kelvin Droegemeier as director of the White House Office of Science and Technology Policy (OSTP) in February 2019. In a recent interview, Dr. Droegemeier made the following broad, categorical statement in response to a Plan S question:

One of the things this government will not do is to tell researchers where they have to publish their papers. That is absolutely up to the scholar who’s doing the publication. There’s just no question about that.

[…]

This goes back to the president’s fundamental philosophy of let’s not shackle people, let’s let them be free to choose where they publish. Let’s not put any administrative constraints on them.

As Hanlon’s razor requires one to rule out incompetence before attributing malice to any action, one needs to assess Dr. Droegemeier’s competence with regard to such sweeping policy statements. His Wikipedia entry lists numerous political appointments to various policy posts since 2004. It is thus fair to assume that Dr. Droegemeier has a fair bit of experience and expertise with regard to the long-standing policies of the US government. Moreover, previous director Holdren and several other scientists have spoken out in favor of Dr. Droegemeier’s appointment. Attributing any false statement to incompetence would thus seem unlikely, even if he was appointed by Trump.

It is thus straightforward to conclude that the above statement was a lie with the purpose of scoring political points, as his statements are very easy to verify. For instance, one can look at the job advertisements of a federal agency such as the NIH to see if they contain any language implying that the government may “tell researchers where they have to publish their papers”. Such statements are, of course, easy enough to find. Here are two examples:

This ad for a postdoctoral position in the NIAID/NIH Translational Allergic Immunopathy Unit lists among the expected qualifications “a track record of publication in peer-reviewed journals”. Clearly, you are not welcome at the NIH (“this government”) if you haven’t published in these particular venues.

This ad for a postdoctoral position at the NCI/NIH cancer center goes even further. It states the explicit expectation that the candidate, once hired, continues to publish in such journals, as the candidate “is anticipated to complete this study with one or more high-impact publications in peer review journals.”

Clearly, one of the things this US government is doing is precisely telling researchers, in no uncertain terms, where they have to publish their papers if they want to be or stay hired by this government – in clear contradiction to what Dr. Droegemeier stated in the interview. Surely a man with such ample experience in science policy is aware of the policies of the US’s largest scientific agency?

Posted on May 2, 2019 at 12:31 Comments Off on New OSTP director caught lying in interview
Mar29

New England Journal of Medicine – and you thought Nature was expensive?

In: science politics • Tags: publishers

The New England Journal of Medicine has come out strongly against Open Access. Apparently, this journal does not value access to medical information very highly. This lack of valuation could be due to several reasons. For one, the NEJM leads the medical publishing industry in retractions:

[Figure: retraction rates among medical journals, with the NEJM leading (source linked in the original post)]

From this perspective, it would make sense not to endanger the public with highly unreliable medical information until it has been properly vetted. It could also be that the journal thinks that science (or medicine, in this case) either goes their way or the highway. This interpretation would be supported by their 2016 editorial, in which they branded every scientist who used scientific data they had not collected themselves as “research parasites“. Now there are eponymous awards.

Finally, and perhaps equally likely (the reasons are, of course, not mutually exclusive), it could simply be about money. In their public 2016 tax return (does using their numbers here make me a research parasite, too?), the NEJM lists the following items, all per annum and in US$:

Publication revenue: 103,145,834
Revenue less expenses: 3,896,197
Salaries:

  1. Editor in Chief: 702,324
  2. Vice president, publishing: 472,499
  3. Vice president/General counsel: 417,405
  4. Executive editor: 393,059
  5. Former executive vice president: 383,007
  6. Executive vice president: 380,480
  7. Executive director, global sales: 368,254
  8. Executive deputy editor: 328,461
  9. Deputy editor: 321,468
  10. Vice president, Finance: 321,053

Salary sum: 4,088,010

These numbers have to be put into perspective within the wider publishing ecosystem. In contrast to this blog post, publishing research articles costs serious money. To make a peer-reviewed manuscript public, total costs of several hundred US$/€ accrue. These costs are fairly similar across publishers; see, e.g., this overview. Other publishers confirm these numbers.

According to the Web of Science, the NEJM (2016 Impact Factor: 72.406) published 328 citable items in 2016. Of course, they published many more articles that can in principle be cited (1,304 of those, to be exact), but while all their citations are counted in the numerator of the impact factor (IF) calculation, these 1,304 “missing” articles are not counted in the denominator. This is common practice, and each journal negotiates with Clarivate Analytics, the company that publishes the IF, what gets counted. It is reasonable to assume that only the citable items contain valuable scientific or medical research for which the public would have to pay in an Open Access scenario, while the other articles are opinion or news pieces, either commissioned or written by in-house staff.

Following the reasoning put forth by publishers such as SpringerNature – that Open Access publication charges should provide publishers with the same revenue as current subscriptions – the public would have had to spend approx. US$103,145,834 in 2016 for the 328 research-type articles the NEJM publishes annually.

This would amount to an article processing charge (APC) for NEJM of around US$314,000.

Or, phrased differently, the current business model of NEJM entails the tax-payer paying more than US$300k for each research article in NEJM, which, at the same time:

  • pays its management staff 3-7 times the income of one of its professor-authors
  • for each research-type article, cross-subsidizes about four other news-type or opinion articles, some of which insult scientists
  • pays for the rejection costs of 95% of all submitted articles
  • overpays the actual publishing costs by about 1,200-fold

In comparison, the 30k€ estimated to be paid to Nature-branded journals looks like a downright bargain. When one compares cost per IF point, one sees that for, e.g., Nature itself, an author would pay about 1,000-2,000€ per impact point, while they would have to pay more than US$4,000 per impact point for an NEJM article.
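
For anyone who wants to check the arithmetic, here is a minimal sketch that merely reproduces the figures above from the numbers quoted in this post; the US$/€250 publication-cost value is simply a point within the 200-500 range cited earlier, chosen to match the ~1,200-fold comparison.

```python
# Reproducing the arithmetic behind the figures quoted above;
# all inputs are taken from this post, nothing here is new data.

publication_revenue = 103_145_834  # US$, NEJM publication revenue, 2016 tax return
citable_items = 328                # research-type articles in the 2016 IF denominator
impact_factor = 72.406             # NEJM 2016 Impact Factor
publication_cost = 250             # US$/€, within the 200-500 range cited above

implied_apc = publication_revenue / citable_items
print(f"Implied APC:        ~US${implied_apc:,.0f}")                  # ~US$314,000
print(f"Cost per IF point:  ~US${implied_apc / impact_factor:,.0f}")  # ~US$4,300
print(f"Overpayment factor: ~{implied_apc / publication_cost:,.0f}x") # ~1,250-fold
```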

While both journals lead the industry in terms of unreliable science, one could also, from an author’s perspective, conclude that shoddy science may be less likely to be detected at Nature – definitely another plus if one wants to get published but doesn’t want to get caught.

But then again, at many medical departments and schools, what counts isn’t efficiency or reliability, but raw IF power. In that case, there would be little choice other than to add 300k to the next grant application, should the NEJM ever go APC-OA, despite this recent editorial.

P.S.: Alternatively, NEJM authors could just get a Rolls Royce instead.

Posted on March 29, 2019 at 13:41 2 Comments
Mar01

How publishers keep fooling academics

In: science politics • Tags: costs, price, publishers

Time and time again, academic publishers have managed to create the impression that publishing incurs costs so high that they justify the outrageous prices they charge, be that US$11M p.a. for an Elsevier Big Deal subscription or an article processing charge (APC) of US$5,200 for a Nature Communications article.

This week, yet again, an academic publisher, SpringerNature, reassured its readers that it has huge costs that necessitate the prices it charges. This time, the publisher repeated its testimony from 2004 that it has “high internal costs” amounting to €10,000-30,000 per published article:

Springer Nature estimates that it costs, on average, €10,000–30,000 to publish an article in one of its Nature-branded journals

However, in their 2004 testimony, where they state figures in a similar range, they explain how they arrive at these numbers:

The £30,000 figure was arrived at simply by dividing the annual income of Nature (£30 million) by the number of research papers published (1,000).

This means that what the publishers are referring to isn’t their cost of publishing at all; it is the price that they charge the public for all of their services.

It is well established that the cost of making an article public, with all the bells and whistles that come with an academic article, is between US$/€200-500. This is the item one would reasonably call “publication costs”. Because they are so low, this item cannot be the main reason for the price of a typical Nature-branded article. SpringerNature performs additional services, some of which are somewhat related to the publication process, others not so much.

For instance, journals such as Nature or Science reject about 92% of all submitted articles. Someone needs to read and reject all of these articles. Such “selectivity” is explicitly mentioned as a reason for the high prices. It is important to keep in mind that this expensive selectivity fails to accomplish any increase in quality and is thus completely ineffective. The entire practice is very reminiscent of how potatoes were introduced in France. Nevertheless, the salaries of the employees who reject all these manuscripts are a cost item, effective or not. As it concerns only the articles that are not published, it sounds rather absurd to lump this item in with “publication costs”, even though it is sort of related (non-publication costs? rejection costs?).

Another cost item is the paywall used to prevent non-subscribers from accessing the articles. Such paywalls can be very expensive; the New York Times, for instance, is reported to have spent anywhere between US$25-55M on its paywall. Keeping unauthorized users from reading the articles is thus another cost item – and one, I would argue, that is even less related to publishing.

Finally, there are cost items that are completely and rather uncontroversially unrelated to publishing, such as salaries for management, executives or government relations, as well as other costs such as journalism and news services, the latter explicitly mentioned in the recent article.

All of these cost items together make up the ~€10,000-30,000 currently being paid for an article in the SpringerNature stable, and there is no reason to doubt this price tag. Importantly, peer review is not a cost item, as the reviewers are paid via their academic salaries and not by the publisher. Authors are not paid by the publisher either, so they are not a cost item. The person organizing the peer review is usually one of the people rejecting all those other manuscripts, so 92% of their salary is already covered by the rejection costs.

If the scholarly community accepts this price as reasonable, it needs to be prepared to explain to the tax-payer why it is justified to use public funds to pay a private company such as SpringerNature less than ~€500 to publish a scholarly article in one of its journals, and then an additional ~€29,500 for cost items such as ineffectively rejecting other articles, keeping the ‘published’ articles difficult to access for most ordinary tax-payers, and the salaries of the company’s executives and lobbyists.

Posted on March 1, 2019 at 23:15 4 Comments
Jan17

Providing recommendations for Plan S implementation

In: science politics • Tags: cOAlition S, infrastructure, Plan I, Plan S

Since cOAlition S is asking the community for recommendations on the implementation of their Plan S, I have also chipped in. Their feedback form asks two questions, which I have answered with the replies below. With more than 700 such recommendations already posted, I am not deluding myself that anybody is going to read mine, so, for the record, here are my answers (with links added that I did not include in the form):

Is there anything unclear or are there any issues that have not been addressed by the guidance document?

The document is very clear and I support the principles behind it. The only major issue left unaddressed is the real threat of universal APC-based OA as a potential outcome. This unintended consequence is particularly pernicious, because it would merely change the accessibility of the literature (which currently is not even a major issue, hence the many Big Deal cancellations world-wide), leaving all other factors untouched. A consequence of universal APC-OA is that monetary inequity would be added to a scholarly infrastructure that is already rife with replication issues, other inequities and a dearth of digital functionalities. Moreover, the available evidence suggests that authors’ publishing strategies take prestige and other factors into account more than cost, explaining the observation of already rising APCs. A price cap is de facto unenforceable, as authors will pay any price above the cap if they deem the cost worth the benefit. Here in Germany, it has become routine over the last decade to pay any APC above the 2,000€ cap imposed by the DFG from other sources. Hence, APCs have also risen unimpeded in Germany over the last ten years. A switch away from journal-based evaluations, as intended by DORA, would change authors’ publication strategies only once hardly any evaluations were conducted by journal rank any more – a time point decades in the future, given the currently ubiquitous use of journal rank despite decades of arguments against the practice. Thus, the currently available evidence suggests that a switch to universal APC-based OA, all else remaining equal, would likely lead to the unintended consequence of massively deteriorating the current status quo, in particular at the expense of the most vulnerable scholars and to the benefit of the already successful players. Therefore, rather than pushing access to only the literature (no longer a major problem) at all costs, universal APC-based OA needs to be avoided at all costs.

A minor issue is that Plan S does not address any research output other than text-based narratives. Why is, e.g., research data only mentioned in passing, and code/software even explicitly relegated to “external” repositories? Data and code are not second-class research objects.

Are there other mechanisms or requirements funders should consider to foster full and immediate Open Access of research outputs?

Individual mandates prior to Plan S (e.g., Liège, NIH, etc.) have proven to be effective. Especially when leveraged across large numbers of researchers, they can have a noticeable impact on the accessibility of research publications. Widespread adoption of these policy instruments is also a clear sign of a broader consensus about what good, modern scholarship entails. However, so far, these mandates have not only failed to cover research outputs other than scholarly publications, some of them have also proven difficult to enforce or have contained incentives for APC-based OA (see above). A small change to routine proceedings at most funding agencies today could provide a solution to these problems, prevent unintended consequences and complement Plan S. In support of Plan S, this small change has been called “Plan I” (for infrastructure). The routine proceedings, carried out by most funding agencies today, that would need amending or expanding are the infrastructure requirements the agencies place on recipient institutions. Specific infrastructure requirements are often in place and enforced for, e.g., applications concerning particular (mostly expensive) equipment. General infrastructure requirements (e.g., data repositories, long-term archiving, etc.) are often in place for all grant applications, but are more rarely enforced. Finally, most funding agencies already only consider applications from accredited institutions, which have passed some basic level of infrastructure scrutiny. The amendment or expansion that would have to take place merely extends the enforcement of the infrastructure requirements to all applications and would need to be specific with regard to the type of infrastructure required for all research outputs, i.e., narratives (often text), data and code/software. Thus, Plan I entails requiring institutions to provide their grant recipients with the infrastructure they need to offer full and immediate Open Access to all of their research outputs (and hence comply with the Plan S principles, not just the implementation).

Here is only an abbreviated list of Plan I advantages:

  • (publisher) services become substitutable
  • permanently low costs due to actual competition
  • no author-facing charges
  • desired journal functionalities can be copied
  • if subscription funds are used for implementation, the demise of journals will accelerate journal-independent evaluations
  • cost-neutral solutions for data/code
  • no individual mandates required that may violate the sense of academic freedom
  • technically easy implementation of modern digital properties to all research objects
  • modern sort, filter and discovery tools replace 17th century editorial/journal system
  • implementation of social technology that serves the scholarly community
  • sustainable long-term archiving that becomes catastrophe-proof with distributed technology
  • permanent, legal, public access to all research objects, with licensing under the control of the scholarly community.

Posted on January 17, 2019 at 09:36 Comments Off on Providing recommendations for Plan S implementation
Nov28

Maybe try another kind of mandate?

In: science politics • Tags: funders, infrastructure, mandates, open access, open science

Over the last ten years, scientific funding agencies across the globe have implemented policies which force their grant recipients to behave in a compliant way. For instance, the NIH OA policy mandates that research articles describing research they funded must be available via PubMedCentral within 12 months of publication. Other funders and also some institutions have implemented various policies with similar mandates.

In principle, such mandates are great, not least because they demonstrate the intention of the mandating organization to put the interest of the public over the interests of authors and publishers. They can also be quite effective, to some extent, as the NIH mandate or the one from the University of Liège have shown.

At the same time, such individual mandates are suboptimal for a variety of reasons, e.g.:

  1. In general, mandates are evidence that the system is not working as intended. After all, mandates intend to force people to behave in a way they otherwise would not behave. Mandates are thus no more than stop-gap measures for a badly designed system, instead of measures designed to eliminate the underlying systemic reasons for the undesired behavior.
  2. Funder mandates also seem to be designed to counter-act unintended consequences of competitive grant awards: competitive behavior. To be awarded research grants, what counts are publications, both many and in the right journals. So researchers will make sure no competitor gets any inside information too early and will try to close off as much of their research for as long as possible, including text, data and code. Mandates are designed to counter-act this competitive behavior, which means that on the one hand, funders incentivize one behavior and on the other punish it with a mandate. This is not what one would call clever design.
  3. Depending on the range of behaviors they are intended to control, mandates are also notoriously difficult and tedious to monitor and enforce. For instance, if the mandate concerns depositing a copy of a publication in a repository, manual checks would have to be performed for each grant recipient. This is the reason the NIH has introduced automatic deposition in PMC. If re-use licenses are mandated, they also need to be checked for compliance. If only certain types of journals qualify for compliance, the 30k journals need to be vetted – or at least those where grant recipients have published. Caps on article processing charges (APCs) are essentially impossible to enforce, as no funder has jurisdiction over what private companies can ask for their products, nor the possibility to legally monitor the bank accounts of grant recipients for payments above mandated spending caps. Here in Germany, our funder, the DFG, has had an APC cap in place for more than 10 years now, and grant recipients simply pay any amount exceeding the cap from other sources.
  4. In countries such as Germany, where academic freedom is written into the constitution, such individual mandates are considered an infringement of this basic right. There is currently a lawsuit in Germany, brought by several law professors against their university for mandating the deposit of a copy of all articles in the university’s repository. In such countries, the mandate solution is highly likely to fail.
  5. Mandates, as the name implies, are a form of coercion to make people behave in ways they would not otherwise behave. Besides the bureaucratic effort needed to monitor and enforce compliance, mandates are bound to be met with resistance by those coerced into performing additional work that takes time away from work seen as more pressing or important. There may thus be resistance both to the implementation and to the enforcement of mandates that appear too coercive, reducing their effectiveness.

For about the same time as the individual mandates, if not longer, funders have also provided guidelines for the kind of infrastructure institutions should provide their grant recipients with. In contrast to the individual mandates, these guidelines have not been enforced at all. For instance, the DFG endorses the European Charter for Access to Research Infrastructures and suggests (in more than one document) that institutions provide DFG grant recipients with research infrastructure that includes, e.g., data repositories for access and long-term archiving. To my knowledge, such repositories are far from standard at German institutions. In addition, the DFG is part of an ongoing, nation-wide initiative to strengthen digital infrastructures for text, data and code. As an example, within this initiative we have created guidelines for how research institutions should support the creation and use of scientific code and software. However, to this day, there is no mechanism in place to certify the compliance of funded institutions with these documents.

In light of these aspects, would it not be wise to enforce these guidelines to such an extent that using these research infrastructures would save researchers effort and make them compliant with the individual mandates at the same time? In other words, could funders not save a lot of time and energy by requiring institutions to provide research infrastructure that enables their grant recipients to effortlessly become compliant with individual mandates? In fact, such institutional ‘mandates’ would make the desired behavior also the most time- and effort-saving behavior, perhaps making individual mandates redundant.

Instead of monitoring individual grant recipients or journals or articles, funders would only have to implement, e.g., a certification procedure. Only applications from certified institutions would qualify for research grants. Such strict requirements are rather commonplace as, e.g., in many countries only accredited institutions qualify. Moreover, on top of such general requirements, there can be very specific infrastructure requirements for certain projects, such as a core facility for certain high-throughput experiments. In this case, the specifications can even extend to certain research and technical staff and whether or not the core facility needs permanent staffing or temporary positions. Thus, it seems, such a certification procedure would be a rather small step for funders already set up to monitor institutions for their infrastructure capabilities.

If groups of funders, such as cOAlition S, coordinated their technical requirements as they have been coordinating their individual mandates, the resulting infrastructure requirements would include the FAIR principles, which would lead to a decentralized, interoperable infrastructure under the governance of the scientific community. As this infrastructure is intended to replace current subscription publishing with a platform that integrates our text-based narratives with our data and code, it would be straightforward for the funders to suggest that an obvious source of funds for the required infrastructure would be subscriptions. As most scholarly articles are available without subscriptions anyway, and implementing the infrastructure is, on average, much cheaper than subscriptions, the implementation should be possible without disruption and with considerable cost reductions for the institutions. If an institution considers its library to be the traditional place where the output of scholars is curated, made accessible and archived, then there would not even have to be a redirection of funds from library subscriptions to different infrastructure units – the money would stay within the libraries. But of course, institutions would in principle remain free to source the funds any way they see fit.

Libraries themselves would not only see a massive upgrade, as they would now be one of the most central infrastructure units within each institution; they would also rid themselves of the loathsome negotiations with the parasitic publishers – a task, librarians tell me, that no librarian loves. Through their media expertise and their experience with direct user contact, libraries would also be ideally placed to handle the implementation of the infrastructure and the training of users.

Faculty would then enjoy never having to worry about their data or their code again, as their institutions would now have an infrastructure that automatically takes care of these outputs. Inasmuch as institutions were to cancel subscriptions, there would also be no alternative venue, free or paid, to the infrastructure provided by the institutions, as the cash-strapped publishers would have to close down their journals. Moreover, the integration of authoring systems with scientific data and code makes drafting manuscripts much easier, and publication/submission is just a single click, such that any faculty who value their time will use this system simply because it is superior to the antiquated way we publish today. Faculty as readers will also use this system, as it comes with a modern, customizable sort, filter and discovery system, vastly surpassing any filtering the ancient journals could ever accomplish.

Taken together, such a certification process would be only a small step for funders already inclined to push harder to make the research they fund accessible, would save institutions a lot of money every year, would be welcomed by libraries, and would be a time saver for faculty, who would not have to be forced to use this conveniently invisible infrastructure.

Open standards underlying the infrastructure ensure a lively market of service providers, as the standards make the services truly substitutable: if an institution is not satisfied with the service of company A, it can choose company B for the next contract, ensuring sufficient competition to keep prices down permanently. For this reason, objections to such a certification process can only come from one group of stakeholders: the legacy publishers, who, faced with actual competition, will no longer be able to enjoy their huge profit margins, while all other stakeholders enjoy their much-improved situation all around.

Posted on November 28, 2018 at 11:00 4 Comments