bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Why do academic institutions seem stuck in 1995?

In: science politics • Tags: competition, infrastructure, neoliberalism

Until the late 1980s or early 1990s, academic institutions such as universities and research institutes were at the forefront of developing and implementing digital technology. After email, they went on to develop Gopher, TCP/IP, HTTP and the NCSA Mosaic browser, and experimented with MBone.

Since then, infrastructure at most academic institutions has moved beyond email and web browsers only at a glacial pace. Compared with the years and decades before the early 1990s, the last 30 years appear frozen in time, with virtually no modernization of our infrastructure beyond bandwidth.

Functionalities we take for granted outside of academia, such as automated data sharing, collaborative code development and authoring, or social media, are supported by virtually no academic institution on a broad, international scale comparable to that of email or the web.

As the technology is commercially available, and as more than enough money is still flowing into obsolete infrastructure such as journal subscriptions, the conclusion becomes inescapable that a social obstacle is preventing infrastructure modernization.

I was asked on Twitter today what this social obstacle might be and how it could be overcome.

Here is a short summary of my answers to these two questions:

The first is a difficult question, and I can only offer hypotheses. A recent comment by Antonio Loprieno, made when we were on a panel of the German Wissenschaftsrat, seemed to confirm part of one hypothesis: citing a recent example from a university in Germany, he said that today institutions seem more willing to invest in looking like they are performing better than in actually performing better. Show over substance: his example was that of a university hiring two FTEs to massage the data sent to ranking organizations, rather than changing the way the university operates.

From this perspective, until the early 1990s, when universities were first told to compete, they cooperated and had the interests of their faculty and students in mind. This entailed that if a technology stood to improve how the university could fulfill its mission, and money was available to implement it, it would be implemented. The internet was such a technology, and according to the book “How Not to Network a Nation”, institutions had to cooperate to make it a reality. Apparently, academic institutions saw the potential, found the money and cooperated; otherwise the efforts would have failed, just as those in the Soviet Union failed at the time.

https://mitpress.mit.edu/books/how-not-network-nation

Ironically enough, roughly since the fall of the Iron Curtain we have been mimicking the failed Soviet approach: analogous to Soviet Russia's failure to develop its own network through competition between institutions, our competing institutions now focus more on rankings than on actually improving the way they pursue their mission.

As for the second question, of how to overcome this social obstacle, this is easier to answer and there are a number of options to choose from.

  1. Obviously, one could revert some of the fatal decisions made in the early 1990s (like getting rid of New Public Management and similarly embarrassing inventions).
  2. However, #1 seems like the most difficult of the options. So, almost a decade ago now, I thought one only needed to convince libraries that they are ideally suited to implement the new infrastructure, as they have the expertise and, of course, control the money. After nearly a decade of interacting with both skeptical and enthusiastic libraries, I can now see why even the enthusiastic ones are hesitant: they have similarly good reasons not to act as my researcher colleagues do.
  3. So when I saw how deans of several departments here at my institution were scrambling to find positions and money for infrastructure requirements of the DFG, I had the idea for Plan I (for infrastructure): the infrastructure mandates that are already in place for some aspects of funding need to be expanded to all aspects. Institutions dependent on research overhead will do anything to meet the demands of the funders, as I have experienced first-hand. I’ve talked to funders like the DFG, NSF, NIH and ERC (Bourguignon) and none of them saw any obstacles, but I have yet to see adoption or even widespread discussion.

So given that last experience, and those of the last 13 years or so in which I have been involved in these topics, I expect funders to soon come up with a reason, similar to those of my researcher colleagues and the libraries, why they do of course support infrastructure modernization but, unfortunately, cannot do anything about it other than keep hammering on the least powerful (e.g., with individual mandates) while the main obstacles to modernization remain in place.

Posted on September 25, 2020 at 11:59 5 Comments

Who’s responsible for the lack of action?

In: science politics • Tags: change, infrastructure, responsibility

There are regular discussions among academics as to who should be the prime mover in infrastructure reform. Some point to the publishers to finally change their business model. Others claim that researchers need to vote with their feet and change how they publish. Still others find that libraries should just stop subscribing to journals and use the saved money for a modern publishing system. Finally, and most recently, people have been urging funding agencies to use their power to attach strings to their grant funds and force change where none has occurred.

I was recently interviewed by the Wissenschaftsrat, a government-appointed science policy forum, and one of their questions was also:

How can the lock-in-effect of prestigious titles be avoided or mitigated and who do you see as responsible for initiating such changes?

I replied:

We, the scientific community and all institutions supporting them, are all responsible for change.

The more relevant question is: who is in the strategically best position to break the lock-in-effect and initiate change?

  • Researchers decide whether they evaluate colleagues on glamour proxies that deteriorate the reliability of science by valuing “novelty” above all else, or whether they stand up and demand from their institutions an infrastructure that supports reliability, saves time and provides an optimized workflow in which they can focus on science again, instead of being constantly side-tracked by the technical minutiae of reviews, meetings, submissions, etc.
  • Libraries decide how to spend their ~10b€ annually: on subscriptions/APCs in opaque and unaccountable negotiations, exempt from spending rules, or on a modern infrastructure without antiquated journals and with a thriving, innovative market that allows them to choose among the lowest responsible bidders?
  • Funders decide whether to support scientists at institutions that fund monopolists and reward unreliable science, or scientists at institutions which spend their infrastructure and research funds in a fiscally responsible way to provide an infrastructure that preserves not only text, but data and code as well, ensuring the reliability and veracity of the results.

Right now, it seems only a few realize their responsibility, and even fewer are considering their strategic position for change. Until now, many seem to think researchers need to change, but researchers can reasonably claim that they cannot risk their own or their co-workers' careers. For many years, some of us have tried to convince libraries to spend their funds more responsibly, but they can reasonably claim that no library can make such a change alone, nor can they divert their funds without faculty support. I have yet to hear a similarly convincing argument why funders need to coerce individual researchers rather than their institutions, but I am sure they, too, will soon have an analogously reasonable claim as to why they also need to make things worse while intending to make things better.

The remark about funders, of course, refers to current initiatives (such as Plan S and others) to force researchers to publish only in certain, compliant venues. This is, rather obviously, a suboptimal approach.

Of course, the road to hell is paved with good intentions, and all players here are demonstrably well-intended. There is only one group of participants that is not well-intended, and it does not need to be: academic publishers.

Publishers have absolutely no obligation or responsibility for change: their sole purpose, their fiduciary duty even, in cases where the publishers are publicly traded, is to maximize profits in any legal way they see fit. Following market rules and capitalist logic, publishers today excel at avoiding competition, reducing their costs and increasing their revenue, year in, year out, whether there is a global financial crisis or not. Paragons of capitalist work ethic, the most profitable of them have sported margins of 35-40% for at least the last decade or two. It is clearly not their fault if we academics create a perfect scenario for capitalist success at the expense of the public. Publishers only milk the academic cash-cow for all it is worth, and we have proven to be such sheepish hosts that the parasites do not even have to hide their disdain for us suckers any more.

Posted on September 3, 2020 at 15:27 1 Comment

Tagging and knocking out FoxP with CRISPR/Cas9

In: own data, science • Tags: Buridan, Drosophila, FoxP, operant

The FoxP gene family comprises a set of transcription factors that gained fame because of their involvement in the acquisition of speech and language. While early hypotheses circulated about FoxP being a ‘learning gene’, a simultaneous “motor hypothesis” stipulated that the gene may instead be a motor learning gene, involved in different kinds of motor learning, one of which is speech acquisition. Work in animals as diverse as mice and fruit flies over the last 20 years has firmly established at least some of the FoxP genes as crucial for motor learning tasks that do not involve language.

Now, our graduate student Ottavia Palazzo (with invaluable support and training from Mathias Rass from the neighboring Schneuwly lab) has generated and thoroughly characterized a set of new transgenic fly lines to help us better understand the role of FoxP in the form of motor learning we are studying, operant self-learning.

Using CRISPR/Cas9 with homology-directed repair, she tagged the FoxP gene in two different ways. In one line, she tagged the gene such that we can express a fluorescent protein in all neurons expressing any of the three different isoforms this gene can give rise to. In the other line, she tagged only the one isoform that we think is associated with operant self-learning.
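
For readers unfamiliar with CRISPR/Cas9 targeting: Cas9 only cuts next to a short protospacer-adjacent motif (PAM; "NGG" for the commonly used SpCas9), so candidate guide-RNA sites are found by scanning the sequence for that motif. The toy scanner below is only an illustration of this constraint, not the design pipeline or constructs actually used for the FoxP lines, and the example sequence is made up:

```python
# Toy scanner for SpCas9 target sites on the + strand of a DNA sequence.
# Cas9 only cuts next to an "NGG" PAM, so candidate guide RNAs are the
# 20 nucleotides immediately 5' of such a motif.

def find_spcas9_sites(seq: str, guide_len: int = 20):
    """Return (position, guide, pam) tuples for NGG PAMs on the + strand."""
    seq = seq.upper()
    sites = []
    for i in range(guide_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":  # "N" = any nucleotide, followed by GG
            sites.append((i - guide_len, seq[i - guide_len:i], pam))
    return sites

# Made-up example sequence, not FoxP:
example = "ATGCCGTACGGATCCTTGGAAACCGGTTTAGGCATG"
for pos, guide, pam in find_spcas9_sites(example):
    print(pos, guide, pam)
```

A real design tool would also scan the reverse strand and score every candidate guide against the whole genome for off-target matches; the sketch only shows why a suitable PAM must sit near the desired insertion site.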

This one isoform is expressed throughout the adult brain of the fly, but not in the mushroom bodies, where a few previous reports had detected it, using a technique which can sometimes lead to incorrect expression patterns. In fact, because three previous studies reported three different expression patterns for the same gene, we chose this particular tagging strategy to avoid the problems associated with this technique. Ottavia found about 1200 neurons expressing the isoform we were interested in:

Drosophila brain with FoxP expression pattern
FoxP isoform B (green) and neuropil counter-staining (red)

Contrasting the expression of this isoform with the expression of the other two isoforms, revealed an additional ~600 neurons which express one or both of them (marked in blue):

Here, too, there is not a hint of any expression in the mushroom bodies. Given the previous reports, we looked particularly closely, not only in adults but also in larvae, and could not find any expression. We also used an antibody (which we verified against mutants and our tagged lines to be highly specific) and found no expression in mushroom bodies. Our results with two different genomically tagged lines and the antibody corroborate earlier work with a differently tagged line and a reporter-gene approach, which also failed to detect expression in the mushroom bodies. Given the multitude of different approaches all converging on identical expression patterns, it now seems clear that mushroom body Kenyon cells do not express FoxP above the detection threshold of these techniques. If the levels of expression that we do detect are necessary for the physiological function of FoxP, any expression below these thresholds is likely physiologically irrelevant.

To see if there are any general problems with these fly lines (which may be problematic for the subsequent learning experiments), Ottavia tested the flies in Buridan’s paradigm. In case you haven’t heard about this experiment, here’s a short video I made 10 years ago:

Buridan's Paradigm
Buridan's Paradigm


Not unexpectedly, she found that the insertions she had made disrupted FoxP expression and had substantial effects on walking and landmark fixation:

Not only do the flies with homozygous insertions walk more slowly, they also fixate the stripes less and walk less straight (meander).
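
For context, the walking metrics in Buridan's paradigm are computed from the fly's 2-D trajectory between the two stripes. The sketch below is a toy illustration under my own simplifying assumptions, not the lab's actual analysis code: walking speed as path length per unit time, and a simple meander index that compares the distance actually walked with the straight-line displacement, so a perfectly straight walk scores 1.0 and detours score higher.

```python
import math

def speed_and_meander(xy, dt=1.0):
    """xy: list of (x, y) positions sampled every dt seconds.

    Returns (speed, meander): speed is path length per unit time;
    meander is path length divided by straight-line displacement,
    so 1.0 means a perfectly straight walk.
    """
    path = sum(math.dist(a, b) for a, b in zip(xy, xy[1:]))
    displacement = math.dist(xy[0], xy[-1])
    duration = dt * (len(xy) - 1)
    speed = path / duration
    meander = path / displacement if displacement else float("inf")
    return speed, meander

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
zigzag = [(0, 0), (1, 1), (2, 0), (3, 1)]
print(speed_and_meander(straight))  # (1.0, 1.0)
print(speed_and_meander(zigzag))    # same net progress, but meander > 1
```

Published Buridan analyses define meander via angular deviation per distance walked; the path/displacement ratio used here is a simplified stand-in that captures the same intuition.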

She also tested one of the original FoxP mutations, the widely used 3955 allele, and found similar defects:

Curiously, one of the studies that had (erroneously?) detected FoxP in the mushroom bodies failed to detect this rather conspicuous (~20% or more) walking defect in these mutant flies, despite testing for it. At the time, I had already noticed that their control experiments lacked the sophistication to capture the motor defects I thought were most critical for their experiments, but apparently they were not even sensitive enough to detect such major defects. In summary, in this (Science!) paper, the authors detected FoxP where there apparently isn’t any, but failed to detect a severe motor phenotype, despite looking for it.

Ottavia also created a CRISPR line to knock FoxP out when and where she wanted. In one of her experiments, she knocked the gene out in early pupae or adult flies and found that this left walking behavior and landmark fixation of the flies unaffected. In other words, for these behaviors, FoxP is only important during development.

Similarly, knocking FoxP out of dorsal cluster neurons (important for stripe fixation and expressing FoxP) or mushroom body Kenyon cells (important for walking and stripe fixation, but not expressing FoxP), also had no effect:

On the other hand, when she excised the FoxP gene from motorneurons (e.g., with the D42 driver) or from neurons in the protocerebral bridge (with the cmpy driver), she saw almost the same effects as if she had deleted the gene constitutively:

The next steps in this line of work will be to see which of these manipulated lines fly well enough for the torque learning experiments, and to see which neurons need FoxP for operant self-learning.

CRISPR/Cas9 was a new technique for our lab and it worked exceedingly well, both for tagging the gene and for knocking it out. In our hands, it worked with high efficiency, reliably and, as far as we can tell, with no off-target effects.

The results here also contradict some prominent publications in our field, so it will be interesting to see what, if anything, happens to reconcile the findings.

Of course, as we try to practice open science, all the raw data for this work is publicly accessible with a liberal re-use license.

Posted on July 16, 2020 at 11:29 Comments Off on Tagging and knocking out FoxP with CRISPR/Cas9

The ultimate Open Access timeline

In: science politics • Tags: fun, funders, open access, publishing, timeline
NIH, 1961: Journals are slow and cumbersome, why don’t we experiment with circulating preprints among peers to improve on the way we do science (Information Exchange Groups)?
Publishers, 1967: You have got to be kidding. Nobody cares about improving science, stop it, do it now.
Physicists, 1991: Hey, look, there is this cool online thing where we circulate preprints nearly for free and everyone can read them (arxiv).
Libraries: Yay, we can pay for big subscription deals!
Publishers: Crickets (counting money)
Scholars, 1999: We can actually use that cheap online technology on a broader scale to ensure sound medical information for the world! (E-Biomed)
Publishers: No way José
Societies: But subscriptions are our money!
Scholars, 2002: Actually, this cool online thing is how we should be doing it not only in physics (BOAI).
Publishers: Let’s replace paywalls for reading with paywalls for publishing (BMC, PLoS)
Also publishers: Making money with bulk publishing is so gross! Let’s make money with bulk publishing ourselves (Megajournals, hand-me-down journals).
Libraries: Can we justify our existence by just paying for stuff?
NIH, 2005: Pretty please, if you have one of our grants, could you put a copy into our PMC?
Scholars: Huh?
Publishers, 2007: Open science is junk science (PRISM/Dezenhall)
NIH, 2008: If you take money from us, you have to make the papers open (OA mandate)
Publishers: But nobody can distinguish our copy from the one of the authors! We need to have exclusive money-making embargos on our papers or we lose our 36% profit margin!
NIH: Mkay. On top, we’ll make tax payers pay for the open part, too (PMC). Wouldn’t dream of risking your profit. Like that?
Publishers, 2011: Let’s use all that money we got from the libraries to pay politicians so they sponsor a bill that makes all this NIH ‘open’ BS illegal! (RWA)
Biologists, 2013: Hey, look, it only took us 52 years to recover from publishers shutting down our Information Exchange Groups and now we too can do what physicists have been doing for the last 22 years! (biorxiv)
Publishers: We can actually do the cheap publishing, too – with peer-review on top! (F1000Research, ScienceOpen)
Scholars: Does publishing there get me a job?
Libraries: Can we pay for cheap publishing, too?
Publishers, 2017: We can actually create a market where we all have to compete with our services such that prices stay down and the competition drives innovation! (ORC)
Libraries: In case we can’t pay for it any more, can you funders do that?
Funders: Oh, sure this is cool, we want to have those! (Wellcome Open Research, Gates Open Research, etc.)
Scholars: Open what?
States, countries et al.: This is really getting ridiculous. We really have to stop these rip-off subscriptions and show the publishers who’s their daddy. My way or the highway!

OK, we’ll pay them even more if only they make the papers finally open – next year or so. Fine, some supra-inflation price increases are only fair. And you know what? Surveillance capitalism is all the rage right now, it’s totally cool to hand over usage data from readers to publishers, ok? That’s how things are these days, get over it.
EU funders, 2018: If you take our money, you have to make your papers open, but no money-making embargo allowed this time! Also, no more hybrid double dipping! (Plan S)
Publishers: Hmm, surely nobody is going to notice if we just add an “X” to the title of our hybrid journal, pretend it’s now two journals and keep double dipping? (mirror journals)
Scholars: But by threat of burger-flipping we have no choice but to salami-slice our discoveries into tiny morsels that need to be sexed up beyond recognition so the Nature editors don’t see how incremental our work is. So because of this academic freedom we really won’t make our papers open.
Libraries: Should we pay for mirror journals?
Societies: Now you are really trying to kill societies! Don’t you love what we do? Isn’t our mission to the general public worth millions and millions of library money? We need to stop this silly ‘open’ trend from re-surfacing in the US and tell Trump to Make American Science Great Again (AAP letter). It worked for E-Biomed in ’99 and it’s going to work again.
Libraries: There has got to be a way for us to pay for something in there! Yes, here’s the DEAL: we just got some power back by finally being able (thanks sci-hub!) to cancel the 30-year-old Big Deal subscriptions, so with this new-found power let’s hand our cojones right back to the publishers on a silver platter by making Big Deal publishing subscriptions with them that no sci-hub can ever liberate us from!
No-deal scholars: OK, I can publish for free in a journal with long titles, or I can take a loan and publish in a journal that gives me tenure? Tough choice! Thanks for nothing, OA wackaloons!
Publishers, 2020: Yesss, my prrrrrecioussss – how about paying us some money just for not rejecting your paper right away? (EPCs)
Libraries: Can we pay those, too?

So this is essentially what happened instead of us sitting down and thinking about how we could spend our money in the most technologically savvy way, to the benefit of science, scholars and society. A generation later, we are roughly US$300 billion poorer and none the wiser, it seems.

For a serious timeline, or for looking up the references in this one, see the Open Access Directory.

Posted on March 3, 2020 at 15:04 1 Comment

Elsevier now officially a “predatory” publisher

In: science politics • Tags: Elsevier, predatory publishing, publishing

For a number of years now, publishers who expect to lose revenue in a transition to Open Access have been spreading fear about journals that claim to perform peer review on submitted manuscripts but then collect a publishing fee of a few hundred dollars (about 5-10% of what these legacy publishers charge) without performing any peer review at all. Identifying such journals, in order to study whether they have any actual detrimental effect on scholarship beyond the claims of these commercially interested publishers, has proven difficult, as clearly defining the properties of such so-called “predatory” publishers is problematic. Today, a new, consensus definition of a “predatory” publisher or journal was published:

Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices

Given that such a definition has proven so difficult over the years, let’s go through each point and see if a legacy publisher such as, say, Elsevier fits that definition:

1. entities that prioritize self-interest at the expense of scholarship

Elsevier consistently prioritizes mega-profits over scholarship. There are too many examples to list (we would need a new server for that), so here is some more.

Check

2. false or misleading information

Elsevier published nine fake journals. And, of course, Dezenhall/PRISM and many other FUD campaigns, past and ongoing. Extensive track record.

Check

3. deviation from best editorial and publication practices

Chaos, Solitons and Fractals? The recently sold journal “Homeopathy“? Ghostwriting? Publishing obvious fakes?

Check

4. lack of transparency

Widespread use of non-disclosure agreements in subscription contracts.

Check

5. aggressive and indiscriminate solicitation practices

Everybody who has received a “call for papers” outside their field from an Elsevier journal, raise your hand. Advertising extra products or database access to authors? Aggressive and misleading negotiation tactics?

Check

I’m glad the article with the “predatory” definition came with a handy cartoon (originally by David Parkins, modified by me).

So as far as this exercise goes, at least one of the main legacy publishers fits the five criteria for being branded a “predatory” publisher. According to the authors of the definition, this is the first step to solving the problem of predatory publishing. Personally, I’d say that canceling all subscriptions and “transformative agreements” to prevent further funding of predatory publishers would be a reasonable next step, now that we know how to identify them.

Posted on December 11, 2019 at 18:51 4 Comments

With CRISPRed FoxP and habit formation to #SfN19

In: own data • Tags: CRISPR, Drosophila, mushroom bodies, operant, poster, SfN

Tomorrow we travel to the annual meeting of the Society for Neuroscience and our diligent scientists have already printed their posters!

Ottavia Palazzo will present her work on genome editing the FoxP locus of Drosophila, with anatomical and behavioral characterizations of the various transgenic lines she has created. Spoiler: we now know the expression pattern of FoxP generally and of isoform B specifically, and show that some of the behavioral phenotypes associated with manipulating the gene do not show up when the manipulation happens in the adult stage rather than during development. She will present on Monday afternoon at Z14.

Clicking on this thumbnail will bring you to the abstract and a download of the poster

Anders Eriksson will present the results of his screen for which class of mushroom body output neurons (MBONs) is involved in the regulation of habit formation in flies. Learning can take place very quickly, yet habits take a long time (or many repetitions) to form. The process of habit formation is slow because another form of learning, operant world-learning, takes place at the same time and inhibits habit formation via the mushroom bodies. Spoiler: so far it looks as if MBON-2, 15 and 17 regulate habit formation, but this is subject to confirmation in follow-up experiments. He will also present on Monday afternoon, a few steps away from Ottavia, at Z20.

Clicking on this thumbnail will bring you to the abstract and a download of the poster

The QR codes on the posters should get you directly to the PDF download.

Posted on October 17, 2019 at 16:32 Comments Off on With CRISPRed FoxP and habit formation to #SfN19
Oct14

Scholarship has bigger fish to fry than access

In: science politics • Tags: infrastructure, mandates, publishing

Around the globe, there are initiatives and organizations devoted to bringing “Open Access” to the world, i.e., making scholarly research works publicly available free of charge. However, the current debate largely misses the point that human readers (content mining remains a problem) have enjoyed such public access to the huge majority of scholarly works since about 2013, thanks to several technical developments providing temporary work-arounds for publisher paywalls.

For various reasons, people (including many long-time OA activists) still publicly claim we need open access, when all we need is a different kind of open access from the one we currently enjoy. The core of the access problem has actually been solved for the last 6-7 years, though (likely) only temporarily.

Of course, this realization dramatically changes the whole issue. For the last 6-7 years, paying for subscriptions has ceased to be necessary for access. One sign of the changing times is the support that initiatives such as DEAL, Bibsam etc. enjoy: two years without subscriptions to Elsevier, and what do you hear out of, e.g., Germany? Crickets! Nothing! Of course, it would be silly to conclude that in these two years nobody in Germany has read any Elsevier articles. The reason for the silence and the continued support for DEAL is that we now can access anything we want without subscriptions. The old adage that “everybody who needs access has access”, wrong prior to 2012 because of subscriptions, is now finally true despite subscriptions! DEAL et al.’s strong negotiating position would not have been possible, or even thinkable, prior to 2012, and the single reason is that subscriptions have been rendered redundant for access.

But not only has the access problem decreased in size and prevalence dramatically, other problems have since surfaced that loom much larger than access, even if there had been no such technical developments.

  1. The reliability of the scientific literature appears to be much lower than expected, and what use is an unreliable literature, however accessible to the public? In particular, the publish-or-perish culture centered around journal rank rewards unreliable science and punishes meticulous scientists, contributing a major socioeconomic driver to what some already call a replication crisis.
  2. Moreover, with the advent of APC-OA, the problem of affordability has come to the fore for scholars as well, where before it was largely a libraries' problem. Publishing costs of under 500€ an article, but prices of more than twenty times that (e.g., at Nature-branded journals and others), scare the scholarly community: in the future, will only rich labs/individuals/institutions be able to afford publishing in the prestigious journals without which nobody can survive in academia? Given that subscription costs are largely opaque and subscriptions themselves no longer necessary, there is of course huge resistance to something that is bound to make things worse, and not only from the point of view of authors. Not surprisingly, people have a hard time understanding why such change is needed.
  3. Finally, while billions are being spent on subscriptions that nobody needs any more, hardly anything is spent on the kinds of infrastructure that are crucial for our work: databases, code-sharing sites, etc. Scholarship lacks even the most basic functionalities for some of the most crucial fruits of our labor: text, data and code.

The main issues any modernization of the scholarly infrastructure needs to address today thus make up the RAF crisis: Reliability, Affordability and Functionality. Approach them with a modern infrastructure solution, and the kind of access to the scholarly literature we currently enjoy will be perpetuated as a side effect.

In Europe, the lack of functionality has been acknowledged, in particular for research data and now, slowly, also for scientific code and software. In part, the European Open Science Cloud (EOSC) is intended to address these problems. However, what we need is not a piecemeal hodgepodge of stand-alone computational solutions (the direction EOSC appears to be headed right now); we need a seamless infrastructure in which we can integrate data and code into our texts. This is where scholarly publishing can no longer be seen as a standalone problem, but must be treated as an integral part of a large-scale infrastructure crisis facing text, data and code, with a core focus on reliability, affordability and functionality.

Taking the above together, it becomes clear that one of the major obstacles to infrastructure reform on the decision-maker side is probably that EOSC on the one hand and DEAL, Plan S and the other initiatives on the other are seen, and act, as if they were addressing separate problems.

With the realization that EOSC, Plan S, DEAL, etc. are actually working on different aspects of the same issue, the problem to be solved is no longer that scholars publish in toll-access journals, but that institutions haven’t come up with a more attractive alternative. If individuals are not to blame, then there is no reason to mandate that they do anything differently. Instead, institutions should be mandated to stop funding journals via subscriptions or APCs and instead invest the money in a modern, more cost-effective infrastructure for text, data and code. Stated this specifically, such a mandate is of course nearly impossible in most countries. However, there is a mandate that comes very close. It has been dubbed “Plan I” (for infrastructure). In brief, it entails a three-step procedure:

  1. Build on already available standards and guidelines to establish a certification process for a sustainable scholarly infrastructure
  2. Funders require institutional certification before reviewing grant applications
  3. Institutions use subscription funds to implement infrastructure for certification

Many or most funding agencies already have (largely unenforced) infrastructure requirements, so step one is halfway done already. Step two is just the required enforcement step, and step three will follow out of necessity, as few public institutions will have the funds available to implement the certification quickly. If deadlines were short and funders recommended using subscription/APC funds for the implementation, the money could be shifted rapidly from legacy publishing to service provision.

In fact, this system is already working for some sub-disciplines, it just needs to be expanded. I was able to observe how effective it is at my own university: before considering applications for the next-generation genome sequencing machines needed by our biology and medicine departments, the DFG requires applicants to certify that they work at an institution with a so-called ‘core facility’ to handle the massive amounts of data generated by these machines (this is the equivalent of point 2 above). The DFG has a very detailed list of requirements for such facilities in terms of hardware and staffing (equivalent to point 1 above). There is now a high-level task force within the two departments to find and shift funds and staff (point 3 above) to create four permanent positions and implement the computational infrastructure before a single line of an application is even written. This example shows that the three points outlined above are already being implemented around the world by many funding agencies and merely have to be expanded to cover all fields of scholarship. It was this overt activity, the sudden flow of funds and creation of positions (where there usually is a chronic shortage of both!), that prompted the idea for Plan I. Institutions will move heaven and earth to keep research funds flowing. If funders say “jump!”, institutions ask “how high?”. In this case, institutions have both the expertise and the funds (both within their libraries) to quickly and painlessly implement these modern technologies – it should be in the self-interest of any funding agency to help them set the correct priorities.

Such funder requirements would tackle all three main infrastructure problems head on: they would promote the reliability of science by eliminating journals, and with them journal rank, which rewards unreliable science and punishes reliable science. They would approach the affordability problem by introducing open, standards-based competition and substitutability to a largely monopoly-based market. In fact, the European Commission’s Directorate General for Competition has explicitly suggested such measures for initiatives such as EOSC and Plan S. Finally, they would bring many new functionalities not only to our text-based narratives, but also to our audio and visual narratives, and, most needed, provide a stable and sustainable infrastructure for research data and code.

Oh, and of course, the text-based narratives, interactively combined with our data and code (e.g., via living figures), would be publicly accessible and machine readable for content mining, as an added side-benefit.

Posted on October 14, 2019 at 14:03 2 Comments
Oct02

Is Open Access headed for a cost explosion?

In: science politics • Tags: costs, open access, publishing

By now, it is public knowledge that subscription prices for scholarly journals have been rising beyond inflation for decades (i.e., the serials crisis):

[Figure: the serials crisis – yearly percent increases in subscription costs to libraries]

A superficially very similar graph was recently published for APC price increases:

source: https://doi.org/10.18352/lq.10280

At first glance, both figures seem to indicate a linear increase in costs over time for both business models. However, the situation is more complicated than that. For one, the Y-axis on the subscription graph indicates percent increase per year, so the scale is not linear when the numbers are plotted in actual currency. Moreover, the subscription graph plots total cost to libraries, i.e., the ~3% year-over-year increase in the number of publications is included in this figure. In other words, if one divides the yearly subscription fees by the number of articles published in that year, one arrives at a figure of about US$4-5k per article.

Remarkably, and this is crucially important here, this US$4-5k number has remained fairly constant since the 1990s!

The per-article APCs, in contrast, are not constant: they increase. The number of articles we publish also increases, by about 3% every year. This means that in an APC-OA world, total spending on publishing seems likely to increase exponentially, as both the number of articles and the price per article rise.

In other words, to really compare both above curves, one needs to run a small model. The output of the model would be expected total costs to the tax-payer in a subscription world vs. expected costs in an APC-OA world. The assumptions of the model would be extrapolated growth curves under a subscription only scenario and an APC-OA only scenario, with price increases from the past extrapolated to the future, plus an underlying ~3% growth in papers per year (need to check exact value). Subscription article prices would remain fixed at, say, US$4,500 and the starting APC could be that last value in the APC graph above, say US$1,800. With these assumptions, it seems to me that it should just be a matter of time until the total price of publishing APC-OA overtakes subscription pricing. How soon would that be?

Who could help me plot such a comparison?
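As a first rough sketch of such a model, here is a minimal Python version under the assumptions above: a flat US$4,500 per subscription article, APCs starting at US$1,800, ~3% annual growth in article numbers, and an assumed 5% annual APC price increase. The 5% rate and the starting article count are placeholders for illustration, not values taken from either graph:

```python
# Toy model: total yearly cost of subscription vs. APC-OA publishing.
# Assumptions (illustrative placeholders, not measured data):
#   - per-article subscription cost stays flat at US$4,500
#   - APCs start at US$1,800 and rise 5% per year (assumed rate)
#   - article output grows ~3% per year
SUB_PER_ARTICLE = 4500.0
APC_START = 1800.0
APC_GROWTH = 1.05      # assumed annual APC price increase
ARTICLE_GROWTH = 1.03  # ~3% more articles each year
N0 = 1_000_000         # arbitrary starting article count

def totals(years):
    """Per-year (year, subscription total, APC-OA total) in US$."""
    out = []
    n, apc = N0, APC_START
    for t in range(years):
        out.append((t, SUB_PER_ARTICLE * n, apc * n))
        n *= ARTICLE_GROWTH
        apc *= APC_GROWTH
    return out

# Crossover year: since both totals scale with the same article count,
# APC-OA overtakes subscriptions once the APC itself passes US$4,500.
crossover = next(t for t, sub, oa in totals(50) if oa >= sub)
print(crossover)  # → 19
```

Note that the article-growth term cancels out of the comparison, so the crossover point depends only on the per-article prices: with these placeholder numbers, total APC-OA spending overtakes subscription spending after about two decades, and any faster APC growth moves that point closer.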

P.S.: Just to pre-empt – in the 20 years covered by the subscription graph, per-article prices ought to have dropped dramatically, so there is nothing positive in these prices staying constant.

Posted on October 2, 2019 at 23:58 2 Comments
May31

Improved Plan S principles raise hope of more effective policies soon

In: science politics • Tags: infrastructure, Plan I, Plan S

Yesterday, cOAlition S published their updated principles and implementation guidelines for #PlanS, together with the rationale behind the update. This is a very welcome effort, as it evidences the increasing awareness among funders of their potential leverage in infrastructure modernization, at a time when institutions have apparently abandoned their faculty completely.

These policies would have been a much-needed initiative about eight years ago, when there still was a problem of access to the scholarly literature, when Unpaywall didn’t exist and Sci-Hub had just launched. Today, we have so many ways to access our literature that these policies seem more like beating a dead horse. From this perspective, Plan S targets the wrong actors (individuals rather than institutions) to achieve a minor goal (Open Access), while our infrastructure rewards unreliable research, costs ten times too much and lacks crucial functionalities. The three components of the scholarly infrastructure emergency (reliability, affordability and functionality; RAF) remain untouched by Plan S, while a (today) minor problem receives more attention than it deserves. In a way, Plan S is more like a band-aid for a small cut on the hand while a large, malignant tumor remains untreated.

It is in the power of cOAlition S funders to set policies that would tackle the RAF tumor/emergency and solve the minor access problem as an added benefit: require institutions to provide modern scholarly infrastructure before any research funds can flow. Given the overall aim and language of Plan S, one can remain optimistic that cOAlition S will deliver such more adequately targeted, modern and hence more effective policies in the near future. Perhaps we can read about a “Plan I” (for infrastructure) soon?

Posted on May 31, 2019 at 10:55 Comments Off on Improved Plan S principles raise hope of more effective policies soon
May22

Unpersuadables: When scientists dismiss science for political reasons

In: science politics • Tags: evidence-resistance, questionable research practices, unpersuadables

Scientists are used to vested interests disputing scientific claims. Tobacco corporations have tried to discredit the science about lung cancer and smoking, creationists keep raising the same long-debunked objections against evolution, climate-deniers claim the earth is cooling, anti-vaxxers believe the MMR vaccine causes autism and homeopaths delude themselves that a little drop of nothing has magical healing powers. No amount of evidence will convince such “unpersuadables”.

What receives less attention, though, is what may be a more severe problem for science, namely the dismissal of science by scientists – unpersuadable scientists.

Documenting this phenomenon is difficult, because the dismissal of science is only rarely uttered in writing or in public. One would hope that this indicates such behavior is rare. I am aware of only two instances: one is the now infamous blog post by Brian Wansink, the other the public statements by decision-makers at a German conference, documented elsewhere. Recently, I witnessed a third instance.

At a dinner table with about ten participants, all academics, the discussion turned to the topic of ‘quality’ academic journals and journal rank. When I referenced the data showing that higher journal rank tends to be associated with lower experimental reliability, several individuals mentioned that they found these results hard to believe. When I asked about data to the contrary that might be the basis for their hesitation, the participants only emphasized that they had no other data, just their “intuition” and “gut feeling”. When I asked what they do when their own experiments yield data that go against their intuition or gut feeling, one professor exclaimed: “I tell the postdoc to do the experiment again, and a third time if need be!”. When I expressed my shock at such practices, the two most senior professors, one of whom was once a university president and both of whom are medical faculty, emphatically accused me of being dogmatic for giving primacy to scientific evidence rather than intuitions or gut feelings.

Recent evidence points towards published scientific results, at least in some fields, being far less reliable than one would expect. If it were common that the reliability of science hinged on postdocs standing up for their data against the gut feeling of the person who will write their letters of recommendation and/or extend their contract, we may have to brace ourselves for more bad news coming from the reproducibility projects being carried out right now.

Wansink was trained in marketing and had no idea about science. His lack of training, and his incompetence in science, may be an excuse for his behavior. These two individuals, however, graduated from medical school, have decades of research and teaching experience under their belts, and one of them even complained that “most of the authors of the manuscripts I review or edit have no clue about statistics”. Surely, these individuals recognize questionable research practices when they see them? Nevertheless, similar to Wansink, they wouldn’t take a “failed” experiment for an answer, and similar to the decision-makers at the meeting in Germany in 2015, they would put their experience before peer-reviewed evidence.

If scientific training doesn’t immunize individuals against unscientific thinking and questionable research practices, how can we select a future cadre of decision-makers in science who do not put themselves before science and who will implement evidence-based policies, instead of power-brokering their way to new positions? There is a recent article on “intellectual humility” – maybe this is the way to go?

P.S.: There are more instances of scientists publicly dismissing evidence to the contrary of their own belief: Zimbardo, Bargh spring to mind and I’ll look for others.

Posted on May 22, 2019 at 15:20 3 Comments

bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License.

[ Placeholder content for popup link ] WordPress Download Manager - Best Download Management Plugin

bjoern.brembs.blog
Proudly powered by WordPress Theme: brembs (modified from Easel).
%d