bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Maybe try another kind of mandate?

In: science politics • Tags: funders, infrastructure, mandates, open access, open science

Over the last ten years, scientific funding agencies across the globe have implemented policies requiring their grant recipients to make their work accessible. For instance, the NIH open access policy mandates that articles describing NIH-funded research be available via PubMed Central within 12 months of publication. Other funders, and also some institutions, have implemented various policies with similar mandates.

In principle, such mandates are great, not only because they demonstrate the intention of the mandating organization to put the interest of the public over the interests of authors and publishers, but also because they can be quite effective, as the NIH mandate or the one at the University of Liège have shown.

At the same time, such individual mandates are suboptimal for a variety of reasons, e.g.:

  1. In general, mandates are evidence that the system is not working as intended. After all, mandates are meant to force people to behave in a way they otherwise would not. They are thus no more than stop-gap measures for a badly designed system, rather than measures that eliminate the underlying systemic reasons for the undesired behavior.
  2. Funder mandates also seem designed to counteract an unintended consequence of competitive grant awards: competitive behavior. To be awarded research grants, what counts are publications, both many of them and in the right journals. Researchers will therefore make sure no competitor gets any inside information too early and will try to close off as much of their research for as long as possible, including text, data and code. Mandates are designed to counteract this competitive behavior, which means that funders incentivize one behavior on the one hand and punish it with a mandate on the other. This is not what one would call clever design.
  3. Depending on the range of behaviors a mandate is intended to control, mandates are also notoriously difficult and tedious to monitor and enforce. For instance, if the mandate concerns depositing a copy of a publication in a repository, manual checks would have to be performed for each grant recipient; this is the reason the NIH introduced automatic deposition in PMC. If re-use licenses are mandated, they also need to be tested for compliance. If only certain types of journals qualify for compliance, all 30,000 or so journals need to be vetted – or at least those in which grant recipients have published. Caps on article processing charges (APCs) are essentially impossible to enforce, as no funder has jurisdiction over what private companies may charge for their products, nor any legal means to monitor the bank accounts of grant recipients for payments above the mandated spending cap. Here in Germany, our funder, the DFG, has had an APC cap in place for more than ten years now, and grant recipients simply pay any amount exceeding the cap from other sources.
  4. In countries such as Germany, where academic freedom is written into the constitution, such individual mandates are considered an infringement of this basic right. There is currently a lawsuit in Germany, brought by several law professors against their university for mandating the deposit of a copy of all articles in the university’s repository. In such countries, the mandate approach is highly likely to fail.
  5. Mandates, as the name implies, are a form of coercion, forcing people to behave in ways they would not otherwise behave. Besides the bureaucratic effort needed to monitor and enforce compliance, mandates are bound to be met with resistance from those coerced into performing additional work that takes time away from work seen as more pressing or important. There may thus be resistance to both the implementation and the enforcement of mandates that appear too coercive, reducing their effectiveness.

For about as long as the individual mandates have existed, if not longer, funders have also provided guidelines for the kind of infrastructure institutions should provide their grant recipients with. In contrast to individual mandates, these guidelines have not been enforced at all. For instance, the DFG endorses the European Charter for Access to Research Infrastructures and suggests (in more than just one document) that institutions provide DFG grant recipients with research infrastructure that includes, e.g., data repositories for access and long-term archiving. To my knowledge, such repositories are far from standard at German institutions. In addition, the DFG is part of an ongoing, nationwide initiative to strengthen digital infrastructures for text, data and code. As an example, within this initiative, we have created guidelines for how research institutions should support the creation and use of scientific code and software. However, to this day, there is no mechanism in place to certify the compliance of funded institutions with these documents.

In light of these aspects, would it not be wise to enforce these guidelines to the point where using the resulting research infrastructure would save researchers effort and make them compliant with the individual mandates at the same time? In other words, could funders not save a lot of time and energy by requiring institutions to provide research infrastructure that enables their grant recipients to become compliant effortlessly? Such institutional ‘mandates’ would make the desired behavior also the most time- and effort-saving behavior, perhaps making individual mandates redundant.

Instead of monitoring individual grant recipients, journals or articles, funders would only have to implement, e.g., a certification procedure: only applications from certified institutions would qualify for research grants. Such strict requirements are rather commonplace – in many countries, only accredited institutions qualify for funding at all. Moreover, on top of such general requirements, there can be very specific infrastructure requirements for certain projects, such as a core facility for certain high-throughput experiments. In such cases, the specifications can even extend to particular research and technical staff, and to whether the core facility needs permanent or temporary staffing. Thus, it seems, such a certification procedure would be a rather small step for funders already set up to monitor institutions for their infrastructure capabilities.

If groups of funders, such as cOAlition S, coordinated their technical requirements as they have been coordinating their individual mandates, the resulting infrastructure requirements would include the FAIR principles, which would lead to a decentralized, interoperable infrastructure under the governance of the scientific community. As this infrastructure is intended to replace current subscription publishing with a platform that integrates our text-based narratives with our data and code, it would be straightforward for funders to suggest subscriptions as an obvious source of funds for the required infrastructure. As most scholarly articles are available without subscriptions anyway, and implementing the infrastructure is much cheaper, on average, than subscriptions, the implementation should be possible without disruption and with considerable cost reductions for institutions. If an institution considers its library the traditional place where the output of scholars is curated, made accessible and archived, then there would not even have to be a redirection of funds from library subscriptions to different infrastructure units – the money would stay within the libraries. But of course, institutions would in principle remain free to source the funds any way they see fit.

Libraries themselves would not only see a massive upgrade, as they would now be one of the most central infrastructure units within each institution; they would also rid themselves of the loathsome negotiations with parasitic publishers – a task, librarians tell me, that no librarian loves. Through their media expertise and their experience with direct user contact, libraries would also be ideally placed to handle the implementation of the infrastructure and the training of users.

Faculty would never have to worry about their data or their code again, as their institutions would now have an infrastructure that automatically takes care of these outputs. Inasmuch as institutions were to cancel subscriptions, there would also be no alternative, free or paid, to the infrastructure provided by the institutions, as the cash-strapped publishers would have to close down their journals. Moreover, the integration of authoring systems with scientific data and code makes drafting manuscripts much easier, and publication/submission becomes a single click, such that any faculty member who values their time will use this system simply because it is superior to the antiquated way we publish today. Faculty as readers will also use this system, as it comes with a modern, customizable sorting, filtering and discovery system, vastly surpassing any filtering the ancient journals could ever accomplish.

Taken together, such a certification process would be only a small step for funders already inclined to push harder to make the research they fund accessible; it would save institutions a lot of money every year, be welcomed by libraries, and save time for faculty, who would not have to be forced to use this conveniently invisible infrastructure.

Open standards underlying the infrastructure would ensure a lively market of service providers, as standards make services truly substitutable: if an institution is not satisfied with the service of company A, it can choose company B for the next contract, ensuring sufficient competition to keep prices down permanently. For this reason, objections to such a certification process can only come from one group of stakeholders: the legacy publishers, who, faced with actual competition, would no longer be able to enjoy their huge profit margins, while all other stakeholders enjoy a much improved situation all around.

 

Posted on November 28, 2018 at 11:00 4 Comments

Dopamine in optogenetic self-stimulation and CRISPR editing of FoxP

In: own data • Tags: dopamine, FoxP, optogenetics, poster, SfN

This year we have two posters at the SfN meeting in sunny San Diego, CA. The first poster is on Sunday morning, Nov. 4, poster number 152.09, board QQ7, entitled “Neurobiological mechanisms of spontaneous behavior and operant feedback in Drosophila”. For this poster, Christian Rohrsen used three different optogenetic self-stimulation experiments to find out which dopaminergic neurons mediate reward or punishment, respectively.

The second poster is on Monday afternoon, Nov 5, poster number 407.23, board UU1, entitled “CRISPR/Cas9-based genome editing of the FoxP locus in Drosophila”. For this poster, Ottavia Palazzo created several fly lines in which the FoxP gene locus was modified by, for instance, inserting a GAL4 reporter in place of important parts of the gene, creating loss-of-function alleles. Ottavia has created a range of useful constitutional and conditional manipulations and the first characterizations of the first constitutional lines are presented on this poster. Postdoc Anders Eriksson and intern Klara Krmpotic performed some of the behavioral tests and the monoclonal antibodies are being generated in the lab of Diana Pauly with the help of her graduate student Nicole Schäfer. Bachelor student Julia Dobbert helped with some of the molecular work and postdoc Matthias Raß taught and supervised all of Ottavia’s and Julia’s molecular work.

Posted on November 2, 2018 at 16:39 Comments Off on Dopamine in optogenetic self-stimulation and CRISPR editing of FoxP

Automated Linked Open Data Publishing

In: own data • Tags: open data, open science, poster

On the occasion of the first “BigDataDay” at our university, I have summarized, on the poster below, our two main efforts to automate the publication of our tiny raw data.

On the left is our project automating Buridan data deposition at FigShare using the rOpenSci plugin, and the consequence of sending a publisher links to the data and the evaluation code, instead of pixel-based figures, when submitting a paper. Most of this work was done by postdoc Julien Colomb several years ago, when I was still in Berlin.
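The original deposition workflow was built in R around the rOpenSci tooling; purely as an illustration of the idea, here is a hedged Python sketch against the FigShare v2 REST API. The token, titles and tags are placeholders, and the exact response fields may differ from what the API returns:

```python
# Hypothetical sketch of automated data deposition to FigShare.
# Endpoint and auth scheme follow the public FigShare v2 API; all
# metadata values below are made-up placeholders.
import json
import urllib.request

API = "https://api.figshare.com/v2/account/articles"

def make_payload(title, description, keywords):
    """Minimal article metadata for a FigShare deposition."""
    return {
        "title": title,
        "description": description,
        "tags": list(keywords),
        "defined_type": "dataset",
    }

def deposit(payload, token):
    """POST the metadata to FigShare (network call, not executed here)."""
    req = urllib.request.Request(
        API,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"token {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains the location of the new article

payload = make_payload(
    "Buridan paradigm raw data",
    "Walking traces from the Buridan experiment, deposited automatically.",
    ["Drosophila", "Buridan", "open data"],
)
```

An analysis script could call `deposit(payload, token)` at the end of each experiment, so the repository link exists before the manuscript is even drafted.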

On the right is our ongoing project of automating flight-simulator data deposition with our library. We designed a YAML/XML metadata format loosely based on the Frictionless Data standard. Our evaluation script reads a YAML file that contains the experimental design (i.e., which raw data file belongs to which experimental group) as well as formalized commands for the kinds of statistical tests and graphs to be generated. From this information, each experiment (i.e., each XML file) is evaluated, and a quality-control HTML document is written that reports numerous aspects of the raw data to ensure the quality of the data in each experiment. The same information from the YAML file is used to compute an evaluation HTML document with group-wise evaluations. All raw data and evaluation files are linked with each other, and the XML files link not only to the repository with the evaluation script, but also to the repository with the software that collected the data and to the data model explaining the variables in the raw data files. Ideally, by dragging and dropping figures with statistics into a manuscript, published scholarly articles would link to all of the files generated for the publication. A client-side Python script is called upon user login to compare the local project folder with the folder on the library’s publication server for synchronization.
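The group-wise evaluation step can be sketched roughly as follows; the file names, dictionary keys and the fake performance-index scores are illustrative assumptions (the real pipeline loads the design with `yaml.safe_load` and parses the raw-data XML files):

```python
# Minimal sketch of YAML-driven, group-wise evaluation.
from statistics import mean

# In practice: design = yaml.safe_load(open("experiment.yml"))
design = {
    "groups": {
        "control":   ["fly01.xml", "fly02.xml"],
        "treatment": ["fly03.xml", "fly04.xml"],
    },
    "tests": ["wilcoxon"],        # formalized statistics commands
    "plots": ["pi_timecourse"],   # formalized graph commands
}

def load_performance_index(xml_file):
    # Placeholder for parsing a raw-data XML file; scores are faked here.
    fake_scores = {"fly01.xml": 0.2, "fly02.xml": 0.3,
                   "fly03.xml": 0.7, "fly04.xml": 0.8}
    return fake_scores[xml_file]

def evaluate(design):
    """One summary row per experimental group, as in the evaluation HTML."""
    summary = {}
    for group, files in design["groups"].items():
        scores = [load_performance_index(f) for f in files]
        summary[group] = {"n": len(scores), "mean": round(mean(scores), 3)}
    return summary

def to_html(summary):
    rows = "".join(
        f"<tr><td>{g}</td><td>{s['n']}</td><td>{s['mean']}</td></tr>"
        for g, s in summary.items())
    return ("<table><tr><th>group</th><th>n</th><th>mean PI</th></tr>"
            + rows + "</table>")

summary = evaluate(design)
html = to_html(summary)
```

The point of the design-as-data approach is that the same YAML file drives the quality-control report, the statistics and the figures, so none of them can silently disagree about which fly belongs to which group.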

Posted on October 31, 2018 at 14:18 Comments Off on Automated Linked Open Data Publishing

Does academic freedom entail exemption from spending rules?

In: science politics • Tags: academic freedom, infrastructure, publishing

The recent publication of the “Ten Principles of Plan S” has sparked numerous discussions, and one recurring theme has been academic freedom. The cause of these discussions is the insistence of the funders supporting Plan S that their grant recipients publish only in certain venues and under certain liberal licensing schemes.

Germany is likely among the countries with the strongest protection of academic freedom, as article five of our constitution explicitly guarantees it. Historically, this has always included free choice of publishing venue. As modern internet technology keeps encroaching on academic traditions, there is now a lawsuit pending at the German constitutional court over whether open access mandates, which require scholars to deposit a copy of their published articles in an institutional repository, violate academic freedom. The lawsuit was started by several law professors with the support of the DHV (the main organization representing academic interests in Germany) and the publishers’ organization “Börsenverein”. It will be interesting to see whether the traditional and widespread requirements to publish in certain venues (on pain of unemployment) will also be brought before the court.

I have asked the law professors behind the lawsuit for their opinion on the current infringements of academic freedom by employers, but did not receive a reply. A practicing lawyer specializing in constitutional law, asked independently, answered that the current employer requirements are, in principle, infringements of academic freedom equivalent to open access mandates – if either is considered an infringement at all. However, the lawyer advised waiting until the current ruling has been published, and noted that a suit could only be brought by affected parties. In Switzerland, a law professor recently came to the same conclusion as the German lawyer, namely that funder mandates and employer requirements can be considered equivalent. In his case, the conclusion was that neither infringes on academic freedom.

Thus, even in Germany, a country with arguably one of the strongest constitutional protections of academic freedom, it is far from certain whether any or all requirements regarding publication venue constitute an infringement of these constitutional rights. Ongoing legal proceedings will help clarify this question. As a non-lawyer, I would tend to argue that if open access mandates are considered violations of academic freedom, the requirements to publish in certain journals must fall as well. Conversely, if current practice is no infringement, neither are open access mandates.

For the main argument of this post, however, let’s assume both funder and employer mandates were considered an infringement of academic freedom, i.e., the German constitutional court bans every policy that pushes academics into publishing in certain places, whether funder or employer requirements. Would such a strong interpretation of academic freedom automatically entail that the taxpayer has to fund every possible publication venue an academic might choose?

Let’s imagine some amusing hypothetical scenarios. They are not meant to be realistic, but to exemplify the difficulty of reconciling individual freedoms with the responsible spending of public funds:

  1. A group of ecologists makes an exciting discovery about local wildlife and they decide to exercise their academic freedom and publish their discovery by buying billboard space across the region, to alert the general public of the precious finding.
  2. A group of biomedical researchers finds a novel drug target for a deadly disease and they decide to exercise their academic freedom and publish their discovery by publishing it in double page advertisements in major newspapers to make sure every drug maker in the world receives enough information to develop the cure.
  3. A group of social psychologists discovers that cluttered environments promote racist language and theft. They decide to exercise their academic freedom and publish their discovery in a prestigious subscription journal. The collective price of the subscriptions to this journal averages about 100 times the technical publishing costs of the article. On top of the exorbitant price tag, the journal’s paywall and policies limit access both to the research and to the data upon which the publication rests.
  4. A group of geneticists discovers a new mechanism of inheritance and they decide to publish their results in a prestigious journal. The journal recently flipped from the subscription model to author-pays, ‘gold’ open access. Because their chosen journal is highly selective, which is the basis of its prestige, the author-side fee is 200 times that of the technical costs of publishing the article.

Obviously, all four of these scenarios are ‘batshit crazy’, and nobody in their right mind would even try to defend any of them against taxpayer (or general accounting office) scrutiny, or try to align them with university spending rules. And yet, scenario three is current reality, and scenario four may soon be, if Plan S and other such funder policies that support ‘gold’ open access become standard practice (see here for more reasons why this aspect could lead to severe unintended consequences).

Arguing from a strong notion of academic freedom, would one therefore, to be consistent, require that all scenarios be funded by the taxpayer, or none? If all should be funded, where should the funds come from to pay even the most outrageously extravagant venues academics might choose? If none, what are the rules by which these decisions are to be made?

Squaring constitutional rights with public spending is not an easy task. Since I am no legal scholar by any stretch of the imagination, I would tend to argue along the lines of the common notion that my freedom to swing my arms ends at your nose (a saying I learned from Timothy Gowers). Publicly funded institutions commonly have to obey strict spending rules. This has a long tradition, as this document from 1942 shows:

The awarding of contracts by municipal and other public corporations is of vital importance to all of us, as citizens and taxpayers. Careless and inefficient standards and procedures for awarding these important community commitments have increased unnecessarily the tax burdens of the public. To secure a standard by which the awarding of public contracts can be made efficiently and economically, and with fairness to both the community and the bidders, the constitutions of some states, and the statutes regulating municipal and public corporations provide for the award of public contracts to the lowest responsible bidder.

Who would argue that academic freedom should exempt academics from such spending rules? On the contrary, shouldn’t these spending rules require public institutions to find the most cost-effective way to fulfill their mission, regardless of what venue academics might prefer to publish in? This latter consequence would entail that buying subscription access to publicly funded scholarly works does not qualify as a cost worth spending public money on. How can institutions escape such violation of their spending rules while simultaneously allowing their faculty to exercise their academic freedom?

Here, I suggest that the rational, modern resolution of the conflict between academic freedom and spending rules is to provide academics with a cost-effective publishing infrastructure, together with the freedom to decide whether to use it or not. The infrastructure would be maintained by the institutions and serviced either by them or by bidding contractors, like any other infrastructure. Scholars would have the choice either to use this infrastructure at no cost to them or to find funds to choose any other venue. Given the current abysmal state of publishing functionality, together with the extinction of existing journals once subscription funding ends, a quite rapid shift in publishing culture would be expected.

As current subscription spending is roughly ten times what is needed to keep publishing going at current levels, one would not expect much disruption from obeying spending rules in academic publishing as well. On the contrary, if current spending levels were sustained, 90% of the funds would be freed to improve the current publishing user experience and to help implement a modern infrastructure that services not only our text, but also our data and code.

Posted on September 19, 2018 at 14:08 3 Comments

After 24 years, when will academic culture finally shift?

In: science politics • Tags: behavior, infrastructure, publishing

It’s now been 24 years since Stevan Harnad sparked the open access movement by suggesting in his “subversive proposal” in 1994 that scholars ought to just publish their scholarly articles on the internet:

If every esoteric author in the world this very day established a globally accessible local ftp archive for every piece of esoteric writing he did from this day forward, the long-heralded transition from paper publication to purely electronic publication (of esoteric research) would follow suit almost immediately.

Since then, we have been waiting for the behavior of scholars to change, such that all our works indeed become accessible. This is what has become known as the “culture shift” in academia, without which no actual change in our practice can happen. However, no such change is in sight, not even after all these years. Instead, open access mandates and other policies have been developed to force scholars to perform behaviors they wouldn’t otherwise perform. Even in fields where such deposition of articles has become common, authors still adhere to toll-access publishing – not for reading or scholarly communication, but for career advancement: an obscenely expensive and perverse outsourcing practice.

Why does such behavioral change take so long? Many of Stevan’s colleagues at the time have since retired, and a large section of the scholarly workforce has been replaced by a new generation – one would think, if anything, a more internet-savvy one.

In this post, I will argue that our mistake was to expect behavior to change while the reasons for the behavior have not changed. As a behavioral neuroscientist, I have learned that, all else being equal and depending on the time scale, among the best predictors of future behavior is past behavior. Thus, if we analyze why scholars behave the way they do with regard to open scholarship, we may be more likely to affect that behavior.

Why isn’t everybody using preprint servers? What keeps people from posting their data and code on any of the proliferating repositories? Why do funders feel they need mandates to get scholars to comply with open science ideals? Why are the non-activist, regular scholars either lethargic or outright hostile?

In the last decade, it seemed as if the answer to these questions was “because of the reward system!” or “because incentives are missing!” – as if scholars only ever do anything if they are rewarded or incentivized for it. I think the answer lies elsewhere. It can be articulated as two main reasons:

1) They do not care and hence do not know: scholars care about their scholarship and they shouldn’t have to care about such questions. These questions are exactly what infrastructure should be taking care of, not scholars.

2) They have good reasons to close their scholarship: lack of time, fear of competition, privacy concerns, etc.

I’m simplifying, of course, but would nevertheless argue that, together, 1) and 2) explain most of scholars’ behavior with regard to open scholarship. Those who care, know, and have no good reasons to be closed are the open scholarship activists, e.g., yours truly. The other 99.5% are the ones who resist “culture change” for reason 1), reason 2), or both. I’m rather skeptical that anything can make either go away any time soon, let alone both.

Hence, rather than fighting 1) and 2), as we have been for two decades now, I suggest using them in our favor.

A recent poll in our biology/medicine department exemplifies how this might work: when asked which software they currently use to prepare images (microscope images, gel pictures and such) for publication, the majority of department members answered “PowerPoint”. Now, I assume that most everybody reading this understands that PowerPoint is not the, ahem, ideal, most professional software for this kind of work. 🙂 On the contrary, submission in PPTX format is explicitly discouraged by most publishers. This means that the majority of people in our department use a tool that is not the most professional for the task at hand, and whose format is discouraged for submission. What funder mandates could be in place to encourage such odd behavior? Which tenure committee rewards compliant over superior tool use? Where is the academic incentive system that pushes scholars to choose PowerPoint over the better-suited alternatives?

Obviously, there are no mandates or tenure committees incentivizing the use of such suboptimal tools. Scholars are doing this entirely of their own accord. Why on earth would educated people do something like that? The answer is straightforward: for the same reasons 1) and 2) above! Most faculty don’t care and hence don’t know: they use what comes on their computers, pre-installed by the university. Or they have a good reason to use this tool: it’s good enough for them, and they don’t have the money for Photoshop, or would rather spend the money on experiments than on software. Or they find ImageJ too hard to use, as they are already familiar with the ubiquitous PowerPoint and can’t be bothered to switch. Or installing new software is just too much of a hassle. Et cetera.

With this example in mind, how do you get scholars to choose open publishing alternatives over legacy publishers? How do you get them to use open evaluation procedures instead of the impact factor? How do you get them to save their data to a repository rather than on a thumb drive? You provide them – automatically, free of charge and ready to use – with the tools you want them to employ, with the default settings (i.e., open) you prefer. The large majority who don’t care and hence don’t know will just use what’s convenient, quick and free, so they can focus on what matters most to them: their scholarship. Those who have good reasons to keep their work closed will balance those reasons against the potential negative consequences (e.g., more time and effort, potential suspicion if everything else is open, etc.) and will be able to make their work as closed as it needs to be for them. Ideally, such tools come with their own reasons to use them, such as increased efficiency over legacy tools or new, more and better functionality. Conversely, and equally obviously, you stop providing scholars with anything you don’t want them to use, such as subscription journals. Or impact factors. Or typewriters.

Since subscriptions globally run at about US$10bn every year, and the technology for a scholarly commons can be had off the shelf, the kind of modern infrastructure that would get scholars to change their behavior could be bought with the funds saved by subscription cancellations alone. As such an infrastructure would provide scholars with a superior toolset, it would also add ‘efficiency’ and ‘functionality’ to the reasons, under 2), why scholars would use this new, open infrastructure.

Posted on May 24, 2018 at 11:56 6 Comments

Why open access Big Deals are worse than subscriptions

In: science politics • Tags: open access, publishers

Notwithstanding the barrage of criticism and warnings from every corner of the scholarly community, various initiatives, mainly in the Netherlands, Finland, Germany, France and the UK, continue their efforts toward a smooth transition from subscriptions to open access without further disruption. The underlying idea is to shift subscription funds to article processing charges (APCs). While most share the sentiment that subscriptions need to go and the saved funds should be used for publishing rather than reading, this particular approach has drawn widespread criticism.

The main point of contention is whether this approach is sustainable in the long run. There seem to be unanimous expectations of dropping prices with such a transition. These hopes are mainly based on current APC price estimates of around 40% below current subscription costs (calculated as cost per article). One may argue that a mere 40% drop is still a gift to legacy publishers when actual publishing costs are more like 90% below current subscription prices, but this shall not be the main point here.
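To make the percentages concrete, here is a trivial back-of-the-envelope calculation; the $5,000 cost-per-article figure is an assumed round number for illustration only, not one taken from any of the studies:

```python
# Illustrative arithmetic only: assumed average subscription
# cost per article, in USD (a made-up round number).
subscription_cost_per_article = 5000.0

# "40% below subscription costs": 60% of the assumed figure remains.
apc_estimate = subscription_cost_per_article * 0.60

# "90% below subscription prices": 10% of the assumed figure remains.
actual_cost = subscription_cost_per_article * 0.10

print(f"APC estimate: ${apc_estimate:.0f}, actual cost: ${actual_cost:.0f}")
```

Under these assumed numbers, the APC-based transition would still leave publishers with several times the actual publishing cost per article, which is the gap the post refers to.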

The main question is: even if publishers eventually agree to a 40% drop in revenue (which seems more likely than not at this point, despite public publisher resistance), how can we keep them from increasing prices to old levels? The initiatives mentioned above are notably silent on how this could be achieved. One hears of rigid price-caps and sincere promises of definitely not paying “outrageous prices”, but so far a clear strategy of what is going to happen if a publisher raises the prices above inflation or even by, say, 40% or 70% has not been made public. I haven’t even heard about a strategy being discussed.

As I see it, there are two main positions these initiatives are taking: for one, there are price caps, above which libraries are not supposed to be paying APCs. The second amounts to simply walking away from any renewal of a contract if the prices become too high. Both cases entail that faculty will have to foot at least some of the bill to publish.

At our institution, we have price caps for our APC fund, and people here simply pay either from their budget or out of their own pocket if the APC exceeds this cap. Given that APCs scale with journal prestige (Nature Communications being a particularly egregious example, with APCs that already exceed average subscription costs per article), it is precisely the most sought-after journals, the ones scholars are most desperate to publish in, that will be affected by this policy:

APCs scale with IF, data from two different studies (click for larger picture)

Phrased differently: in the richest institutions, faculty will be able to publish in the most prestigious journals while in the poorer ones, they will have to pay extra for the same glory – or just not be able to publish there at all. In this “APC-capped world”, the rich will stay in science, the poor will have to drop out.

Irrespective of any price caps, with a decades-long publisher track record of double-digit year-over-year price increases (subscriptions or APCs), every library, then (“serials crisis”) as now, will be faced with budgetary constraints that force it to contemplate dropping the Big Deal of a given publisher. How likely have libraries been to drop subscriptions in such circumstances? Not very, it seems. Despite constant supra-inflation price hikes, libraries have cut other acquisitions, explored additional revenue streams and enlarged their subscription budgets over several decades. Faculty demand, it is said, is paramount, and hence all publisher demands, no matter how outrageous, have historically been met, to this day, when publisher profits have swelled to 40% and higher. Thus, the fact that faculty need to read has made it virtually impossible for libraries to walk away from subscription negotiations, allowing publishers to charge essentially what they want and get away with it.

How will this dynamic change once libraries cease to pay for read-access and pay instead for write-access? Without a subscription, every article behind a paywall is just a few extra clicks or some waiting time further away than with a subscription. Nevertheless, resistance to subscription cancellations is as high as ever, with some librarians even fearing for their jobs should they cancel subscriptions perceived vital for some of their faculty.

Having lived through several subscription cuts at various institutions, I find this scenario wildly unrealistic, but not being a librarian, I have to take such statements at face value. Apparently, if libraries were to cancel read-access for their institutions, forcing faculty to add a few clicks to their reading habits, some librarians might risk their jobs. If that is indeed the case, what is a realistic scenario for APC cancellations? What might happen to a librarian who, instead of asking faculty to spend a few additional clicks on essential articles, blocked them from publishing by no longer paying APCs? Given the ‘publish or perish’ culture we live in, it is straightforward to assume that, should this happen, one very realistic scenario is that of faculty marching on the library with torches, setting fire to the building and tarring and feathering everyone inside who could not escape.

Obviously, if libraries have extended hyperinflationary subscription Big Deals out of fear of faculty reprisals for decades, they will do so with even more fervor and conviction for open access Big Deals, as such cancellations are orders of magnitude worse for their faculty than subscription cancellations. Here is a brief comparison of the subscription scenario with the Big OA Deal scenario:

Thus, the hopes of transitioning away from subscriptions without major disruptions are illusory. In fact, the unintended consequences of such well-meaning efforts will likely be worse than what we have today.

Posted on April 13, 2018 at 13:38 8 Comments
Jan19

Come work with us!

In: news • Tags: position, technician, work

We are looking for a permanent, full-time technician, arguably the most important position in our laboratory. The main perks that come with the position are that it is permanent and that we are a small group of very enthusiastic colleagues where there is always something different going on. For those so inclined, we also offer the possibility to conduct their own research projects, to the extent the candidate feels comfortable with. There are also no fixed start or end times for our working days: as long as it is daytime for the flies, the candidate will be free to schedule their work hours (40.1h per week) for maximum work-life balance. Compensation follows German rules, in this case TVL-E7.

The routine tasks of the position are flexible and limited: maintenance of the Drosophila stock collection, preparation of media, ordering of consumables, preparing flies for behavioral experiments and perhaps some histology or molecular biology every now and then. Support of experimentation is variable and dependent on current projects. This is the area where the most flexibility arises for the successful candidate. Some support of student courses in terms of preparing materials and other technical assistance is expected.

The successful candidate will have a BTA/MTA or an equivalent degree, experience in laboratory logistics and ideally also in insect/Drosophila handling and breeding. Additional experience in IT, molecular biology or other research is advantageous but not required.

While it is helpful to speak German, it is not required. English is the language of our laboratory, so it would be difficult to contribute to our work without speaking the language at least on a conversational level.

The official job advertisement (PDF in German) can be found on the University of Regensburg website.

Posted on January 19, 2018 at 16:02 Comments Off on Come work with us!
Jan16

Why academic journals need to go

In: science politics • Tags: decentralized, infrastructure, journals, standards

In his fantastic Peters Memorial Lecture on the occasion of receiving CNI‘s Paul Evan Peters Award, Herbert Van de Sompel of Los Alamos National Laboratory described my calls to drop subscriptions as “radical” and “extremist” (starting at about minute 58):

Regardless of what Herbert called my views, this is a must-see presentation in which Herbert essentially presents the technology and standards behind the functionalities I have been asking for and have been trying to get implemented for the last decade or so. Apparently, where we differ is only that I actually want to use the functionalities and concepts he describes in his presentation and, consequently, I am naive or idealistic enough to think of ways to get there. If this makes me a radical, so be it: radix is Latin for ‘root’ and I try to tackle the root of our problems.

Right before he talks about me, he also mentions David Lewis’ 2.5% Commitment, which I also support. In Cameron Neylon’s critique of Lewis’ approach one can find an important realization that bears quoting and repeating as it is one of the main obstacles why Herbert thinks we will never have the tools he describes in his presentation. Cameron writes:

That in turn is the problem in the scholarly communication space. Such shared identities and notions of consent do not exist. The somewhat unproductive argument over whether it is the libraries responsibility to cut subscriptions or academics responsibility to ask them to illustrates this. It is actually a shared responsibility, but one that is not supported by sense of shared identity and purpose, certainly not of shared governance. And notably success stories in cutting subscriptions all feature serious efforts to form and strengthen shared identity and purpose within an institution before taking action.

Cameron very astutely dissects one of the main sociological issues holding us back: scholars no longer share a common identity, just as librarians and faculty do not, and just as different scholarly institutions do not share an identity with each other. So pernicious an effect have the neoliberal mantras of “competition” and “every man for himself” had on scholarship that it has all but completely disintegrated into either warring factions or competing careerists. University rankings provide a clear metaphor for scholarly institutions as players in a competition for whatever the neoliberal ideologues want them to compete for: funds, human resources or prestige (a.k.a. the scholarly fetish “excellence”). If you talk to current university presidents, deans or provosts, or read what they have written, it seems as if most of them have completely absorbed the neoliberal Kool-Aid and made themselves the defenders of individuality, competition and external ‘incentives’, with the underlying assumption that without those concepts, everybody in academia would just sit in their comfy chairs collecting tax funds and twiddling their thumbs. Apparently, carrots and sticks are the only way to squeeze excellence out of otherwise lazy, selfish and parasitic scholars. Ironically, election data suggest that a large section of these scholarly politicians, if they are representative of their academic peers at large, may go on to vote for left-of-center parties or candidates who vow to combat exactly the neoliberal policies they so ardently defend in their day jobs.

Be that as it may, apparently even for an advocate and expert like Herbert, asking scholars to cooperate in order to achieve a greater, public good, has now become sufficient grounds to label someone who strives for cooperation as a “radical” or an “extremist”. If indeed he is correct and in 2018 asking scholars to behave cooperatively, rather than competitively, is something so exotic and outrageous, scholarship has deserved the state it currently is in.

These thoughts reminded me of an old cartoon I have been showing in many of my presentations. Now, I’m posting a disambiguated version of the cartoon (sorry, I can’t provide a source for the cartoon; I created it from a photograph I was once sent) that I hope explains in an entertaining way why dropping all subscriptions and buying Herbert’s solutions with the money instead isn’t extreme at all (click for larger image):

All scholars, and those working to support scholars, share a common identity. Cameron is spot on in that all too few realize that we all strive for better scholarship, for more knowledge. Acquiring knowledge for its own sake is one of the very few behaviors that humans do not share with other animals, and all scholars share a particular enthusiasm for knowledge. In fact, the German word for scholarship is “Wissenschaft”, literally translated as “knowledge creation”. In this argument, it doesn’t matter if scholar A is at institution X and librarian B is at institution Y – they are all scholars.

I must assume (not having been there) that this sense of communalism (to use Merton’s term) and shared identity (to use Cameron’s term) must have been much more prevalent in the early 1990s, when institutions invested in routers, cables, computers and other hardware (and time!) for something nobody yet knew what it could do: the WWW. I often wonder what faculty would have said around 1992 (the first time I had an email address), had a computing center employee asked them: “wouldn’t you like a new service, let’s call it ’email’, by which your students could reach you 24/7?”. I tend to believe that if that had been the mindset of infrastructure experts at the time, we would not have any internet today.

Instead, infrastructure experts at the time embraced the new technology, were competent enough to realize which standards worked and would be sustainable long into the future and started spending some serious money – regardless of whether faculty expressed any interest in using any of this technology. In contrast, today, we stand to save money from adopting the standards Herbert talks about and yet thinking about how to practically achieve adoption of such common standards is grounds for being labeled an extremist. How dare I suggest implementing modern technology without asking faculty first! Today, we have similarly competent experts like Herbert, but they seem to despair, expecting this modern technology to never arrive for scholarship, instead of doing what their predecessors have done: embrace the new technology and the potential it brings and implement it. What a difference 25 years make: the common good was sufficient cause for spending money in the 1990s, when today it is seen as ‘extremist’ just to try and save money while promoting the common good.

Today, librarians and other infrastructure experts dare not implement modern technology without fear of reprisals: after all, faculty are not colleagues any more who share a common identity, they are customers and librarians are service providers in this corporation only called ‘university’ for dusty historical reasons. Clearly, single institutions cannot act without risking league table standings or the competitiveness of their labor force. Everyone is busy chasing prestige in an absurd artificial competition where “excellence” is the only thing that counts, but can’t itself be counted. Some of Monty Python’s most absurd sketches appear rational in comparison.

When done competently, dropping subscriptions today no longer risks anybody’s livelihood or league standing. Thanks to a growing set of tools, journals remain accessible during the transition period. The old adage “everybody who needs access has access”, once used to resist open access campaigning, has finally become true – without subscriptions. We just need to take advantage of the new circumstances. After the transition, nobody needs access to journals that no longer exist, so the enabling properties of this toolset are what is decisive here; that the toolset itself does not constitute a solution becomes irrelevant: we have the solutions, as Herbert so eloquently explains. What we needed were enabling technologies, and now we have those, too. Most journals won’t survive being cut off from all funding.

And yet, faculty continue to chase journal spots as vehicles for their discoveries, hoping to harvest enough prestige from them just to keep going. Without removing this source of prestige, faculty and students/postdocs have little choice but to reject the better vehicles we could now offer. This is the main reason why 25 years of campaigning for scholarly infrastructure reform have barely brought scholarship to embrace the web of 1995 (in the words of Jon Tennant). Journals, the square wheels, are the main physical obstacles to the technology Herbert describes in his presentation. They need to go. That is a rational solution that targets the root of the problem. If scholars can find the Higgs boson, I’m confident they can find other sources of prestige once journals have ceased to exist – should they decide that chasing prestige is a functionality they wish to replicate.

Coincidentally, journal subscriptions also usurp most of the funds required for implementing Herbert’s solutions – the round wheels. Canceling subscriptions hence serves two main purposes: removing the main obstacle for scholars using modern information technology and freeing up funds to implement said technology: removing the square wheels and replacing them with round wheels. Subscription journals are the keystone in the current scholarly communication arch: remove them and it all falls apart. Any journal-like functionality that scholars value is easily recreated with modern technology, but with new functionalities and few, if any, of the current disadvantages and unintended consequences.

Finally, with scholars so busy chasing excellence, chances are slim to none they will ever ask for round wheels, as so many librarians I speak to seem to hope for.

Posted on January 16, 2018 at 15:49 7 Comments
Nov29

Is a cost-neutral transition to open access realistic?

In: science politics • Tags: open access, strategy, transition

Current estimates for the cost of subscription articles converge around US$5,000 per article. This number is reached by dividing the estimated US$10bn spent on subscriptions annually world-wide by the two million articles published every year. Current initiatives aiming for a transition from subscriptions to gold (article processing charge, APC-based) open access emphasize that the transition has to be cost-neutral.
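The arithmetic behind that figure is a one-liner; here is a quick sketch in Python, using only the two ballpark estimates quoted above (these are the post's estimates, not audited numbers):

```python
# Ballpark cost per subscription article, from the post's estimates:
# ~US$10bn global subscription spending per year, ~2 million articles per year.
subscription_spend = 10_000_000_000  # US$ per year (estimate)
articles_per_year = 2_000_000        # estimate

cost_per_article = subscription_spend / articles_per_year
print(f"~US${cost_per_article:,.0f} per subscription article")  # ~US$5,000
```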

How realistic is a cost-neutral transition from subscriptions to open access?

There is ample material on how much the publication of a scholarly open-access article costs. SciELO publishes for under US$100 per article. SciELO, however, publishes largely in regions of the world where labor and other associated costs are comparatively low. What costs do other organizations cite? The Open Library of Humanities is on record with about US$500; Ubiquity, Hindawi and PeerJ are also on record within the US$100-500 range; and Scholastica, RIO Journal, Science Open, F1000Research or arpha mention similar figures. Thus, there are now about ten different companies or organizations which all agree that open access publishing costs about US$100-500, and independent analysts concur: it is established that the actual costs of publishing a scholarly article are in the low hundreds of US$.

Thus, about 90% savings is the theoretical upper bound of what we could get out of a transition deal. This would free up about US$9bn annually, which we could, for instance, invest in modern technology for our publication infrastructure. If there is such a huge potential for savings, why do all these open access initiatives only aim at a “cost-neutral” transition? Surprisingly, when planning their transition, these initiatives never looked at costs, only at current prices. Current open access prices (for those journals where they are charged) lie between about US$2,500 and US$3,500. Because the most expensive journals such as, e.g., Nature (which is on record to have to charge US$50,000 per article to maintain current revenue levels) are not yet open access, and prices are expected to rise over the transition period and beyond, these initiatives calculate with higher APCs.

There are three main components as to why legacy publishers are charging so much:

  1. Inefficiency – with a profit margin of 40% and a market that carries essentially any price increase, there is little pressure to cut down on costs
  2. Paywalls – legacy publishers have to maintain a huge infrastructure dedicated to preventing access. The internet was designed to enable access, not to prevent it, so the technical and administrative challenges are huge. It is thus credible when publishers routinely justify their big deals, which bundle all of their journals, as cheaper than any smaller selection of their journals, by pointing to the increased costs of making sure they can accurately distinguish whether a user from university X has access to journal Y but not journal Z. The frequent mishaps with paywalls for nominally open access articles are a testament to how difficult, and hence expensive, something like this is to maintain. Of course, it therefore makes great business sense for a company that still has paywalls to have all revenue contribute to these costs, even open access revenue. “Double-dipping” is a normal, sound business practice which contributes a fair amount to the high open access prices (APCs) we pay today.
  3. Profits – legacy publishers are accustomed to profit margins of about 40%, i.e., about US$2,000 of the US$5,000 they are collecting per subscription article, on average. So these kinds of sums are added to any cost-based APC of legacy publishers.

Even for publishers without paywalls it may be tempting simply to ride the wave of high APCs, because the customer base provides for that sort of revenue. For instance, Emerald recently increased its APCs by ~70% merely for such competition-based reasons.

From this list one can estimate a lower bound of what a transition to open access should at least yield, even if the transition were totally botched and these initiatives built no advantage at all into their scheme. Let’s assume legacy publishers are half as efficient as their modern competitors, so their cost would be US$1,000 per article, and there would, as now, still be no competition to force increased efficiency. Let’s hypothetically assume the public purse is generous enough to leave current profit margins at an outrageous 40%; then the APC would amount to US$1,400 per article. There are no more paywalls to cross-finance, so these costs fall by the wayside.

Thus, from these calculations, the worst case scenario for a transition to open access is savings of about 72% and the best we could do is about 90%. Anything worse than 72% is a free, tax-payer funded gift to an international oligopoly. Why would the scholarly community support such a give-away?
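Both bounds follow from a couple of lines of arithmetic. A minimal sketch under the post's assumptions (US$5,000 current cost per article, US$500 efficient-publisher APC, and a worst case of US$1,000 legacy costs with the 40% margin treated as a markup on cost, which is how the post arrives at US$1,400):

```python
current_cost_per_article = 5_000  # US$, subscription spending / articles (estimate)

# Best case: pay what efficient open-access publishers actually charge.
efficient_apc = 500
best_case_savings = 1 - efficient_apc / current_cost_per_article

# Worst case: legacy publishers stay half as efficient (US$1,000 costs)
# and keep a 40% margin, applied here as a markup on cost.
legacy_cost = 1_000
worst_case_apc = legacy_cost * 1.40          # US$1,400
worst_case_savings = 1 - worst_case_apc / current_cost_per_article

print(f"best case:  {best_case_savings:.0%} savings")   # 90%
print(f"worst case: {worst_case_savings:.0%} savings")  # 72%
```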

Hence, what everybody should be asking of these initiatives is why they are campaigning for a cost-neutral transition to US$5,000 per article with legacy publishers, when there are many competitors that would perform the same service for 500€? How can they justify a tenfold cost to the taxpayer in favor of big, international corporations (does anybody know if any of their names have cropped up in the Panama or Paradise Papers, btw?), to the disadvantage of smaller, modern publishers? What are their reasons to prop up a legacy industry with an obscene waste of tax funds? What service could these legacy publishers possibly perform that one may use to explain to a non-scientist tax-payer why they should pay ten times as much?

A different question is, of course, how to achieve this reduction. For now, only one component stands out as crucial for the transition: the exchangeability of service providers. Only this exchangeability allows actual competition to put pressure on costs. One of the many afflictions of our current system (and one shared with APC-based gold open access) is that publishers cannot easily be switched, as every article resides with only one company. Merely switching subscription funds to APCs would not make the service providers exchangeable, because journal rank would still force authors to publish where their field dictates, preventing competition. One of (hopefully) several solutions would be a shared publication infrastructure where service providers are chosen in a bidding process and can be replaced if their services become too expensive or their cost-effectiveness drops.

Posted on November 29, 2017 at 13:46 7 Comments
Oct05

The scholarly commons: from profiteering to servicing

In: science politics • Tags: bidding, infrastructure, publishers, services

These days, many academic publishers can be considered mere Pinos: ‘publishers in name only’. Instead of making scholarly work, commonly paid for by the public, public, as the moniker ‘publisher’ would imply, they put it behind a paywall in about 80% of cases. As if that weren’t infuriating enough, profits and paywall costs add up such that the final cost to the taxpayer is tenfold what it would be if each article were just made, you know, public.

The only reason scholarship is in this embarrassing calamity is historical baggage. Nobody in their right mind would construct scholarly communication in the current way if they had to design it from scratch.

So how would one design our scholarly communication infrastructure from scratch, without historical baggage? To do that, one would have to start by defining the basic functionalities of this infrastructure. Importantly, the infrastructure would have to cover all of scholarship’s output: our narratives (text, audio, video) as well as our data and code. Current technology should save scholars time and effort when reading, writing and citing, as well as assist data collection and analysis, both on the data and on the code side. Given that what most of our institutions offer us today still remains at the level of early-1990s technology, the move to current technology should cover most of these desired functionalities.

While modern information technology may be cheap and getting cheaper every day, it isn’t free. The money has to come from somewhere. At what scale would such an infrastructure operate? The UN estimates scholarship at around 7 million full-time equivalents, so that is the ballpark figure of users to be served, probably a few million more including part-time scientists. ResearchGate claims to have about 11 million users, so that fits within this ballpark. Compared to the billions of Facebook users and the hundreds of millions of Instagram or Twitter users, this is technical peanuts. A service with a user base of this size is not facing any major technical issues. Instagram cost Facebook US$1 billion, ResearchGate runs on about 50 million, Twitter is estimated to be worth about US$15 billion. Given the size of the scholarly user base, a scholarly infrastructure would probably cost somewhere towards the lower end of this scale to acquire, and much less to run. Much of the scholarly functionality that would be missing from an off-the-shelf social platform already exists, either as open source solutions or in various initiatives, start-ups or conventional software solutions. Hence, it is probably safe to say that about US$1-3 billion would buy us a rather luxurious solution to all our problems off the shelves of currently available technology.

The running costs of our current journal-based ‘publication’ system are about US$10 billion annually. We know from various sources that the actual costs of making these works public are around or below US$1 billion per year. Thus, if some unfortunate event were to force us to redesign scholarly communication from scratch, we’d only need 10% of our current spending to keep basic article publication running the way we do now (just with every article being truly public). Conversely, we’d have US$9 billion annually for innovation, data and code if we kept infrastructure spending at current levels.
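The split described here is straightforward to check; a minimal sketch using the post's round figures (US$10bn current annual spending, ~US$1bn actual publishing costs, both estimates):

```python
current_spend = 10_000_000_000          # US$ per year on journals (estimate)
actual_publishing_cost = 1_000_000_000  # US$ per year to make work public (estimate)

share_needed = actual_publishing_cost / current_spend
freed_for_innovation = current_spend - actual_publishing_cost

print(f"publishing needs {share_needed:.0%} of current spending")
print(f"US${freed_for_innovation / 1e9:.0f}bn per year freed for innovation")
```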

Of course, crucial for any such system is governance. Geoffrey Bilder, Jennifer Lin and Cameron Neylon have provided an excellent outline for governing the scholarly commons. Besides governance, a second prerequisite for the scholarly commons is an organizational framework that keeps costs low but provides space for innovation. The last century has provided some rather convincing evidence that well-designed markets can provide precisely such a framework.

Historically, we have not enjoyed such a market. Pino profit margins exceeding 40% are only realizable precisely because there is no competition: almost every article exists only once, with a single Pino (at least the legal copies). Hence, each Pino has a de facto monopoly on that article and can charge whatever its customers are able to pay.

However, if the scholarly work instead remained in the hands of scholars, within the scholarly commons, then companies could compete with each other for the best services, the most convenient and innovative functionalities around this scholarship. Institutions would be able to leverage tried and tested bidding procedures to stimulate competition and have a choice of competitors. Alternatively, institutions could decide to invest (part of) their infrastructure funds into in-house expertise and put pressure on companies to provide better value for money than the in-house services. In other words, such a framework for the scholarly commons would afford institutions the same kind of leverage and range of choices and strategies as they enjoy for any other infrastructure, be it IT hard/software, HVAC, electricity, water, etc.

For the past few years, several initiatives and organizations have started to implement such a framework. For instance, the Wellcome Trust has launched Wellcome Open Research, a publishing platform for Wellcome-funded researchers. Currently, F1000Research runs the technology behind this platform, but that may change in the future if better competitors come along – without any user necessarily noticing the change. Scholarly societies which run their own journals are starting bidding processes for the service providers that run them. The Open Library of Humanities runs its journals on fixed-term contracts with clauses that allow the journals to switch providers if OLH is not satisfied. All of these examples show that this type of framework is both currently in use and viable. Wherever costs are known, they come to lie around the 10% figure given above, i.e., the organizations running these journals or platforms are saving about 90% compared to legacy Pino solutions. However, most current journals are owned by publishers, preventing any switch in service providers.

If current Pinos really cared as much about scholarship as they keep emphasizing, they would get on board with these more recent developments, maybe help develop the scholarly standards needed for a scholarly commons, and offer their services around these standards. Interestingly, eLife (not a Pino) recently announced a collaboration to start developing the core of such open standards. Pinos, on the other hand, indicate through their acquisitions, lobbying, visions and policies their ongoing efforts to cement current profit margins and to prevent or stall the transition from profiteering to servicing.

Posted on October 5, 2017 at 16:30 2 Comments

Creative Commons License bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License. | theme modified from Easel | Subscribe: RSS | Back to Top ↑
