bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Improved Plan S principles raise hope of more effective policies soon

In: science politics • Tags: infrastructure, Plan I, Plan S

Yesterday, cOAlition S published their updated principles and implementation guidelines for #PlanS, together with the rationale behind the update. This is a very welcome effort, as it shows a growing awareness among funders of their potential leverage in modernizing scholarly infrastructure, at a time when institutions have apparently abandoned their faculty completely.

These policies would have been a much-needed initiative about eight years ago, when there was still a problem of access to the scholarly literature, when Unpaywall didn't exist and Sci-Hub had just launched. Today, we have so many ways to access our literature that these policies seem more like beating a dead horse. From this perspective, Plan S targets the wrong actors (individuals rather than institutions) to achieve a minor goal (Open Access), while our infrastructure rewards unreliable research, costs ten times too much and lacks crucial functionalities. The three components of the scholarly infrastructure emergency (reliability, affordability and functionality; RAF) remain untouched by Plan S, while a (today) minor problem receives more attention than it deserves. In a way, Plan S is a band-aid for a small cut on the hand while the large, malignant tumor remains untreated.

It is in the power of cOAlition S funders to set policies that would tackle the RAF tumor/emergency and help solve the minor access problem as an added benefit: require institutions to provide modern scholarly infrastructure before any research funds can flow. Given the overall aim and language of Plan S, one can remain optimistic that cOAlition S will deliver such better-targeted, modern and hence more effective policies. Perhaps we will soon read about a "Plan I" (for infrastructure)?

Posted on May 31, 2019 at 10:55 Comments Off on Improved Plan S principles raise hope of more effective policies soon

Unpersuadables: When scientists dismiss science for political reasons

In: science politics • Tags: evidence-resistance, questionable research practices, unpersuadables

Scientists are used to vested interests disputing scientific claims. Tobacco corporations have tried to discredit the science about lung cancer and smoking, creationists keep raising the same long-debunked objections against evolution, climate-deniers claim the earth is cooling, anti-vaxxers believe the MMR vaccine causes autism, and homeopaths delude themselves that a little drop of nothing has magical healing powers. No amount of evidence will convince such "unpersuadables".

What receives less attention, though, is what may be a more severe problem for science, namely the dismissal of science by scientists – unpersuadable scientists.

Documenting this phenomenon is difficult, because the dismissal of science is only rarely uttered in writing or in public. One would tend to hope that this is an indication that such behavior may be rare. Until recently, I was aware of only two instances: one is the now infamous blog post by Brian Wansink, the other the public statements by decision-makers at a German conference, documented elsewhere. Recently, I witnessed a third instance.

At a dinner table with about ten participants, all academics, the discussion entered the topic of 'quality' academic journals and journal rank. When I referenced the data showing that higher journal rank tends to be associated with lower experimental reliability, several individuals mentioned that they found these results hard to believe. When I asked about data to the contrary that might be the basis for their hesitation, the participants only emphasized that they had no other data, just their "intuition" and "gut feeling". When I asked what they did when their own experiments yielded data that went against their intuition or gut feeling, one professor exclaimed: "I tell the postdoc to do the experiment again and a third time if need be!". When I expressed my shock at such practices, the two most senior professors, one of whom was once a university president and both of whom are medical faculty, emphatically accused me of being dogmatic for giving primacy to scientific evidence rather than intuitions or gut feelings.

Recent evidence points towards published scientific results, at least in some fields, being far less reliable than one would expect. If it were common that the reliability of science hinged on postdocs standing up for their data against the gut feeling of the person who will write their letters of recommendation and/or extend their contract, we may have to brace ourselves for more bad news coming from the reproducibility projects being carried out right now.

Wansink was trained in marketing and had no idea about science. His lack of training and incompetence in science may be an excuse for his behavior. These two individuals, however, have graduated from medical school, have decades of research and teaching experience under their belts, and one of them even complained that "most of the authors of the manuscripts I review or edit have no clue about statistics". Surely, these individuals recognize questionable research practices when they see them? Nevertheless, similar to Wansink, they wouldn't take a "failed" experiment for an answer, and similar to the decision-makers at the meeting in Germany in 2015, they would put their experience before peer-reviewed evidence.

If scientific training doesn't immunize individuals against unscientific thinking and questionable research practices, how can we select a future cadre of decision-makers in science who do not put themselves before science and who will implement evidence-based policies, instead of power-brokering their way to new positions? There is a recent article on "intellectual humility" – maybe this is the way to go?

P.S.: There are more instances of scientists publicly dismissing evidence that contradicts their own beliefs: Zimbardo and Bargh spring to mind, and I'll look for others.

Posted on May 22, 2019 at 15:20 3 Comments

New OSTP director caught lying in interview

In: science politics • Tags: Droegemeier, OSTP, politics

With a delay of about two years, US President Donald Trump hired Kelvin Droegemeier as director of the White House Office of Science and Technology Policy (OSTP) in February 2019. In a recent interview, Dr. Droegemeier made the following broad, categorical statement in response to a Plan S question:

One of the things this government will not do is to tell researchers where they have to publish their papers. That is absolutely up to the scholar who’s doing the publication. There’s just no question about that.

[…]

This goes back to the president’s fundamental philosophy of let’s not shackle people, let’s let them be free to choose where they publish. Let’s not put any administrative constraints on them.

As Hanlon's razor requires one to first rule out incompetence before attributing malice to any action, one needs to assess the competence of Dr. Droegemeier with regard to such sweeping policy statements. On his Wikipedia entry one can find numerous political appointments to various policy posts since 2004. It is thus fair to assume that Dr. Droegemeier has a fair bit of experience and expertise with regard to the long-standing policies of the US government. Moreover, previous director Holdren and several other scientists have spoken out in favor of Dr. Droegemeier's appointment. Attributing any false statement of his to incompetence would thus seem unlikely, even if he was appointed by Trump.

It is thus straightforward to conclude that the above statement was a lie with the purpose of scoring political points, as his statements are very easy to verify. For instance, one can have a look at the job advertisements of a federal agency, such as the NIH, to see if they contain any language that implies that the government may "tell researchers where they have to publish their papers". Such statements are, of course, easy enough to find. Here are two examples:

This ad for a postdoctoral position in the NIAID/NIH Translational Allergic Immunopathy Unit lists among the expected qualifications "a track record of publication in peer-reviewed journals". Clearly, you are not welcome at the NIH ("this government") if you haven't published in these particular venues.

This ad for a postdoctoral position at the NCI/NIH cancer center goes even further. It states the explicit expectation that the candidate, once hired, continue to publish in such journals, as the candidate "is anticipated to complete this study with one or more high-impact publications in peer review journals."

Clearly, one of the things this US government does do is tell researchers in no uncertain terms where they have to publish their papers if they want to be or stay hired by this government, in clear contradiction to what Dr. Droegemeier stated in the interview. Surely a man with such ample experience in science policy is aware of the policies of the US's largest scientific agency?

Posted on May 2, 2019 at 12:31 Comments Off on New OSTP director caught lying in interview

New England Journal of Medicine – and you thought Nature was expensive?

In: science politics • Tags: publishers

The New England Journal of Medicine has come out strongly against Open Access. Apparently, this journal does not value access to medical information very highly. This lack of valuation could have several reasons. For one, the NEJM leads the medical publishing industry in retractions:

[Figure: retraction data by journal (source linked in the original post)]

From this perspective, it would make sense not to endanger the public with highly unreliable medical information until it has been properly vetted. It could also be that the journal thinks that science (or medicine, in this case) either goes their way or the highway. This interpretation would be supported by their 2016 editorial, in which they branded every scientist who used data they had not collected themselves as "research parasites". Now there are eponymous awards.

Finally, and perhaps equally likely (the reasons are, of course, not mutually exclusive), it could simply be about money. In their public 2016 tax return (does using their numbers here make me a research parasite, too?), the NEJM lists the following items, all per annum and in US$:

Publication revenue: 103,145,834
Revenue less expenses: 3,896,197
Salaries:

  1. Editor in Chief: 702,324
  2. Vice president, publishing: 472,499
  3. Vice president/General counsel: 417,405
  4. Executive editor: 393,059
  5. Former executive vice president: 383,007
  6. Executive vice president: 380,480
  7. Executive director, global sales: 368,254
  8. Executive deputy editor: 328,461
  9. Deputy editor: 321,468
  10. Vice president, Finance: 321,053

Salary sum: 4,088,010

These numbers have to be put into perspective within the wider publishing ecosystem. In contrast to this blog post, publishing research articles costs serious money: to make a peer-reviewed manuscript public, total costs of several hundred US$/€ accrue. These costs are fairly similar across publishers; see, e.g., this overview. Other publishers confirm these numbers.

According to the Web of Science, the NEJM (2016 Impact Factor: 72.406) published 328 citable items in 2016. Of course, they published many more articles that can in principle be cited (1,304 of those, to be exact), but while all of their citations are counted in the numerator of the impact factor (IF) calculation, these 1,304 "missing" articles are not counted in the denominator. This is common practice, and each journal negotiates with Clarivate Analytics, the company that publishes the IF, what gets counted. It is reasonable to assume that only the citable items contain valuable scientific or medical research for which the public would have to pay in an Open Access scenario, while the other articles are opinion or news pieces, either commissioned or written by in-house staff.

Following the reasoning put forth by publishers such as SpringerNature, that Open Access publication charges should provide publishers with the same publication revenue as the current subscriptions, in 2016 the public would have had to spend approx. US$103,145,834 for the 328 research-type articles NEJM publishes annually.

This would amount to an article processing charge (APC) for NEJM of around US$314,000.
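For transparency, here is the simple arithmetic behind these figures as a short Python sketch (the numbers are those cited above; the per-impact-point figure anticipates the comparison further below):

    # Back-of-the-envelope calculation: what NEJM's 2016 publication revenue
    # would amount to per citable item if it were recovered via APCs
    revenue_2016 = 103_145_834       # US$, publication revenue from the 2016 tax return
    citable_items_2016 = 328         # citable items according to Web of Science
    impact_factor_2016 = 72.406

    apc_equivalent = revenue_2016 / citable_items_2016
    print(f"Revenue per citable item: ${apc_equivalent:,.0f}")                        # ~ US$314,000
    print(f"Per impact factor point:  ${apc_equivalent / impact_factor_2016:,.0f}")   # > US$4,000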

Or, phrased differently, the current business model of NEJM entails the tax-payer paying more than US$300k for each research article in NEJM, which, at the same time:

  • pays their management staff 3-7 times the income of one of their professor-authors
  • for each research-type article, cross-subsidizes about four other news-type or opinion articles, some of which insult scientists
  • pays for the rejection costs of 95% of all submitted articles
  • overpays the actual publishing costs by about 1,200-fold

In comparison, the 30k€ estimated to be paid to Nature-branded journals looks like a downright bargain. When one compares cost per IF point, one can see that for, e.g., Nature itself, an author would pay about 1,000-2,000€ per impact point, while they would have to pay more than $4,000 per impact point for an NEJM article.

While both journals lead the industry in terms of unreliable science, one could also conclude, from an author's perspective, that shoddy science may be less likely to be detected at Nature, definitely another plus if one wants to get published but doesn't want to get caught.

But then again, at many medical departments and schools, what counts is neither efficiency nor reliability, but raw IF power. In that case, there would be little choice but to add US$300k to the next grant application, should NEJM ever go APC-OA, despite this recent editorial.

P.S.: Alternatively, NEJM authors could just get a Rolls-Royce.

Posted on March 29, 2019 at 13:41 2 Comments

How publishers keep fooling academics

In: science politics • Tags: costs, price, publishers

Time and time again, academic publishers have managed to create the impression that publishing incurs a lot of costs which justify the outrageous prices they charge, be it US$11M p.a. for an Elsevier Big Deal subscription or an article processing charge (APC) of US$5,200 for a Nature Communications article.
This week, again, an academic publisher, SpringerNature, reassured its readers that it has huge costs that necessitate the prices it charges. This time, the publisher repeated their testimony from 2004 that "they have high internal costs" that amount to €10,000-30,000 per published article:

Springer Nature estimates that it costs, on average, €10,000–30,000 to publish an article in one of its Nature-branded journals

However, in their 2004 testimony, where they state figures in a similar range, they explain how they arrive at these numbers:

The £30,000 figure was arrived at simply by dividing the annual income of Nature (£30 million) by the number of research papers published (1,000).

This means that what the publisher is referring to isn't its cost of publishing at all; it is the price it charges the public for all of its services.
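To make the distinction explicit, here is the same arithmetic as a small Python sketch, contrasting the price derived in the 2004 testimony with the independently estimated technical cost of making an article public (the cost range is the one discussed in the next paragraph):

    # Price vs. cost: dividing income by output yields the price charged per paper,
    # not the technical cost of publishing it
    annual_income_gbp = 30_000_000       # Nature's annual income cited in the 2004 testimony
    papers_per_year = 1_000              # research papers published per year

    price_per_paper = annual_income_gbp / papers_per_year
    technical_cost_range_eur = (200, 500)    # typical per-article publication cost (see below)

    print(f"Price charged per paper:    £{price_per_paper:,.0f}")    # £30,000
    print(f"Technical publication cost: €{technical_cost_range_eur[0]}-{technical_cost_range_eur[1]} per article")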

It is well established that the cost of making an article public, with all the bells and whistles that come with an academic article, is between US$/€200-500. This is the item one would reasonably call "publication costs". Because it is so low, this item cannot be the main reason for the price of a typical Nature-branded article. SpringerNature performs additional services, some of which are somewhat related to the publication process, others not so much.

For instance, journals such as Nature or Science reject about 92% of all submitted articles. Someone needs to read and reject all of these articles. Such “selectivity” is explicitly mentioned as a reason for the high prices.
It is important to keep in mind that this expensive selectivity fails to accomplish any increase in quality and is thus completely ineffective; the entire practice is very reminiscent of how potatoes were introduced in France. Nevertheless, the salaries of the employees who reject all these manuscripts are a cost item, effective or not. As this item only concerns the articles that are not published, it sounds rather absurd to lump it in with "publication costs", even though it is sort of related (non-publication costs? rejection costs?).

Another cost item is the paywall used to prevent non-subscribers from accessing the articles. Such paywalls can be very expensive; the New York Times, for instance, is reported to have spent anywhere between US$25-55M on theirs. This cost item, I would argue, is even less related to publishing.

Finally, there are cost items that are completely and rather uncontroversially unrelated to publishing, such as salaries for management, executives or government relations, as well as other costs such as journalism and news services, the latter explicitly mentioned in the recent article.

All of these cost items together make up the ~€10,000-30,000 that are currently being paid for an article in the SpringerNature stable, and there is no reason to doubt this price tag. Importantly, peer review is not a cost item, as the reviewers are paid via their academic salaries and not by the publisher. Authors are also not paid by the publisher, so this is not a cost item either. The person organizing the peer review is usually one of the persons rejecting all those other manuscripts, so 92% of their salary is already covered by the rejection costs.

If the scholarly community accepts this price as reasonable, it needs to be prepared to explain to the tax-payer why it is justified to use public funds to pay a private company such as SpringerNature less than ~€500 to publish a scholarly article in one of its journals, and then an additional ~€29,500 for cost items such as ineffectively rejecting other articles, making sure the 'published' articles remain difficult to access for most ordinary tax-payers, and the salaries of the company's executives and lobbyists.

Posted on March 1, 2019 at 23:15 4 Comments

Providing recommendations for Plan S implementation

In: science politics • Tags: cOAlition S, infrastructure, Plan I, Plan S

Since cOAlition S is asking the community for recommendations on the implementation of their Plan S, I have also chipped in. In their feedback form, they ask two questions, which I have answered with the replies below. With more than 700 such recommendations already posted, I am not deluding myself that anybody is going to read mine, so, for the record, here are my answers (with links added that I did not include in the form):

Is there anything unclear or are there any issues that have not been addressed by the guidance document?

The document is very clear and I support the principles behind it. The only major issue left unaddressed is the real threat of universal APC-based OA as a potential outcome. This unintended consequence is particularly pernicious, because it would merely change the accessibility of the literature (which currently is not even a major issue, hence the many Big Deal cancellations world-wide), leaving all other factors untouched. A consequence of universal APC-OA is that monetary inequity would be added to a scholarly infrastructure that is already rife with replication issues, other inequities and a dearth of digital functionalities. Moreover, the available evidence suggests that authors' publishing strategies take prestige and other factors into account more than cost, which explains the observation of already rising APCs. A price cap is de facto unenforceable, as authors will pay any price above the cap if they deem the cost worth the benefit. Here in Germany, it has become routine over the last decade to pay any APC above the 2000€ cap imposed by the DFG from other sources; hence, APCs have risen unimpeded in Germany as well over the last ten years. A switch away from journal-based evaluations, as intended by DORA, would change authors' publication strategies only once hardly any evaluations were conducted by journal rank any more, a time point decades in the future given the current ubiquitous use of journal rank despite decades of arguments against the practice. Thus, the currently available evidence suggests that a switch to universal APC-based OA, all else remaining equal, would likely lead to the unintended consequence of massively deteriorating the current status quo, in particular at the expense of the most vulnerable scholars and to the benefit of the already successful players. Therefore, rather than pushing access to only the literature (no longer a major problem) at all costs, universal APC-based OA needs to be avoided at all costs.

A minor issue is that Plan S does not address any research output other than text-based narratives. Why is, e.g., research data only mentioned in passing, and code/software even explicitly relegated to "external" repositories? Data and code are not second-class research objects.

Are there other mechanisms or requirements funders should consider to foster full and immediate Open Access of research outputs?

Individual mandates prior to Plan S (e.g., Liège, NIH, etc.) have proven to be effective. Especially when leveraged across large numbers of researchers, they can have a noticeable impact on the accessibility of research publications. Widespread adoption of these policy instruments is also a clear sign of a broader consensus about what good, modern scholarship entails. However, so far, these mandates have not only failed to cover research outputs other than scholarly publications; some of them have also proven difficult to enforce or have contained incentives for APC-based OA (see above). A small change to routine proceedings at most funding agencies today could provide a solution to these problems, prevent unintended consequences and complement Plan S. In support of Plan S, this small change has been called "Plan I" (for infrastructure). The routine proceedings that would need amending or expanding are the infrastructure requirements the agencies place on recipient institutions. Specific infrastructure requirements are often in place and enforced for, e.g., applications concerning particular (mostly expensive) equipment. General infrastructure requirements (e.g., data repositories, long-term archiving, etc.) are often in place for all grant applications, but are more rarely enforced. Finally, most funding agencies already consider only applications from accredited institutions, which have passed some basic level of infrastructure scrutiny. The required amendment or expansion merely extends the enforcement of these infrastructure requirements to all applications and would need to be specific with regard to the type of infrastructure required for all research outputs, i.e., narratives (often text), data and code/software. Thus, Plan I entails requiring institutions to provide grant recipients with the infrastructure they need to provide full and immediate Open Access to all of their research outputs (and hence to comply with the Plan S principles, not just the implementation guidance).

Here is only an abbreviated list of Plan I advantages:

  • (publisher) services become substitutable
  • permanently low costs due to actual competition
  • no author-facing charges
  • desired journal functionalities can be copied
  • if subscription funds are used for implementation, the demise of journals will accelerate journal-independent evaluations
  • cost-neutral solutions for data/code
  • no individual mandates required that might violate the sense of academic freedom
  • technically easy implementation of modern digital properties to all research objects
  • modern sorting, filtering and discovery tools replace the 17th-century editorial/journal system
  • implementation of social technology that serves the scholarly community
  • sustainable long-term archiving that becomes catastrophe-proof with distributed technology
  • permanent, legal, public access to all research objects, with licensing under the control of the scholarly community.

Posted on January 17, 2019 at 09:36 Comments Off on Providing recommendations for Plan S implementation

Maybe try another kind of mandate?

In: science politics • Tags: funders, infrastructure, mandates, open access, open science

Over the last ten years, scientific funding agencies across the globe have implemented policies that force their grant recipients to behave in a compliant way. For instance, the NIH OA policy mandates that research articles describing research the NIH funded must be made available via PubMed Central within 12 months of publication. Other funders, and also some institutions, have implemented various policies with similar mandates.

In principle, such mandates are great, not only because they demonstrate the intention of the mandating organization to put the interest of the public over the interests of authors and publishers; they can also be quite effective, to some extent, as the NIH mandate or the one from the University of Liège have shown.

At the same time, such individual mandates are suboptimal for a variety of reasons, e.g.:

  1. In general, mandates are evidence that the system is not working as intended. After all, mandates are intended to force people to behave in a way they otherwise would not. Mandates are thus no more than stop-gap measures for a badly designed system, instead of measures designed to eliminate the underlying systemic reasons for the undesired behavior.
  2. Funder mandates also seem to be designed to counteract an unintended consequence of competitive grant awards: competitive behavior. To be awarded research grants, what counts are publications, both many and in the right journals. So researchers will make sure no competitor gets any inside information too early and will try to close off as much of their research for as long as possible, including text, data and code. Mandates are designed to counteract this competitive behavior, which means that funders incentivize a behavior on the one hand and punish it with a mandate on the other. This is not what one would call clever design.
  3. Depending on the range of behaviors they are intended to control, mandates are also notoriously difficult and tedious to monitor and enforce. For instance, if the mandate concerns depositing a copy of a publication in a repository, manual checks would have to be performed for each grant recipient. This is the reason the NIH has introduced automatic deposition in PMC. If re-use licenses are mandated, they also need to be tested for compliance. If only certain types of journals qualify for compliance, the 30k journals need to be vetted, or at least those where grant recipients have published. Caps on article processing charges (APCs) are essentially impossible to enforce, as no funder has jurisdiction over what private companies can ask for their products, nor the possibility to legally monitor the bank accounts of grant recipients for payments above mandated spending caps. Here in Germany, our funder, the DFG, has had an APC cap in place for more than 10 years now, and grant recipients simply pay any amount exceeding the cap from other sources.
  4. In countries such as Germany, where academic freedom is written into the constitution, such individual mandates are considered an infringement on this basic right. There currently is a lawsuit in Germany, brought by several law professors against their university for mandating the deposit of a copy of all articles in the university's repository. In such countries, the mandate solution is highly likely to fail.
  5. Mandates, as the name implies, are a form of coercion to force people to behave in ways they would not otherwise behave. Besides the bureaucratic effort needed to monitor and enforce compliance, mandates are bound to be met with resistance by those coerced into performing additional work that takes time away from work seen as more pressing or important. There may thus be resistance both to the implementation and to the enforcement of mandates that appear too coercive, reducing the effectiveness of the mandates.

For about the same time as the individual mandates, if not for longer, funders have also provided guidelines for the kind of infrastructure the institutions should provide grant recipients with. In contrast to individual mandates, these guidelines have not been enforced at all. For instance, the DFG endorses the European Charter for Access to Research Infrastructures and suggests (in more than just one document) that institutions provide DFG grant recipients with research infrastructure that includes, e.g., data repositories for access and long-term archiving. To my knowledge, such repositories are far from standard at German institutions. In addition, the DFG is part of an ongoing, nation-wide initiative to strengthen digital infrastructures for text, data and code. As an example, within this initiative, we have created guidelines for how research institutions should support the creation and use of scientific code and software. However, to this day, there is no mechanism in place to certify compliance of the funded institutions with these documents.

In light of these aspects, would it not be wise to enforce these guidelines to the point where using the resulting research infrastructure would save researchers effort and make them compliant with the individual mandates at the same time? In other words, could the funders not save a lot of time and energy by requiring institutions to provide research infrastructure that enables their grant recipients to effortlessly become compliant with individual mandates? In fact, such institutional 'mandates' would make the desired behavior also the most time- and effort-saving behavior, perhaps rendering individual mandates redundant.

Instead of monitoring individual grant recipients or journals or articles, funders would only have to implement, e.g., a certification procedure. Only applications from certified institutions would qualify for research grants. Such strict requirements are rather commonplace as, e.g., in many countries only accredited institutions qualify. Moreover, on top of such general requirements, there can be very specific infrastructure requirements for certain projects, such as a core facility for certain high-throughput experiments. In this case, the specifications can even extend to certain research and technical staff and whether or not the core facility needs permanent staffing or temporary positions. Thus, it seems, such a certification procedure would be a rather small step for funders already set up to monitor institutions for their infrastructure capabilities.

If groups of funders, such as cOAlition S, coordinated their technical requirements as they have been coordinating their individual mandates, the resulting infrastructure requirements would include FAIR principles, which would lead to a decentralized, interoperable infrastructure under the governance of the scientific community. As this infrastructure is intended to replace current subscription publishing with a platform that integrates our text-based narratives with our data and code, it would be straightforward for the funders to suggest that an obvious source of funds for the required infrastructure would be subscriptions. As most scholarly articles are available without subscriptions anyway and implementing the infrastructure is much cheaper, on average, than subscriptions, the implementation should be possible without disruption and with considerable cost reductions for the institutions. If an institution considers its library to be the traditional place where the output of scholars is curated, made accessible and archived, then there would not even have to be a redirection of funds from library subscriptions to different infrastructure units – the money would stay within the libraries. But of course, institutions would in principle remain free to source the funds any way they see fit.

Libraries themselves would not only see a massive upgrade, as they would now be one of the most central infrastructure units within each institution; they would also rid themselves of the loathsome negotiations with the parasitic publishers, a task, librarians tell me, that no librarian loves. Through their media expertise and their experience with direct user contact, libraries would also be ideally placed to handle the implementation of the infrastructure and the training of users.

Faculty would then enjoy never having to worry about their data or their code again, as their institutions would now have an infrastructure that automatically takes care of these outputs. Inasmuch as institutions were to cancel subscriptions, there would also be no alternative, free or paid, to publishing via the infrastructure provided by the institutions, as the cash-strapped publishers would have to close down their journals. Moreover, the integration of authoring systems with scientific data and code makes drafting manuscripts much easier and turns publication/submission into a single click, such that any faculty member who values their time will use this system simply because it is superior to the antiquated way we publish today. Faculty as readers will also use this system, as it comes with modern, customizable sorting, filtering and discovery tools, vastly surpassing any filtering the ancient journals could ever accomplish.

Taken together, such a certification process would be only a small step for funders already inclined to push harder to make the research they fund accessible; it would save institutions a lot of money every year, be welcomed by libraries, and save time for faculty, who would not have to be forced to use this conveniently invisible infrastructure.

Open standards underlying the infrastructure ensure a lively market of service providers, as the standards make the services truly substitutable: if an institution is not satisfied with the service of company A, it can choose company B for the next contract, ensuring sufficient competition to keep prices down permanently. For this reason, objections to such a certification process can only come from one group of stakeholders: the legacy publishers who, faced with actual competition, will not be able to enjoy their huge profit margins any longer, while all other stakeholders enjoy their much improved situation all around.

 

Posted on November 28, 2018 at 11:00 4 Comments

Dopamine in optogenetic self-stimulation and CRISPR editing of FoxP

In: own data • Tags: dopamine, FoxP, optogenetics, poster, SfN

This year, we have two posters at the SfN meeting in sunny San Diego, CA. The first poster is on Sunday morning, Nov. 4, poster number 152.09, board QQ7, entitled "Neurobiological mechanisms of spontaneous behavior and operant feedback in Drosophila". For this poster, Christian Rohrsen used three different optogenetic self-stimulation experiments to find out which dopaminergic neurons mediate reward or punishment, respectively.

The second poster is on Monday afternoon, Nov. 5, poster number 407.23, board UU1, entitled "CRISPR/Cas9-based genome editing of the FoxP locus in Drosophila". For this poster, Ottavia Palazzo created several fly lines in which the FoxP gene locus was modified, for instance by inserting a GAL4 reporter in place of important parts of the gene, creating loss-of-function alleles. Ottavia has created a range of useful constitutive and conditional manipulations, and the first characterizations of the constitutive lines are presented on this poster. Postdoc Anders Eriksson and intern Klara Krmpotic performed some of the behavioral tests, and the monoclonal antibodies are being generated in the lab of Diana Pauly with the help of her graduate student Nicole Schäfer. Bachelor student Julia Dobbert helped with some of the molecular work, and postdoc Matthias Raß taught and supervised all of Ottavia's and Julia's molecular work.

Posted on November 2, 2018 at 16:39 Comments Off on Dopamine in optogenetic self-stimulation and CRISPR editing of FoxP

Automated Linked Open Data Publishing

In: own data • Tags: open data, open science, poster

On the occasion of the first "BigDataDay" at our university, I have summarized, on the poster below, our two main efforts to automate the publication of our tiny raw data.

On the left is our project automating the deposition of Buridan data at FigShare using the rOpenSci plugin, and the consequence of sending a publisher just the links to the data and the evaluation code, instead of pixel-based figures, when submitting a paper. Most of this work was done by postdoc Julien Colomb several years ago, when I was still in Berlin.

On the right is our ongoing project of automating flight-simulator data deposition with our library. We designed a YAML/XML metadata format loosely based on the Frictionless Data standard. Our evaluation script reads a YAML file that contains the experimental design (i.e., which raw-data file belongs to which experimental group) as well as formalized commands for the kinds of statistical tests and graphs to be generated. From this information, each experiment (i.e., each XML file) is evaluated, and a quality-control HTML document is written that reports numerous aspects of the raw data to ensure the quality of the data in each experiment. The same information from the YAML file is used to compute an evaluation HTML document with group-wise evaluations. All the raw data and evaluation files are linked with each other, and the XML files link not only to the repository with the evaluation script, but also to the repository with the software that collected the data and to the data model explaining the variables in the raw-data files. Ideally, by dragging and dropping figures with statistics into a manuscript, published scholarly articles would link to all of the files generated for the publication. A client-side Python script is called upon user login to compare the local project folder with the folder on the library's publication server for synchronization.
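To give a concrete flavor of the approach, here is a minimal, hypothetical sketch of such a YAML-driven evaluation in Python. File names, field names and the choice of statistical test are illustrative assumptions, not our actual format or script:

    # evaluate_experiment.py - minimal sketch of a YAML-driven evaluation
    # (file names, field names and the test are hypothetical illustrations)
    import xml.etree.ElementTree as ET
    import numpy as np
    import yaml                          # PyYAML
    from scipy import stats

    # design.yaml (illustrative structure):
    # groups:
    #   experimental: [fly01.xml, fly02.xml, fly03.xml]
    #   control:      [fly04.xml, fly05.xml, fly06.xml]
    # tests: [mannwhitneyu]
    # plots: [boxplot]
    with open("design.yaml") as f:
        design = yaml.safe_load(f)

    def performance_index(xml_path):
        """Parse one raw-data XML file and average a (hypothetical) <pi> element."""
        tree = ET.parse(xml_path)
        return np.mean([float(el.text) for el in tree.iter("pi")])

    # group the raw-data files according to the experimental design in the YAML file
    scores = {group: [performance_index(path) for path in paths]
              for group, paths in design["groups"].items()}

    # run the test requested in the YAML file and write a group-wise evaluation report
    if "mannwhitneyu" in design["tests"]:
        u, p = stats.mannwhitneyu(scores["experimental"], scores["control"])
        with open("evaluation.html", "w") as out:
            out.write(f"<h1>Group-wise evaluation</h1><p>Mann-Whitney U = {u:.1f}, p = {p:.3g}</p>")

The actual scripts additionally produce the quality-control report and the links back to the code and data repositories, but the principle is the same: the YAML file is the single source of truth for how an experiment is evaluated.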

Posted on October 31, 2018 at 14:18 Comments Off on Automated Linked Open Data Publishing

Does academic freedom entail exemption from spending rules?

In: science politics • Tags: academic freedom, infrastructure, publishing

The recent publication of the "Ten Principles of Plan S" has sparked numerous discussions, in which one of several recurring themes has been academic freedom. The cause of these discussions is the insistence of the funders supporting Plan S that their grant recipients publish only in certain venues, under certain liberal licensing schemes.

Germany is likely among the countries with the strongest protection of academic freedom, as article five of our constitution explicitly guarantees it. Historically, this has always included free choice of publishing venue. As modern internet technology keeps encroaching on academic traditions, there is now a lawsuit pending at the German constitutional court over whether open access mandates, which require scholars to deposit a copy of their published articles in an institutional repository, violate academic freedom. The lawsuit was started by several law professors with the support of the DHV (the main organization representing academic interests in Germany) and the publishers' organization "Börsenverein". It will be interesting to see whether the traditional and widespread mandates to publish in certain venues (under sanction of unemployment) will also be brought before the court.

I have asked the law professors behind the lawsuit for their opinion on the current infringements on academic freedom by employers, but did not receive a reply. An independent question to a practicing lawyer specializing in constitutional law was answered to the effect that the current employer requirements are, in principle, infringements on academic freedom equivalent to open access mandates, if they are considered infringements at all. However, the lawyer advised waiting until the current ruling has been published and noted that a suit could only be brought by affected parties. In Switzerland, a law professor recently confirmed the German lawyer's conclusion that funder mandates and employer requirements can be considered equivalent; in his case, the conclusion was that neither infringes on academic freedom.

Thus, even in Germany, a country with arguably one of the strongest constitutional protections of academic freedom, it is far from certain whether any or all requirements regarding publication venue constitute an infringement of these constitutional rights. Ongoing legal proceedings will help clarify this question. As a non-lawyer, I would tend to argue that if open access mandates are considered violations of academic freedom, then the requirements to publish in certain journals must fall as well. Conversely, if current practice is no infringement, neither are open access mandates.

For the main argument of this post, however, let's assume that both funder and employer mandates were considered an infringement of academic freedom, i.e., that the German constitutional court banned each and every policy that pushes academics into publishing in certain places, whether funder or employer requirements. Would such a strong interpretation of academic freedom automatically entail that the tax-payer has to fund every possible publication venue an academic might choose?

Let's imagine some amusing hypothetical scenarios. The scenarios are not meant to be realistic, but to exemplify the difficulty of reconciling individual freedoms with responsible spending of public funds:

  1. A group of ecologists makes an exciting discovery about local wildlife and they decide to exercise their academic freedom and publish their discovery by buying billboard space across the region, to alert the general public of the precious finding.
  2. A group of biomedical researchers finds a novel drug target for a deadly disease and they decide to exercise their academic freedom and publish their discovery by publishing it in double page advertisements in major newspapers to make sure every drug maker in the world receives enough information to develop the cure.
  3. A group of social psychologists discovers that cluttered environments promote racist language and theft. They decide to exercise their academic freedom and publish their discovery in a prestigious subscription journal. The collective price of the subscriptions to this journal averages to about 100 times the technical publishing costs of the article. On top of the exorbitant price tag, the journal paywall and policies limit both access to the research and the data upon which the publication rests.
  4. A group of geneticists discovers a new mechanism of inheritance and they decide to publish their results in a prestigious journal. The journal recently flipped from the subscription model to author-pays, ‘gold’ open access. Because their chosen journal is highly selective, which is the basis of its prestige, the author-side fee is 200 times that of the technical costs of publishing the article.

Obviously, all four of these scenarios are 'batshit crazy', and nobody in their right mind would even try to defend any of them against tax-payer (or general accounting office) scrutiny or try to align them with university spending rules. And yet, scenario number three is current reality, and number four may soon become reality if Plan S and other funder policies that support 'gold' open access become standard practice (see here for more reasons why this aspect could lead to severe unintended consequences).

Arguing from a strong interpretation of academic freedom, would one therefore, to be consistent, have to require that either all scenarios be funded by the tax-payer, or none? If all should be funded, where should the funds come from to pay for even the most outrageously extravagant venues academics might choose? If none ought to be funded, what are the rules by which these decisions are to be made?

Squaring constitutional rights with public spending is not an easy task. Since I am no legal scholar by any stretch of the imagination, I would tend to argue from the common notion that my freedom to swing my arms ends at your nose (a saying I learned from Timothy Gowers). Publicly funded institutions commonly have to obey strict spending rules. This has a long tradition, as this document from 1942 shows:

The awarding of contracts by municipal and other public corporations is of vital importance to all of us, as citizens and taxpayers. Careless and inefficient standards and procedures for awarding these important community commitments have increased unnecessarily the tax burdens of the public. To secure a standard by which the awarding of public contracts can be made efficiently and economically, and with fairness to both the community and the bidders, the constitutions of some states, and the statutes regulating municipal and public corporations provide for the award of public contracts to the lowest responsible bidder.

Who would argue that academic freedom should exempt academics from such spending rules? On the contrary, shouldn’t these spending rules require public institutions to find the most cost-effective way to fulfill their mission, regardless of what venue academics might prefer to publish in? This latter consequence would entail that buying subscription access to publicly funded scholarly works does not qualify as a cost worth spending public money on. How can institutions escape such violation of their spending rules while simultaneously allowing their faculty to exercise their academic freedom?

Here, I suggest that the current, rational, modern resolution of the conflict between academic freedom and spending rules is to provide academics with a cost-effective publishing infrastructure and to give them the freedom to decide whether they want to use it or not. The infrastructure would be maintained by the institutions and serviced either by them or by bidding contractors, like any other infrastructure. Scholars would have the choice either to use this infrastructure at no cost to them or to find funds to choose any other venue. Given the current abysmal state of publishing functionality, together with the extinction of existing journals without subscription funding, a quite rapid shift in publishing culture would be expected.

As current subscription spending is roughly ten times what is needed to keep publishing going at current levels, one would not expect much of a disruption from obeying spending rules in academic publishing as well. On the contrary, if current spending levels were sustained, 90% of the funds would be freed to improve the current publishing user experience and to implement a modern infrastructure that services not only our text, but also our data and code.

Posted on September 19, 2018 at 14:08 3 Comments
