bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


High APCs are a feature, not a bug

In: science politics • Tags: APCs, nature, open access, publishers

There has been some outrage at the announcement that Nature is following through on their 2004 declaration and will be charging ~10k ($/€) in article processing charges (APCs). However, not only have these charges been 16 years in the making, but the original declaration was made not on some obscure blog, but at a UK parliamentary inquiry. Nobody can rightfully claim that this development was not visible from miles away.

In fact, already more than 10 years ago, such high APCs were very much welcomed, as people thought they could bring about change in scholarly publishing. Some examples, starting with Peter Suber in 2009:

As soon as we shift costs from the reader side to the author side, then, we create market pressure to keep them low enough to attract rather than deter authors.  […] precisely because high prices in an OA world would exclude authors, and not merely readers, there is a natural, market-based check on excessive prices.  BTW, I’m not saying that these market forces will keep prices within reach of all authors

Cameron Neylon, 2010:

I have heard figures of around £25,000 given as the level of author charge that would be required to sustain Cell, Nature, or Science as Open Access APC supported journals. This is usually followed by a statement to the effect “so they can’t possibly go OA because authors would never pay that much”.
[…]
If authors were forced to make a choice between the cost of publishing in these top journals versus putting that money back into their research they would choose the latter. If the customer actually had to make the choice to pay the true costs of publishing in these journals, they wouldn’t.
[…]
Subscription charges as a business model have allowed an appallingly wasteful situation to continue unchecked because authors can pretend that there is no difference in cost to where they publish, they accept that premium offerings are value for money because they don’t have to pay for them. Make them make the choice between publishing in a “top” journal vs a “quality” journal and getting another few months of postdoc time and the equation changes radically.
[…]
We need a market where the true costs are a factor in the choices of where, or indeed whether, to formally publish scholarly work.

Mike Taylor, 2012:

the bottom line is that paying at publication time is a sensible approach. It gives us what we want (freedom to use research), and provides publishers with a realistic revenue stream that, unlike subscriptions, is subject to market forces.

Stephen Pinfield, 2013:

But Gold OA is not like [subscription]. It has the potential to reintroduce genuine competition into the journal market with authors sensitive to price making choices about where they place their articles. If journals put APCs up, authors can go elsewhere and the adjustments can happen quickly.
[…]
Gold OA, on the other hand should make price changes clearer –and customers will be able to respond accordingly.

Danny Kingsley, 2014:

However the increase in the payment of APCs has awoken many researchers’ awareness of the costs of publication. An author’s reaction of surprise to the request for a US$3000 [approx. AU$3180] APC when they are contacted by a publisher provides an opportunity for the library to discuss the costs associated with publication. There is an argument that as payment for publication at an article level becomes more prevalent, it gives the researcher an opportunity to determine value for money and in some arguments this means that scholarly publishing would be a more functional market

So convinced were people that authors would be price-sensitive that someone else paying the APCs on the authors’ behalf was even described as a problem. Kingsley:

This is then one of the disadvantages of having centralised management of APCs – it once again quarantines the researcher from the cost of publication

Pinfield:

But there is a danger with many of the processes now being established by universities to pay for APCs on behalf of authors. These systems, which will allow payment to be made centrally often with block pre-payments to publishers, will certainly save the time of authors and therefore ought to be pursued, but they do run the risk of once again separating researchers from the realities of price in a way that could recreate some of the systemic failures of the subscription market. They need to be handled with caution.

Hindsight is always 2020. Little did people know back then that authors, when faced with a choice, would actually tend to pay the most expensive APCs they could afford, because such is the nature (pardon) of a prestige market. It is thus not surprising to us now that rich institutions hail the 10k price tag for Nature APCs as “very attractive” and “not a problem”.

The above were just the few examples I could readily find with the help of Richard Poynder, Danny Kingsley, Cameron Neylon and Mike Taylor. The sentiments expressed there were pretty much mainstream and discussed widely even before 2009 (and you can sense that in the referenced sources). Thus, the idea that high APCs are a feature, not a bug, one that would bring competition to the market, was a driving force behind gold OA for a long time. Even today, you can still find people claiming that “If we switch from subscription publishing to pay-to-publish open access, [publisher profit] margins are likely to drop to 10%-20%.” (Lenny Teytelman as late as 2019). We now see that this view was spectacularly wrong. People will pay no matter what when their livelihoods are at stake, even if it costs them their last shirt (as the German saying goes). We ought to think hard about what the consequences of having been so catastrophically wrong now have to be.

This is how Nature‘s high APCs came about. Many thought it was a good thing and kept pushing for them until Nature gave in.

Update: One of the quoted authors, Mike Taylor, has just confirmed that he still thinks 10k APCs are a good thing and that the outrage people feel at their livelihoods being put in jeopardy is what will drive change.

Posted on December 9, 2020 at 10:52 5 Comments

Are Nature’s APCs ‘outrageous’ or ‘very attractive’?

In: science politics • Tags: APCs, nature, open access, publishers

Last week, there was a lot of outrage at the announcement of Nature’s new pricing options for their open access articles. People took to Twitter to voice their, ahem, concern.

There are many more who all express their outrage at the gall of Nature to charge their authors these sums. Even Forbes interviewed some of them. At the same time, the people who have been paying publishers these sums for decades find that the ~10k per paper is actually “very attractive” and “not a problem”.

So which is it? Are Nature’s APCs ‘outrageous’ or are the prices ‘very attractive and not a problem’?

What is clear is that these charges are definitely not a surprise. Already back in 2004, in a parliamentary inquiry in the UK, Nature provided testimony that they would have to charge 10-30k for a Nature paper, given their revenues at the time (i.e., their subscription and advertising income). While back then most people scoffed at the numbers and expected that no author would ever pay such fees, Nature got to work and invented a whole host of ‘lesser’ journals (i.e., Nature XYZ, as in “Nature Physics”, “Nature Genetics” and so on), which would serve several purposes at once: they increase revenue; as hand-me-down journals they keep desperate authors attached to the Nature brand; and, as less selective journals, they would bring down the average cost per article across the brand once it needed to go open access.

So this year, after open access advocates, funders and the now pandemic-stricken public had kept demanding open access for the 16 years since that warning, Nature was finally ready to deliver. Due to the dilution of their costs by way of the ‘lesser’ journals, they managed to keep their APCs close to the lower bound of their 2004 estimate, despite 16 years of inflation. Given that libraries have been paying these kinds of sums for Nature journals for decades, this price tag really is a bargain, all things considered.

Given this analysis, all the online outrage strikes me as unwarranted. While I of course agree that it should never have come this far (we ought to have realized where this was headed as early as 2004!), crying foul now comes about 16 years too late. We have had plenty of time to prepare for this, more than enough time to change course or think of alternatives to the legacy publishers. And yet, nearly everybody kept pushing in the same direction anyway, when we could have known in 2004 that this was not going to end well. The people warning of such not-quite-so-unintended consequences were few and far between.

Having only gotten interested in these topics around 2007 or so myself, it took me until 2012 to understand that this kind of APC-OA was not sustainable and, indeed, would stand to make everything worse just in order to worship the sacred open access cow.

If you were even later to the party and are outraged now, direct your outrage not to Nature, who are only following external pressures, but to those who exert said pressures, such as open access advocates pushing for APC-OA and funders mandating authors to publish in such journals.

Posted on November 30, 2020 at 10:30 5 Comments

Is the SNSI the new PRISM?

In: science politics • Tags: PRISM, publishers, sci-hub, SNSI

Just before Christmas 2019, the Washington Post reported, based on “people familiar with the matter”, that the US Justice Department was investigating Sci-Hub founder Alexandra Elbakyan for potentially “working with Russian intelligence to steal U.S. military secrets from defense contractors”. Besides drawing such a highly unusual connection, the article also reiterated unsubstantiated allegations (circulated mainly by publishers) that access to scholarly journals was obtained via her ‘hacking’ skills. The article also cited a “former senior U.S. intelligence official” as saying he believed Elbakyan was working with the Russian military intelligence service GRU. Apparently, the investigation had been ongoing since 2014, but now, in 2020, there is still no publicly available evidence as to what it has found.

And yet, despite no evidence, on the very next day after the Washington Post story, Elsevier was all too happy to find their oft-repeated but little-believed claims of Sci-Hub being dangerous vindicated and exclaimed “This represents a threat to academic institutions”. Elsevier, after winning a lawsuit that never yielded any of the millions in damages it sought, finally had external support to bolster their claims that Sci-Hub was a threat not only to their bottom line, but to research integrity!

Less than two months later, Nick Fowler, chief academic officer at Elsevier, announced the new Scholarly Networks Security Initiative (SNSI), under the title “Working together to protect from cyber attacks”. Fowler was assisted by SpringerNature chief publishing officer Steven Inchcoombe. Both introduced themselves as co-chairs of SNSI. One aspect mentioned in the article was that “Awareness of the damage Sci-Hub is inflicting on institutions and academia needs to be increased.” The idea being that publishers and institutional libraries work together to fight a common enemy.

This public relations aspect of the SNSI is what needs to receive special attention. On the face of it, Sci-Hub is an enabling technology: before Sci-Hub, scholars needed subscriptions to access the scholarly literature; now, subscriptions have become optional. In many countries, this has led to new initiatives and consortia finally toughening their stance in library-publisher negotiations. What in the previous three decades was a walk in the park followed by ever-climbing profit margins now stands to be a tough negotiation. Sci-Hub has thus had opposite effects on libraries and publishers: while libraries need not fear lapses of access as much as previously, allowing them to be bolder in their negotiations, publishers wonder why anybody should pay for their offerings at all if their customers can have all the scholarly content of the world for free.

When it turned out that the lawsuits against Elbakyan would neither lead to any damages being paid nor have a deterring effect on libraries or their patrons, and when initiatives asking for a tougher stance against publishers garnered more and more support, publishers devised a new strategy: they would try to paint Sci-Hub as a threat not only to themselves but also to the libraries. Rumors began to spread the unsubstantiated claim that Sci-Hub had obtained the login credentials populating its database not through donations, but through phishing attacks. As there was no evidence, there was little uptake or discussion. Consider the timeline: Sci-Hub has been around since 2011, noticeable consequences for library behavior started around 2012/13, and by 2017/18 the phishing rumors had failed to gain traction. One may assume that publishers were by then fairly frustrated that their usual power over academics seemed on the decline for the first time in decades.

Perhaps the feeling of frustration was similar around 2005, when the Open Access movement, invigorated by the Budapest (2001) and Berlin (2003) declarations, continued to gather steam. Then, too, the publishers’ attempts at painting Open Access to scholarly works as a threat to research integrity failed to rouse support and slow the momentum of the OA movement. Just before the launch of the NIH OA mandate in the US, the Association of American Publishers (AAP) decided they needed something that would really get their message across and in 2006 started the Partnership for Research Integrity in Science and Medicine (PRISM) Coalition*. They hired “the pit bull of public relations”, Eric Dezenhall, to create a smear campaign that would strive to equate public access with junk science. In particular with regard to OA mandates, part of the plan was for publishers to partner with anti-science organizations, which shared their anti-government sentiment. The aim was to bring institutions on board to save research integrity together with the publishers.

Thus, in both instances, public access to scholarly works (whether via OA or Sci-Hub) posed only a threat to publishers and in both instances, the publishers sought to paint themselves as chiefly concerned not about their bottom line but about “research integrity”. Compare the statement on the PRISM website:

The Partnership for Research Integrity in Science and Medicine (PRISM) was formed to advocate for policies that ensure the quality, integrity, and economic viability of peer-reviewed journals.

with statements on the SNSI site:

Scholarly Networks Security Initiative (SNSI) brings together publishers and institutions to solve cyber-challenges threatening the integrity of the scientific record, scholarly systems and the safety of personal data.

This past week, these public relations efforts were dialed up a notch or ten. At an SNSI webinar entitled “Cybersecurity Landscape – Protecting the Scholarly Infrastructure”, hosted by two Elsevier employees, one of the presenters suggested to “develop or subsidize a low cost proxy or a plug-in to existing proxies” in order to collect user data. That user data, it was explained, could be analyzed with an “Analysis Engine” to track biometric data (e.g., typing speed) or suspicious behavior (e.g., a pharmacology student being suspiciously interested in astrophysics). The Sci-Hub angle was confirmed by the next speaker, an ex-FBI agent and security analyst.
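To make concrete what is at stake, here is a minimal, entirely hypothetical Python sketch of the kind of keystroke-timing biometrics such an “Analysis Engine” could employ. The webinar only mentioned typing speed; every function name, baseline value and threshold below is invented for illustration:

```python
# Hypothetical sketch: keystroke-timing as a behavioral fingerprint.
# A per-user baseline of inter-keystroke intervals is stored, and a
# session is flagged if its typing rhythm deviates too far from it.
from statistics import mean, stdev

def timing_profile(key_times):
    """Keystroke timestamps (seconds) -> (mean, stdev) of the intervals."""
    intervals = [b - a for a, b in zip(key_times, key_times[1:])]
    return mean(intervals), stdev(intervals)

def looks_like_same_user(baseline, session, tolerance=3.0):
    """Accept a session whose mean typing interval lies within
    `tolerance` baseline standard deviations; flag it otherwise."""
    base_mean, base_sd = baseline
    sess_mean, _ = timing_profile(session)
    return abs(sess_mean - base_mean) <= tolerance * base_sd

# A user's stored baseline: ~0.20 s between keystrokes, sd 0.02 s
baseline = (0.20, 0.02)
normal = [0.0, 0.21, 0.40, 0.62, 0.81]    # similar rhythm -> accepted
scripted = [0.0, 0.05, 0.10, 0.15, 0.20]  # much faster -> flagged
print(looks_like_same_user(baseline, normal))    # True
print(looks_like_same_user(baseline, scripted))  # False
```

Even this toy version makes the privacy problem obvious: building a per-user typing baseline means retaining identifying behavioral data about every patron who uses the proxy.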

Considering the track record of academic publishers, this reeks strongly of a PR attempt to ‘soften the target’, i.e., to make installing publisher spyware on university servers sound less outrageous than it actually is. After the PRISM debacle, the publishers now seem to have learned from their PR mistakes. This time, there is no ‘pit bull’ around. This time, there is only a strange article in a major newspaper and a shady initiative where it is hard to find out who founded it, who runs it and who funds it.

SNSI is an apparent PR project aimed at compromising, not strengthening, network security at research institutions. However, unlike with PRISM, this time the PR effort may pay off.


* It has to be noted that one of the AAP publishers in the PRISM Coalition was Elsevier, who had so much disdain for research integrity that they had published nine fake journals from 2000 until 2005. In other words, one year they stopped publishing their fake journals, and the next they joined a PR campaign with research integrity as its central tenet.

Posted on October 26, 2020 at 16:58 10 Comments

Come and do research with us!

In: news • Tags: position, postdoc

Trial and error is a successful problem-solving strategy not only in humans but throughout evolution. How do nervous systems generate novel, creative trials and how are errors incorporated into already existing experiences in order to improve future trials? We use a variety of transgenic tools, mathematical analyses, connectomics and behavioral physiology to understand the neurobiology of spontaneous behavior, learning and adaptive behavioral choice.

We offer a fully-funded postdoctoral position to participate in this research field. The successful candidate not only gets to choose their project, between mathematical analyses of spontaneous behavior in transgenic animals and connectomics of interacting learning systems, but also gets to practice open science in an international research team in a brand-new building with state-of-the-art infrastructure. To top it off, the duration of the position (subject to German time limits) allows the successful applicant to learn how to write their own grant applications, establish their own research group and, if desired, obtain a German ‘habilitation’ degree.

Research questions

Our research has discovered evolutionarily conserved mechanisms for the generation of spontaneous behavior and feedback-based learning mechanisms. Besides enabling trial and error, spontaneous behavior provides organisms with adaptive unpredictability, crucial in competitive situations such as evolution. The first possible project builds on our work analyzing the spontaneous turning behavior of tethered Drosophila and will identify the neurons and the neurophysiological mechanisms mediating behavioral choice.

The second project will identify the connectivity of the mushroom-body output neurons mediating the hierarchical interaction between fact- and skill-learning that regulates habit formation.

Requirements

The successful candidate will have a PhD (or be close to completion) in a relevant field of neuroscience, biology, psychology or physics, coding experience in R, Python, Julia, or equivalent, good written/oral communications skills in English, and also, ideally, practical experience working with Drosophila.

Our lab

Our lab prioritizes inclusion and diversity to achieve excellence in research and to foster an intellectual climate that is welcoming and nurturing. We are based at the University of Regensburg, an equal opportunity employer with over 20,000 students and more than 1500 faculty, in Regensburg, Bavaria, Germany.

Please send your application with your CV and a short reference to one of our publications to my institutional address (bjoern.brembs@ur.de). Applications will be considered until the position is filled, but applications before November 1, 2020 will receive preferential treatment.

P.S.: I was told by a reader that I must add the following statement:

You should mention that Regensburg is an incredibly nice city with high quality of life! Affordable, safe, cultural, civil, great food, and close to other great cities like Prague and Munich.

Posted on October 13, 2020 at 15:46 Comments Off on Come and do research with us!

How academic institutions neglect their duty

In: science politics • Tags: infrastructure, mandates, policies

Think, check, submit: who hasn’t heard of this mantra to help researchers navigate the jungle of commercial publishers? Who isn’t under obligation to publish in certain venues, be it because employers ask for a particular set of journals for hiring, tenure or promotion, or because of funders’ open access mandates? Researchers today are stuck between a rock of confusing publishing options and a hard place of often mutually exclusive policy requirements and ethical considerations, when all they want is to publish a scholarly article. Seasoned researchers may have developed heuristics from experience to cope with the complexity, but early-career researchers need guidance and support.

In addition to the constraints on publishing the text summaries of their work, researchers also face an increasingly complex ecosystem of domain-specific and domain-general databases to make their research data FAIR – findable, accessible, interoperable and re-usable. In part, this is connected to the situation in article publishing, as some journals require data deposition upon publication. For a number of years now, funders have also begun to ask for data management plans upon submission of grant proposals, and their good-scientific-practice guidelines require researchers to archive their data and make it accessible for at least ten years. Many of these guidelines need to be ratified by academic institutions if those institutions want to remain eligible for funds from the funding agencies. Researchers now face similarly complex constraints on their research data needs as on article publishing. Consequently, as with article publishing, courses, webinars and workshops are springing up to educate researchers on the many different options and constraints they face in data management and sharing.

In many cases, data sharing is futile without providing at least some code or software to make the data accessible. In other cases, the code is the scholarship that is being published in the scholarly article. In all of these cases, code sharing is as mandatory as data or text publication. In addition to such practical necessities, there are also mandates and policies requiring the archiving of all research works, including code. As with articles and data, researchers now also face the question of where to publish their code: in one of the commercial tools such as Bitbucket or GitHub, in one of the many GitLab instances now mushrooming everywhere, or in some other venue?

Imagine if there were a similar balkanization of providers and mutually exclusive policies for other services such as, say, email. Researchers would have to identify an email provider that is either compliant with all institutions and funders (unlikely), or use different providers and addresses for different institutions and funders. Imagine professional email correspondence balkanized like the current messenger market, with WhatsApp, GroupMe, Signal, Slack, Mattermost, RocketChat, etc. all isolated and non-interoperable. Imagine institutions where researchers had to dodge similar slings and arrows just to supply their laboratories with electricity, water or gas. Imagine institutions leaving their researchers alone to fend for their own HVAC, furniture, sewage or recyclables. How much research would the researchers at such institutions still be able to do?

There is a reason academic institutions provide a basic infrastructure of housing, electricity, HVAC, water, etc. for their researchers. Academic institutions have a mission of research and teaching that is best accomplished by allowing their members to focus on the mission, rather than its corollaries. Today, what could be a more basic digital infrastructure than one that takes care of the primary research products – text, data and code? Clearly, such an infrastructure must be considered more basic and mission-driven than email. With this understanding it becomes obvious that the current wild-west situation for our primary research products constitutes a clear dereliction of duty by our academic institutions. Institutions need to provide their researchers with a technologically adequate, affordable infrastructure that a) automates the tasks around text, data and code sharing, b) ensures compliance with the various policies and c) protects the privacy of researchers and their human research subjects/patients. The implementation of such an infrastructure has been overdue for nearly 30 years now.

As the technology for such an infrastructure is available off the shelf, and institutions are spending several times what would be required on legacy publishers, only social obstacles remain to explain why academic institutions keep neglecting their researchers. Given that institutions have now failed for about 30 years to overcome these obstacles, it is straightforward to propose that mandates and policies be put in place to force institutions (and not researchers!) to change their ways and implement such a basic infrastructure.

Posted on October 5, 2020 at 09:40 Comments Off on How academic institutions neglect their duty

Why do academic institutions seem stuck in 1995?

In: science politics • Tags: competition, infrastructure, neoliberalism

Until the late 1980s or early 1990s, academic institutions such as universities and research institutes were at the forefront of developing and implementing digital technology. After email they developed Gopher, TCP/IP, http, the NCSA Mosaic browser and experimented with Mbone.

Since then, at most academic institutions, infrastructure has moved past the support of email and browsers only at a glacial pace. Compared to the years and decades before the early 1990s, the last 30 years appear to be frozen in time, with virtually no modernization of our infrastructure beyond bandwidth.

Functionalities we take for granted outside of academia, such as automated data sharing, collaborative code development and authoring, social media, etc., are supported by virtually no academic institution on a broad, international scale analogous to email or browsing.

As the technology is commercially available and more than enough money is still flowing into obsolete infrastructure such as journal subscriptions, the conclusion that it must be a social obstacle that prevents infrastructure modernization becomes inescapable.

I was asked on Twitter today what this social obstacle might be and how it could be overcome.

Here is a short summary of my answers to these two questions:

The first is a difficult question and I can only offer hypotheses. A recent comment by Antonio Loprieno, made when we were on a panel of the German Wissenschaftsrat together, seemed to confirm part of one hypothesis: citing a recent example from a university in Germany, he said that today institutions seem more willing to invest in looking like they are performing better than in actually performing better – show over substance. His example was that of a university hiring two FTEs to massage the data sent to ranking organizations, rather than changing the way the university operates.

From this perspective: until the early 1990s, around which time universities were told to compete, universities cooperated and had the interests of their faculty and students in mind. This entailed that if there was a technology that stood to improve how the university could fulfill its mission, and money was available to implement it, it would be implemented. The internet was such a technology, and according to a book entitled “How not to network a nation“, institutions had to cooperate to make this technology a reality. Apparently, academic institutions saw the potential, found the money and cooperated, or the efforts would have failed just as those in the Soviet Union at the time failed.

https://mitpress.mit.edu/books/how-not-network-nation

Analogous to Soviet Russia, whose competing institutions failed to develop a network of their own, we have, roughly since the fall of the iron curtain (ironically enough), been mimicking the failed Soviet approach, with competing institutions focusing more on rankings than on actually improving the way they pursue their mission.

As for the second question, of how to overcome this social obstacle, this is easier to answer and there are a number of options to choose from.

  1. Obviously, one could revert some of the fatal decisions made in the early 1990s (like getting rid of New Public Management and other similarly embarrassing inventions).
  2. However, #1 seems like the most difficult of the options. So, almost a decade ago now, I thought one only needed to convince libraries that they are ideally suited to implement the new infrastructure, as they have the expertise and, of course, control the money. After nearly a decade of interacting with both skeptical and enthusiastic libraries, I can now see why even the enthusiastic ones are hesitant: they have similarly good reasons not to act as my researcher colleagues do.
  3. So when I saw how the deans of several departments here at my institution were scrambling to find positions and money for the DFG’s infrastructure requirements, I had the idea for Plan I (for infrastructure): the infrastructure mandates already in place for some aspects of funding need to be expanded to all aspects. Institutions dependent on research overhead will do anything to meet the demands of the funders, as I have experienced first-hand. I have talked to funders like the DFG, NSF, NIH and the ERC (Bourguignon), and none of them saw any obstacles, but I have yet to see adoption or even widespread discussion.

So, given that last experience and those of the last 13 years or so in which I have been involved in these topics, I expect funders to soon come up with a reason similar to those of my researcher colleagues and the libraries: that they do, actually, support infrastructure modernization but, unfortunately, can’t do anything about it other than to keep hammering down on the least powerful (e.g., with individual mandates), letting the main obstacles to modernization remain in place.

Posted on September 25, 2020 at 11:59 5 Comments
Sep03

Who’s responsible for the lack of action?

In: science politics • Tags: change, infrastructure, responsibility

There are regular discussions among academics as to who should be the prime mover in infrastructure reform. Some point to the publishers to finally change their business model. Others claim that researchers need to vote with their feet and change how they publish. Still others find that libraries should just stop subscribing to journals and use the saved money for a modern publishing system. Finally, and most recently, people have been urging funding agencies to use their power to attach strings to their grant funds and force change where none has occurred.

I was recently interviewed by the Wissenschaftsrat, a government-appointed science policy forum, and one of their questions was also:

How can the lock-in-effect of prestigious titles be avoided or mitigated and who do you see as responsible for initiating such changes?

I replied:

We, the scientific community and all institutions supporting them, are all responsible for change.

The more relevant question is: who is in the strategically best position to break the lock-in-effect and initiate change?

  • Researchers decide if they evaluate colleagues on glamour proxies that deteriorate the reliability of science by valuing “novelty” above all else, or if they stand up and demand an infrastructure from their institutions that supports reliability, saves time and provides for an optimized workflow in which they can focus on science again, instead of being constantly side-tracked by the technical minutiae of reviews, meetings, submissions, etc.
  • Libraries decide how to spend their ~10b€ annually: on subscriptions/APCs in opaque and unaccountable negotiations, exempt from spending rules, or on a modern infrastructure without antiquated journals and with a thriving, innovative market that allows them to choose among the lowest responsible bidders?
  • Funders decide whether to support scientists at institutions that fund monopolists and reward unreliable science, or those that work at institutions which spend their infrastructure and research funds in a fiscally responsible way to provide an infrastructure that preserves not only text, but data and code as well, ensuring the reliability and veracity of the results.

Right now, it seems only a few realize their responsibility and even fewer are considering their strategic position for change. Until now, many seem to think researchers need to change, but researchers can reasonably claim that they cannot risk their own or their co-workers' careers. For many years, some of us have tried to convince libraries to spend their funds more responsibly, but they can reasonably claim that no library can make such a change alone, nor can they divert their funds without faculty support. I have yet to hear a similarly convincing argument why funders need to coerce individual researchers rather than their institutions, but I am sure they, too, will soon have an analogously reasonable claim as to why they also need to make things worse while intending to make things better.

The remark about funders, of course, refers to current initiatives (such as Plan S and others) to force researchers to publish only in certain, compliant venues. This is, rather obviously, a suboptimal approach.

Of course, the road to hell is paved with good intentions, and all players here are demonstrably well-intended. There is only one group of participants which is not well-intended, and it doesn't need to be: academic publishers.

Publishers have absolutely no obligation or responsibility for change: their sole purpose, even their fiduciary duty in cases where the publishers are publicly traded, is to maximize profits in any legal way they see fit. Following market rules and capitalist logic, publishers today excel at avoiding competition, reducing their costs and increasing their revenue, year in and year out, whether there is a global financial crisis or not. Paragons of capitalist work ethic, the most profitable of them have sported margins of 35–40% for at least the last decade or two. It is clearly not their fault if we academics create a perfect scenario for capitalist success at the expense of the public. Publishers merely milk the academic cash-cow for all it is worth, and we have proven to be such sheepish hosts that the parasites do not even have to hide their disdain for us suckers any more.

Posted on September 3, 2020 at 15:27 1 Comment
Jul16

Tagging and knocking out FoxP with CRISPR/Cas9

In: own data, science • Tags: Buridan, Drosophila, FoxP, operant

The FoxP gene family comprises a set of transcription factors that gained fame because of their involvement in the acquisition of speech and language. While early hypotheses circulated about its function as a 'learning gene', a simultaneous "motor-hypothesis" posited that the gene may be more of a motor learning gene, involved in different kinds of motor learning, one of which is speech acquisition. Work in animals as diverse as mice and fruit flies over the last 20 years has firmly established at least some of the FoxP genes as crucial for motor learning tasks that do not involve language.

Now, our graduate student Ottavia Palazzo (with invaluable support and training from Mathias Rass from the neighboring Schneuwly lab) has generated and thoroughly characterized a set of new transgenic fly lines to help us better understand the role of FoxP in the form of motor learning we are studying, operant self-learning.

Using CRISPR/Cas9 with homology-directed repair, she tagged the FoxP gene in two different ways. In one line, she tagged the gene such that we can express a fluorescent protein in all neurons expressing any of the three different isoforms this gene can give rise to. In the other line, she tagged only the one isoform that we think is associated with operant self-learning.

This one isoform is expressed throughout the adult brain of the fly, but not in the mushroom bodies, where a few previous reports had detected it, using a technique which can sometimes lead to incorrect expression patterns. In fact, because three previous studies reported three different expression patterns for the same gene, we chose this particular tagging strategy to avoid the problems associated with this technique. Ottavia found about 1200 neurons expressing the isoform we were interested in:

Drosophila brain with FoxP expression pattern
FoxP isoform B (green) and neuropil counter-staining (red)

Contrasting the expression of this isoform with the expression of the other two isoforms, revealed an additional ~600 neurons which express one or both of them (marked in blue):

Here, too, there is not a hint of any expression in the mushroom bodies. Given the previous reports, we looked particularly closely, not only in adults but also in larvae, and could not find any expression. We also used an antibody (which we verified against mutants and our tagged lines to be highly specific) and found no expression in mushroom bodies. Our results with two different genomically tagged lines and the antibody corroborate earlier work with a differently tagged line and a reporter gene approach, which also failed to detect expression in the mushroom bodies. Given the multitude of different approaches all converging on identical expression patterns, it now seems clear that mushroom body Kenyon cells do not express FoxP above the detection threshold of these techniques. If the levels of expression that we do detect are necessary for the physiological function of FoxP, then any expression below these thresholds is likely physiologically irrelevant.

To see if there are any general problems with these fly lines (which may be problematic for the subsequent learning experiments), Ottavia tested the flies in Buridan’s paradigm. In case you haven’t heard about this experiment, here’s a short video I made 10 years ago:

Not unexpectedly, she found that the insertions she had made disrupted FoxP expression and had substantial effects on walking and landmark fixation:

Not only do the flies with homozygous insertions walk more slowly, they also fixate the stripes less and walk less straight (meander).

She also tested one of the original FoxP mutations, the widely used 3955 allele, and found similar defects:

Curiously, one of the studies that had (erroneously?) detected FoxP in the mushroom bodies failed to detect this rather conspicuous (~20% or more) walking defect in these mutant flies, despite testing for it. At the time, I had already noticed that their control experiments lacked the sophistication to capture the motor defects I thought were most critical for their experiments, but apparently they were not even sensitive enough to detect such major defects. In summary, in this (Science!) paper, the authors detected FoxP where there apparently isn't any, but failed to detect a severe motor phenotype, despite looking for it.

Ottavia also created a CRISPR line to knock FoxP out when and where she wanted. In one of her experiments, she knocked the gene out in early pupae or adult flies and found that this left walking behavior and landmark fixation of the flies unaffected. In other words, for these behaviors, FoxP is only important during development.

Similarly, knocking FoxP out of dorsal cluster neurons (important for stripe fixation and expressing FoxP) or mushroom body Kenyon cells (important for walking and stripe fixation, but not expressing FoxP) also had no effect:

On the other hand, when she excised the FoxP gene from motorneurons (e.g., with the D42 driver) or from neurons in the protocerebral bridge (with the cmpy driver), she saw almost the same effects as if she had deleted the gene constitutively:

The next step in this line of work will be to see which of these manipulated flies remain fit enough for the torque learning experiments, and to determine which neurons need FoxP for operant self-learning.

CRISPR/Cas9 was a new technique for our lab and it worked exceedingly well, both for tagging the gene and for knocking it out. In our hands, it worked with high efficiency, reliably and, as far as we can tell, with no off-target effects.

The results here also contradict some prominent publications in our field, so it will be interesting to see what, if anything, happens to reconcile the findings.

Of course, as we try to practice open science, all the raw data for this work is publicly accessible with a liberal re-use license.

Posted on July 16, 2020 at 11:29 Comments Off on Tagging and knocking out FoxP with CRISPR/Cas9
Mar03

The ultimate Open Access timeline

In: science politics • Tags: fun, funders, open access, publishing, timeline
NIH, 1961: Journals are slow and cumbersome, why don’t we experiment with circulating preprints among peers to improve on the way we do science (Information Exchange Groups)?
Publishers, 1967: You have got to be kidding. Nobody cares about improving science, stop it, do it now.
Physicists, 1991: Hey, look, there is this cool online thing where we can circulate preprints nearly for free and everyone can read them (arxiv).
Libraries: Yay, we can pay for big subscription deals!
Publishers: Crickets (counting money)
Scholars, 1999: We can actually use that cheap online technology on a broader scale to ensure sound medical information for the world! (E-Biomed)
Publishers: No way José
Societies: But subscriptions are our money!
Scholars, 2002: Actually, this cool online thing is how we should be doing it not only in physics (BOAI).
Publishers: Let’s replace paywalls for reading with paywalls for publishing (BMC, PLoS)
Also publishers: Making money with bulk publishing is so gross! Let’s make money with bulk publishing ourselves (Megajournals, hand-me-down journals).
Libraries: Can we justify our existence by just paying for stuff?
NIH, 2005: Pretty please, if you have one of our grants, could you put a copy into our PMC?
Scholars: Huh?
Publishers, 2007: Open science is junk science (PRISM/Dezenhall)
NIH, 2008: If you take money from us, you have to make the papers open (OA mandate)
Publishers: But nobody can distinguish our copy from the one of the authors! We need to have exclusive money-making embargos on our papers or we lose our 36% profit margin!
NIH: Mkay. On top, we'll make taxpayers pay for the open part, too (PMC). Wouldn't dream of risking your profit. Like that?
Publishers, 2011: Let’s use all that money we got from the libraries to pay politicians so they sponsor a bill that makes all this NIH ‘open’ BS illegal! (RWA)
Biologists, 2013: Hey, look, it only took us 52 years to recover from publishers shutting down our Information Exchange Groups and now we too can do what physicists have been doing for the last 22 years! (biorxiv)
Publishers: We can actually do the cheap publishing, too – with peer-review on top! (F1000Research, ScienceOpen)
Scholars: Does publishing there get me a job?
Libraries: Can we pay for cheap publishing, too?
Publishers, 2017: We can actually create a market where we all have to compete with our services such that prices stay down and the competition drives innovation! (ORC)
Libraries: In case we can’t pay for it any more, can you funders do that?
Funders: Oh, sure this is cool, we want to have those! (Wellcome Open Research, Gates Open Research, etc.)
Scholars: Open what?
States, countries et al.: This is really getting ridiculous. We really have to stop these rip-off subscriptions and show the publishers who’s their daddy. My way or the highway!

OK, we’ll pay them even more if only they make the papers finally open – next year or so. Fine, some supra-inflation price increases are only fair. And you know what? Surveillance capitalism is all the rage right now, it’s totally cool to hand over usage data from readers to publishers, ok? That’s how things are these days, get over it.
EU funders, 2018: If you take our money, you have to make your papers open, but no money-making embargo allowed this time! Also, no more hybrid double dipping! (Plan S)
Publishers: Hmm, surely nobody is going to notice if we just add an “X” to the title of our hybrid journal, pretend it’s now two journals and keep double dipping? (mirror journals)
Scholars: But by threat of burger-flipping we have no choice but to salami-slice our discoveries into tiny morsels that need to be sexed up beyond recognition so the Nature editors don’t see how incremental our work is. So because of this academic freedom we really won’t make our papers open.
Libraries: Should we pay for mirror journals?
Societies: Now you are really trying to kill societies! Don’t you love what we do? Isn’t our mission to the general public worth millions and millions of library money? We need to stop this silly ‘open’ trend from re-surfacing in the US and tell Trump to Make American Science Great Again (AAP letter). It worked for E-Biomed in ’99 and it’s going to work again.
Libraries: There has got to be a way for us to pay for something in there! Yes, here’s the DEAL: we just got some power back by finally being able (thanks sci-hub!) to cancel the 30-year-old Big Deal subscriptions, so with this new-found power let’s hand our cojones right back to the publishers on a silver platter by making Big Deal publishing subscriptions with them that no sci-hub can ever liberate us from!
No-deal scholars: OK, I can publish for free in a journal with long titles, or I can take a loan and publish in a journal that gives me tenure? Tough choice! Thanks for nothing, OA wackaloons!
Publishers, 2020: Yesss, my prrrrrecioussss – how about paying us some money just for not rejecting your paper right away? (EPCs)
Libraries: Can we pay those, too?

So this is essentially what happened instead of us sitting down and thinking how we could spend our money in the most technologically savvy way to the benefit of science, scholars and society. A generation later, roughly US$300 billion poorer and none the wiser, it seems.

For a serious timeline, or for looking up the references in this one, see the Open Access Directory.

Posted on March 3, 2020 at 15:04 1 Comment
Dec11

Elsevier now officially a “predatory” publisher

In: science politics • Tags: Elsevier, predatory publishing, publishing

For a number of years now, publishers who expect to lose revenue in a transition to Open Access have been spreading fear about journals which claim to perform peer-review on submitted manuscripts, but then collect the publishing fee of a few hundred dollars (about 5–10% of what these legacy publishers charge) without performing any peer-review at all. Identifying such journals, in order to study whether they have any actual detrimental effect on scholarship beyond the claims of these commercially interested publishers, has proven difficult, as clearly defining the properties of such so-called "predatory" publishers is problematic. Today, a new, consensus definition of a "predatory" publisher or journal was published:

Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices

Given that such a definition has proven so difficult over the years, let’s go through each point and see if a legacy publisher such as, say, Elsevier fits that definition:

1. entities that prioritize self-interest at the expense of scholarship

Elsevier consistently prioritizes mega-profits over scholarship. There are too many examples to list (we would need a new server), so here are just some more.

Check

2. false or misleading information

Elsevier published nine fake journals. And, of course, Dezenhall/PRISM and many other FUD campaigns, past and ongoing. Extensive track record.

Check

3. deviation from best editorial and publication practices

Chaos, Solitons and Fractals? The recently sold journal “Homeopathy“? Ghostwriting? Publishing obvious fakes?

Check

4. lack of transparency

Widespread use of non-disclosure agreements in subscription contracts.

Check

5. aggressive and indiscriminate solicitation practices

Everybody who has received a "call for papers" outside their field from an Elsevier journal, raise your hand. Advertising extra products or database access to authors? Aggressive and misleading negotiation tactics?

Check

I’m glad the article with the "predatory" definition came with a handy cartoon (originally by David Parkins, modified by me).

So as far as this exercise goes, at least one of the main legacy publishers fits the five criteria for being branded a “predatory” publisher. According to the authors of the definition, this is the first step to solving the problem of predatory publishing. Personally, I’d say that canceling all subscriptions and “transformative agreements” to prevent further funding of predatory publishers would be a reasonable next step, now that we know how to identify them.

Posted on December 11, 2019 at 18:51 4 Comments

bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License.