bjoern.brembs.blog

The blog of neurobiologist Björn Brembs

Mar31

How to convince faculty to support subscription cancellations

In: science politics • Tags: cancellations, libraries, subscriptions

There have been repeated online discussions about my suggestion to libraries that now would be an excellent time to start cancelling subscriptions. The prime counter-argument is that librarians would risk their jobs, or at least face faculty backlash, if they did that. Personally, I have witnessed many such cancellations and there has never been a riot, nor even a librarian reprimanded, let alone fired. Not even when a library once had to cut 50% of all subscriptions. In fact, there is now a growing list of reports of painless big deal subscription cancellations. Faculty understand that there are limits to budgets. Faculty were resourceful in finding the literature even before we had so many new tools at our disposal.

However, things are different in different countries, at different institutions and with different faculty. My experience may not be representative. Either way, it doesn't hurt to have faculty on your side. In fact, I strongly advocate much closer collaboration between faculty and their librarians than we have now. At this point in time I consider librarians our closest allies and most important institutional partners. After all, who else would be more qualified, predestined even, to help us implement a modern infrastructure?

I have outlined before why librarians are in an excellent position right now to take the next step: the hands of faculty are pretty much tied at this point. Moreover, it goes without saying that librarians are the most competent people in this matter (maybe together with the few computer science faculty who actually work in this field).

Here’s a short, non-exhaustive list of arguments that I think ought to be very convincing for all but perhaps the most Luddite faculty, in defense of a budget shift from subscriptions to in-house infrastructure. Of course, one would preface such a list with a short explanation as to what is being argued:

“Dear faculty member,

As you may have heard in the news, our institution has joined a global initiative of hundreds of other scholarly institutions which strives to modernize our scholarly infrastructure. Our infrastructure has not undergone extensive modernization since its inception in the early 1990s, so this modernization is long overdue. One part of it entails moving subscription funds over to infrastructure funds. Above and beyond the technical limitations of the subscription literature (with which you are likely more than familiar), there are many other reasons why subscriptions are among the worst technologies to subsidize with tax funds. Here are some of the reasons why we now have to cancel subscriptions and how you will directly benefit from the consequences of these cancellations:

  • subscription funds go to corporations that waste >90% of the public moneys spent on them. Only their shareholders benefit
  • you have likely endured many cancellations in the past that came without any added benefit to you, beyond saving your institution money. This time, we will use the saved money to implement services that will benefit you directly: they will minimize your tedious work with writing, reading, data management and code, so you can focus on your research even more. Stay tuned, these services will be presented shortly.
  • once we have successfully transitioned, there will likely be funds left over, which will flow right back into your research budget
  • oh, and if you use our shiny new tools, you won’t even notice that we’re canceling subscriptions, as these tools will fetch (almost) all your literature for you
  • please pardon the dust while we remodel
  • for a more exhaustive list of benefits, please see [list of benefits]
  • please feel free to contact us at any time in case you feel your personal needs have not been addressed satisfactorily”

Of course, one would formulate these arguments a little more professionally than I have done here, but I wanted to convey the gist of where the thrust of the argument might go.

Posted on March 31, 2017 at 11:00 3 Comments
Mar22

Please address these concerns before we can accept your #OA proposal

In: science politics • Tags: flipping, infrastructure, publishing

Below, I’ve taken the liberty of “peer-reviewing” recent proposals to ‘flip’ subscription journals to open access.

The applicants have provided an interesting proposal for how to ‘flip’ the current subscription journals to an article processing charges (APC)-based ‘gold’ open access (OA) model. The authors propose to transition library subscription funds to reimburse author-paid APCs. This should be done by each institution first analyzing its current subscription portfolio and then introducing open access (OA) funds to cover article expenses instead. Importantly, maximum APC limits (“caps”) are suggested as a central measure to limit future APC increases. The proposal, while interesting and potentially ground-breaking, cannot be accepted in its current form. In particular, it seems the authors have overlooked an alternative route, which is cheaper and nevertheless holds several additional benefits not provided by the current proposal. In fact, there is a risk that the current proposal may come with some unintended consequences that could worsen the already sub-optimal status quo. Before I can recommend accepting this proposal, the authors at least need to address the following major concerns (in no particular order):

  1. Given that high-ranking journals publish the least reliable science, how does maintaining this pernicious hierarchy address the replication crisis? How is more public access to less reliable science benefiting the public?
  2. Why, in 2020, would anyone think 17th-century “journals” are still a technologically useful concept worth spending billions every year to subsidize? Why, in 2020, would selection and publishing still be bundled up as if to pretend printing presses were still a thing?
  3. As scholars are already evaluated according to the amount of tax funds (grants) they spend on experiments – with more (not less!) tax funds spent indicating a more competitive scientist – would it not be inconsistent if they were not also evaluated on the amount of tax funds they spend on publications? Especially if APCs scale with prestige, such that more money spent indicates more prestige? What would keep evaluation committees from treating APC $$$ analogously to grant $$$? Why should the current impact factor not simply be replaced by the APC amount paid?
  4. If the prestige universities sell is analogous to the prestige journals sell, has competition among universities significantly reduced tuition fees? If tuition has not decreased recently, why is the prestige universities sell not analogous to the prestige journals sell?
  5. Even though the vast majority of OA journals charge no APC at all and compete with the few that do charge, APCs are nevertheless increasing at above-inflation rates (up to 70% year-over-year). Additionally, some publishers already today charge APCs above the current average subscription price. In the light of this evidence, how would adding more APC-charging journals create more price competition than the journals that charge nothing at all already provide?
  6. If institutions choose/have to cap the amount of funds they reimburse their members for APCs, how does this not amount to a disadvantage for the scholarly poor? If they do not cap the amount, how can APC increases be controlled? In other words, why should a high-ranking journal not be able to charge an APC of US$50k for a paper that virtually guarantees tenure or funding, when Harvard can charge US$250k for a degree that virtually guarantees employment?
  7. Given that even the most optimistic estimates only project a meager 20-30% short-term cost reduction without being able to guarantee sustained reductions in the future (indeed, there is evidence already now, when most OA journals do not charge any APC at all, of above-inflation APC increases!), why would one not instead champion a scenario with an already existing business model and market, where sustained reductions on the order of 90% are virtually guaranteed? (I am referring to the alternative route: migrating to a service-based market, with companies like Scholastica or Ubiquity – if their services became too expensive, institutions would be free to switch providers or run the infrastructure themselves.)
  8. Why would anyone want to pay billions in excess of actual costs merely to protect the balkanization of our literature into >30k journals, effectively preventing content mining and hampering the implementation of an efficient recommendation system for our literature?
  9. With the abysmal track records of publishers first hogging our copyright and then putting APC-enabled articles back behind paywalls, why should we continue to trust these for-profit entities with our most valuable assets, our literature, instead of merely allowing them to compete with other companies for being allowed to polish our crown jewels?
  10. How will flipping to a different business model address our need for a sustainable infrastructure covering our data and code, at the same time as it is addressing this (non-exhaustive) list of missing/lacking functionalities in our scholarly literature?
    • Link-rot
    • No scientific impact analysis
    • Lousy peer-review
    • No global search
    • No functional hyperlinks
    • Useless data visualization
    • No submission standards
    • (Almost) no statistics
    • No content-mining
    • No effective way to sort, filter and discover
    • No semantic enrichment
    • No networking feature

The point of these questions is, of course, that all of them would be addressed if libraries and institutions kept our publications in-house, according to modern infrastructure standards, with a flourishing service market to constantly develop and improve their functionality and usefulness for the scientific community and the public. Initiatives spearheading such a business model (still in the deprecated, legacy journal format, though) are the Open Library of Humanities, LingOA, MathOA, PsyOA and other nascent Fair OA organizations. The authors should discuss why their more expensive option is so superior to the less expensive alternative route, which does address every one of the points made above, in particular taking into account that public institutions are commonly required to choose the lowest responsible bidder.

Failing to adequately address these reviewer concerns should result in a rejection of the proposal.

UPDATE: and as if right on cue, a leaked document shows how publishers will try to subvert any ‘flipping’ negotiations.

Posted on March 22, 2017 at 13:04 2 Comments
Mar07

Are we inadvertently supporting the defunding of public science?

In: science politics • Tags: academia, journals, public science, publishing

There can be little doubt that the defunding of public academic institutions is a staple of populist movements today, whether it is Trump’s budget director directly asking if one really needs publicly funded science at all, the planned defunding of the arts and humanities endowments, or the initiatives to completely abolish the EPA and other science agencies. Across the pond, too, there are plenty of populist parties and other Trump fans who strive to mimic their idol and rid their countries of intellectuals who may see through their shenanigans, or use evidence to oppose their policies.

For decades, these anti-intellectual forces have been fighting science tooth and nail around the globe. Recently, in the UK, a website whose former boss is now the main advisor to Trump in the White House ran an article titled: “When you hear a scientist talk about ‘peer-review’, you should reach for your Browning” (link to archive.org snapshot). The author argues that the scholarly literature we refer to when making scientific claims was written by “charlatans and chancers”.

The horrifying aspect of this article is that, from their perspective, this isn’t even ‘fake news’. Already in 2005, John Ioannidis found that “most published research findings are false“. Since then, numerous reports have been published which seem to suggest that our scholarly literature is much less reliable than one would desire. While the scope of irreproducibility is not yet clear, the overwhelming majority of scientists believe the problems are bad enough to justify the term ‘replication crisis’.

Besides criminal intent and inadequate training, one major factor in the potential unreliability of our scholarly literature is socioeconomic in nature: the two factors deemed to contribute most to irreproducible research were selective reporting and the pressure to publish.

In an attempt to provide accountability and to combat biases in scholarship, we have introduced quantitative measures to assess individual scholars. The two aspects most widely used in hiring, promoting and funding scholars are their quality and their productivity. To quantify quality, individuals are commonly assessed by where they publish; for productivity, by how much they publish. Both of these quantifications have been shown to be highly problematic, especially in today’s hyper-competitive scholarly environment.

In the fields where scholars publish their experiments in journals, it has been shown that the most prestigious journals publish the least reliable science. So by preferentially hiring, promoting and funding scientists who publish in these journals, we are also rewarding unreliability, to some extent. Similarly, if we count the number of journal articles with novel findings, we are selecting for underpowered studies with erroneous conclusions.

Taken together, these data suggest that even if the reliability of our current literature turns out to be higher than the initial reports suggest, we are currently running a system poised to make it less reliable every year. Consequently, inasmuch as anti-science forces seize on the unreliability of the scholarly literature to support their defunding plans, the horrifying possibility emerges that every stakeholder – academic, librarian or publisher – who, willingly or not, props up the current journal system may be inadvertently supporting anti-intellectual agendas.

Posted on March 7, 2017 at 10:44 Comments Off on Are we inadvertently supporting the defunding of public science?
Feb09

Data structures for Open Science

In: own data • Tags: data structure, metadata, open data, open science

For the last few years, we have been working on the development of new Drosophila flight simulators. Now, finally, we are reaching a stage where we are starting to think about how to store the data we’ll be capturing, both with Open Science in mind and, in particular, keeping in mind that this will likely be the last major overhaul of this kind of data handling before I retire in about 20 years. The plan is to have about 3-5 such machines here and potentially others in other labs, if these machines remain as ‘popular’ as they have been over the last almost 60 years. So I really want to get it right this time (if there is such a thing as ‘right’ in this question).

Such an experiment essentially captures time series of around 70-120k data points per session in about 3-6 variables, i.e., a total of at most ~500-800k table cells per session, each with 8-12 bit resolution. There will be at most about 8-16 such sessions per day and machine, so we’re really talking small/tiny data here.

Historically (i.e., from the early 1990s on), these data were saved in a custom, compressed format (they needed to fit on floppy disks) with separate meta-data and data files. We kept this separation of meta-data from data in other, more modern set-ups as well, such as our Buridan experiments. For these experiments, we use XML for the meta-data files (example data). One of our experiments also uses data files where the meta-data are contained as a header at the beginning of the file, with the actual time-series data below (example data). That of course makes the data easy to understand and ensures the meta-data are never separated from the raw data, i.e., there is less potential for mistakes. In another, newer experiment we are following some of the standards from the Data Documentation Initiative (no example data, yet).

With all of these different approaches over the last two decades, I thought I ought to get myself updated on what surely must by now be generally agreed-upon conventions for data structures, meta-data vocabularies, naming conventions, etc. I started looking around and got the impression that the different approaches we have used over time are all still in use, alongside some new ones, of course. I then asked on Twitter, and the varying responses confirmed my impression that there isn’t really a “best-practice” kind of rule.

Given that there was quite a lively discussion on Twitter, I’m hoping to continue this discussion here, with maybe an outcome that can serve as an example use case someday.

What do we want to use these data for?

Each recording session will be one animal experiment with different phases (“periods”), for instance some “training” periods and some “test” periods, with experimental conditions differing between training and test. The data will be saved as time series continuously throughout the experiment, so the minimal data would be a timestamp, the behavior of the animal and a variable (stimulus) that the animal is controlling with its behavior. Thus, in the simplest case, three columns of integers.

The meta-data for each experiment have to contain a description of the columns, of course, as well as the date and time at the start of the experiment, the genotype of the animal, a text description of the experiment, the DOI of the code used to generate the data, the sequence and duration of periods, the temperature, and other variables to be recorded or set on a per-session or per-period level.
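
As a concrete illustration (a minimal Python sketch; every field name and value below is a hypothetical example, not a finalized vocabulary or our actual format), a single session could be stored with its meta-data as a header directly above the three integer columns:

```python
import json

# Illustrative per-session meta-data; all field names and values are hypothetical examples.
metadata = {
    "description": "flight simulator, photopreference session",
    "started": "2017-02-09T14:32:00+01:00",            # date and time at the start
    "genotype": "wild type Berlin",
    "code_doi": "10.5281/zenodo.0000000",               # placeholder DOI of the acquisition code
    "columns": ["timestamp_ms", "behavior", "stimulus"],
    "periods": [                                         # sequence and duration of periods
        {"name": "pretest", "seconds": 120},
        {"name": "training", "seconds": 240},
        {"name": "test", "seconds": 120},
    ],
    "temperature_celsius": 25.0,
}

# Write the meta-data as a single header line, followed by the raw time series
# (three integer columns), so that meta-data and data can never be separated.
with open("session_001.dat", "w") as fh:
    fh.write("# META " + json.dumps(metadata) + "\n")
    for row in [(0, 12, 180), (5, 14, 181), (10, 9, 183)]:   # toy values
        fh.write("\t".join(str(value) for value in row) + "\n")

# Reading it back recovers both parts from the same file.
with open("session_001.dat") as fh:
    header = json.loads(fh.readline().removeprefix("# META "))
    rows = [tuple(int(x) for x in line.split()) for line in fh]
```

Whether the header ends up being JSON, XML or DDI-style fields is secondary; the point is that one file carries both the meta-data and the raw time series.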

A dataset or small project can consist of maybe three to four groups of experiments, let’s say one experimental genotype and two control groups. Traditionally, the way we have handled this grouping in most of our experiments is to keep a text file in which the experimenter lists which file belongs to which group. That way, anybody can read the text file and get an understanding of the experimental design. The file also contains comments and notes about the experimenter’s observations during the experiment and a text description of the project. In a way, this text file is like a meta-data file for a dataset rather than for an individual experiment, and thus should probably also contain some minimal mark-up. This text file is then read by either custom software or an R script to compile summary data for each group, e.g., means and standard errors of some variables we extract on a per-period basis, which are then plotted and compared between groups. As there are numerous ways to evaluate an animal’s behavior if we have the full time series, there is any number of different parameters one might want to extract from the data and plot/compare.
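
To illustrate the grouping step, here is a hedged sketch; our actual pipeline uses custom software or an R script, so this Python version, with invented file names and a dummy per-session score, is only meant to show the idea:

```python
import statistics

# A hypothetical grouping file: one "group: session file" pair per line,
# plus free-text notes marked with '#'. All names are invented for illustration.
GROUPS_TXT = """\
# Project: photopreference after wing manipulation (example only)
experimental: session_001.dat
experimental: session_002.dat
control_1: session_003.dat
control_2: session_004.dat
"""

def parse_groups(text):
    """Return a mapping from group name to the list of session files in that group."""
    groups = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                          # skip notes and blank lines
        group, filename = (part.strip() for part in line.split(":", 1))
        groups.setdefault(group, []).append(filename)
    return groups

def extract_parameter(filename):
    """Placeholder for a real per-period score computed from the session's time series."""
    return 0.5

def summarize(groups):
    """Compute per-group mean and standard error of the extracted parameter."""
    summary = {}
    for group, files in groups.items():
        scores = [extract_parameter(f) for f in files]
        sem = statistics.stdev(scores) / len(scores) ** 0.5 if len(scores) > 1 else 0.0
        summary[group] = {"n": len(scores), "mean": statistics.mean(scores), "sem": sem}
    return summary

print(summarize(parse_groups(GROUPS_TXT)))
```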

This is where the open science part would come in. Whenever the user runs the script that evaluates, plots and compares the data, the entire dataset is automatically made publicly accessible. Along with the dataset (raw data, meta-data and grouping text file), all the evaluations should also be deposited. Currently, we do this as a PDF file, but that is all but useless, as it is only fit for human consumption. Ideally, I’d like this evaluation file to contain all the content of the grouping text file, as well as the DOI of the script that generated it and (semantic?) markup that structures the evaluation document. Such an evaluation document would be readable both by machines and by humans (with a reader, which is why we started with the PDF format) and would provide an overview of exactly what was done to which data.
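
And a correspondingly hedged sketch of the ‘evaluate, then deposit’ step. The deposit function below is a placeholder for whatever repository API (figshare, an institutional server, etc.) we end up using; every name and value in it is an assumption, not a description of an existing service:

```python
import json
from datetime import datetime, timezone

def build_evaluation_document(summary, grouping_text, script_doi):
    """Assemble a machine-readable evaluation record instead of an opaque PDF."""
    return {
        "generated": datetime.now(timezone.utc).isoformat(),
        "script_doi": script_doi,        # DOI of the evaluation script (placeholder)
        "grouping_file": grouping_text,  # full content of the grouping/notes file
        "evaluations": summary,          # per-group means, standard errors, etc.
    }

def deposit(files):
    """Hypothetical stand-in for a repository upload that returns a persistent identifier."""
    print("depositing:", ", ".join(files))
    return "10.1234/placeholder-doi"

# After every evaluation run, raw data, meta-data, grouping file and the
# evaluation document are deposited together, and the returned DOI is recorded
# so a manuscript can later link to this exact evaluation.
summary = {"experimental": {"n": 2, "mean": 0.5, "sem": 0.0}}
grouping_text = "..."  # content of the grouping text file, elided here
evaluation = build_evaluation_document(summary, grouping_text, "10.5281/zenodo.0000000")
with open("evaluation.json", "w") as fh:
    json.dump(evaluation, fh, indent=2)
dataset_doi = deposit(["session_001.dat", "session_002.dat", "groups.txt", "evaluation.json"])
```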

One eventual goal is to also use these evaluation documents during manuscript authoring. Instead of copying the figures, pasting them into a manuscript and then trying to describe the statistics, I’d like to just link the different evaluations from inside the manuscript. Each figure in a manuscript would then just be a link to one of the evaluations in the evaluation document, the one I want readers to see so they can follow my line of argument. Any reader who wants to see other aspects of the data would have single-click access to the entire evaluation document, with all our evaluations for this dataset, as well as access to all the code used to generate and evaluate the data, if they so wish. For this, all the data and meta-data in each dataset have to be linked both to each other and to the code and the text. Of course, all the data in a manuscript should also be linked together, even though they likely come from different datasets/projects.

With the data and code solutions we’re currently developing, this should allow us to just write code, collect data and link both into our manuscripts. Everything else (data management, DOI assignment, data deposition, etc.) would be completely automatic. Starting at the undergraduate student level, users would simply have to follow one protocol for their experiments and have all their lab-notebooks essentially written and published for them – they’d have a collection of these evaluation documents, ready to either be used by their supervisor, or to be linked in a thesis or manuscript.

So, what would be the best data structure and meta-data format with these goals in mind?

Posted on February 9, 2017 at 23:50 17 Comments
Feb03

Open Science: Too much talk, too little action

In: science politics • Tags: infrastructure, open science, publishing

Starting this year, I will stop traveling to any speaking engagements on open science (or, more generally, infrastructure reform), as long as these events do not entail a clear goal for action. I have several reasons for this decision, most of which boil down to a cost/benefit estimate: the time spent traveling no longer seems worth the hardly noticeable benefits.

I got involved in Open Science more than 10 years ago. Trying to document the point when it all started for me, I found posts about funding all over my blog, but the first blog posts on publishing were from 2005/2006, followed by the announcement of my joining the editorial board of the newly founded PLoS ONE in late 2006 and my first post on the impact factor in 2007. That year also saw my first post on how our funding and publishing system may contribute to scientific misconduct.

In an interview on the occasion of PLoS ONE’s ten-year anniversary, PLoS mentioned that they thought the publishing landscape had changed a lot in these ten years. I replied that, looking back ten years, not a whole lot had actually changed:

  • Publishing is still dominated by the main publishers which keep increasing their profit margins, sucking the public teat dry
  • Most of our work is still behind paywalls
  • You won’t get a job unless you publish in high-ranking journals.
  • Higher-ranking journals still publish less reliable science, contributing to potential replication issues
  • The number of journals is still increasing exponentially
  • Libraries are still told by their faculty that subscriptions are important
  • The digital functionality of our literature is still laughable
  • There are no institutional solutions to sustainably archive and make accessible our narratives in forms other than text, nor our code or our data

The only real difference over the last few years lies in the fraction of openly available articles, but those remain a small minority, less than 30% of the total.

So the work that still needs to be done is exactly the same as it was when Stevan Harnad published his “Subversive Proposal”, 23 years ago: getting rid of paywalls. This goal won’t be reached until all institutions have stopped renewing their subscriptions. As I don’t know of a single institution without any subscriptions, that task remains just as big now as it was 23 years ago. Noticeable progress has only been made on the margins and potentially in people’s heads. Indeed, by now only few scholars haven’t heard of “Open Access”, but many apparently have not grasped the issues: my librarian colleagues keep reminding me that their faculty believe open access has already been achieved because they can access everything from the computers in their institute.

What needs to be said about our infrastructure has been said, in person, online, in print, on audio and on video. Those competent individuals at our institutions who make infrastructure decisions hence know enough to be able to make their rational choices. Obviously, if this is the state we’re in after 23 years of talking about infrastructure reform, our approach wasn’t very effective, and my contribution was clearly negligible, if it existed at all. There is absolutely no loss if I stop trying to tell people what they should already know. After all, the main content of my talks has barely changed in the last eight or so years. Only more recent evidence has been added, and my conclusions have become more radical, i.e., trying to tackle the radix (Latin: root) of the problem rather than palliatively caring for some tangential symptoms.

Besides the apparent lack of efficacy, the futility of trying to convince scholars that something needs to change was perhaps most obviously demonstrated in the almost embarrassing “Cost of Knowledge” debacle of 2012. The name implies that Elsevier was charging too much for its subscriptions and needed to be punished by the scholarly community. The common procedure for consumers to protest corporate behavior is to actually boycott the company, which in most cases entails withholding purchases from it. However, the few scholars who signed up (at the time of this writing 16,541 of about 700,000 Elsevier authors/reviewers, out of an estimated total of about 7 million researchers world-wide) never even attempted to get their libraries to drop subscription purchases. Instead, they merely pledged to stop authoring, reviewing or editing manuscripts for the publishing giant. In other words, the “boycott” resulted in exactly zero dollars/euros of loss to the multi-million dollar corporation. Inasmuch as Elsevier was able to replace some remunerated editors with cheaper individuals, the “boycott” may even have been a small net positive for the company books. There really is no such thing as bad PR.

As ineffective as such approaches obviously have been over the last decade or two, there can be no dispute that a lot more people are now talking about these issues. Given perhaps another 23 years, or 50, there may even be some tangible effects down the road, as long as one assumes some sort of exponential curve kicking in at some point fairly soon. It sure feels as if such an exponential curve may be about to bend upwards. With the number of Open Science events, the invitations to talk have multiplied recently, giving me the warm, fuzzy feeling that people other than myself enjoy hearing me talk, too (at least as the one who accepted after the eight previous invitees had declined). Before I got tenure, it was also helpful to add each invited presentation as a line to my resume. Now I have tenure, so I need neither more ego-stroking nor more lines on my CV. I’ll retire in pretty much 20 years, so any change that doesn’t happen within the next 2 years or so is essentially too late to invest in.

In addition to writing online and speaking publicly, I have also authored a review article on the unintended consequences of journal rank and joined organizations and initiatives that I hoped might offer a way towards an actual boycott, namely to start dropping subscriptions on a massive scale. In terms of tangible effects, the outcome of all of these efforts over the last dozen years or so can be reliably quantified at exactly zero, no error bars, no decimals.

In the light of this anecdata, it is straightforward to now change strategy. As I expect events where specific courses of action (see update below) are planned to remain as scarce as they are now, declining future speaking invitations should free up significant time that I can spend back here in my laboratory. My plan is to invest this time not only into developing new experiments, but also into making sure our lab will be outfitted with some core elements of the kind of infrastructure I had hoped we might one day all buy with the money we would no longer be wasting on subscriptions.

We’ve already started by making some of our experiments publish their raw data automatically by default. This will be expanded to cover as many of our experiments as technically feasible. To this end, we have started to work with our library to mirror the scientific data folders on our hard drives onto the library’s servers and to provide each project with a persistent identifier whenever we evaluate and visualize the data. We will also mirror our GitHub repository as well as our SourceForge code at our library, such that all of our code will be archived and accessible right here, but can be pushed to whatever new technology arises for code sharing and development. Ideally, we’ll find a way to automatically upload all our manuscripts to our publication server with whatever authoring system we end up choosing (we are testing several of them right now). Once all three projects are concluded, all our text, data and code will not only be open by default, it will also be archived, backed up and citable at the point of origin, with a public institution that I hope is likely to outlive any corporation.

This infrastructure, since we will be the only ones using it, won’t contain any of the technology that would make for a significant scientific benefit, but at least there will be some personal benefit: the infrastructure will sustainably archive and make accessible all of our text, data and code without any additional workload for us. That way at least I get to enjoy a small fraction of the benefits I was hoping for when I started a little over a decade ago. Apparently, I’m one of only very few who even feel the need for such services, so why should I keep trying to convince anybody else?

If all goes as planned, another benefit of this change may hopefully be that I’ll be able to spend more evenings with my family.

 

UPDATE (3/3/2017): In the light of recent exchanges, I’ll attempt to clarify further:

Essentially, all I have been saying at such events can be summarized in three simple steps:

  1. Journals are detrimental to scholarship in many ways, here’s the evidence
  2. Subscriptions to these detrimental journals are so outrageously expensive that the costs prevent us from fixing our infrastructure so we can modernize, protect and keep improving scholarship
  3. Logical consequence from 1+2: cancel subscriptions to free up the funds to buy/implement the components of our infrastructure that will help us modernize, protect and keep improving scholarship

Both 1 and 2 have by now been discussed and debated in so many fora, so many times, and by so many more able, knowledgeable and gifted people than me, that it’s hard to believe anybody but newcomers or the willfully ignorant could have failed to arrive at these insights by now.

Hence, I am still teaching 1+2 to the students here locally.

I will still try to attend any event that aims to directly tackle the problems towards achieving 3. Such events may define goals such as:

  1. Strategies to convince libraries that it is in everybody’s (especially their own) interest to cancel subscriptions now
  2. Strategies to enable libraries to spend the saved subscription funds on infrastructure investments
  3. Developing standards and defining necessary and desired functionalities for a modern scholarly infrastructure
  4. Strategies to deploy a minimal core set of functionalities to cooperating institutions

or anything in this direction.

Posted on February 3, 2017 at 16:16 8 Comments
Dec21

Why did the moth fly into the flame?

In: own data • Tags: decision-making, Drosophila, photopreference, phototaxis

Few insect behaviors are more iconic than the proverbial moths circling the lamps at night.

Artist: Dave McKean

These observations are prime examples of the supposedly stereotypic insect responses to external stimuli. In contrast, in our new paper that just appeared today, we describe experiments suggesting that insects appear to make a value-based decision before approaching the light. However, compared to us, an insect’s decisions can take very different aspects into account. For instance, when a fruit fly (Drosophila) decides whether to approach light, it takes its flying ability into account. If any parameter of flight is sufficiently compromised, it is better to hide in the shadows, whereas the full ability to fly emboldens the animal to seek out the light. In a way, these results are reminiscent of the confidence with which a certain cartoon bee approaches a window:

Bee Movie – This Time! (YouTube video)

Perhaps the most beautiful result of this work is the first investigation into the neurobiological mechanisms underlying the different valuation of light (or dark) stimuli in flying and non-flying flies. We found that the tendency of flies to approach or avoid light is not an all-or-nothing decision: different fly strains and different manipulations of flying ability show different degrees of approach/avoidance as well as indifference. Experiments with transgenic flies showed that we can push the flies’ preference back and forth along this ‘photopreference’ gradient by activating or inhibiting neurons that secrete either dopamine or octopamine. Octopamine and dopamine are so-called neuromodulators, known to be responsible for valuation processes in other experiments across animals and, in the case of dopamine, also in humans. Commonly, they do this by modulating the activity of neurons involved in processing sensory stimuli, such that the value of these stimuli to the animal changes after the modulator is applied.

In our case, activating dopamine neurons made flightless flies, which would otherwise avoid light, approach it. Activating octopamine neurons, on the other hand, made normal flies, which approach light, hide in the shadows, despite the manipulation leaving their flying capabilities intact. The results obtained after inhibiting these neurons were mirror-symmetric: blocking dopamine neurons from firing made the flies seek darkness without affecting their flying ability, and wingless flies with their octopamine neurons blocked approached the light as if they could fly. These results inspired the following illustration of how these neuromodulators may cooperate to orchestrate the evolutionarily advantageous decision for insects when faced with a light/dark choice:

Illustration of the hypothetical balance between octopaminergic/tyraminergic (OA/TA) and dopaminergic (DA) neurons establishing the valuation of light/dark stimuli in fruit fly photopreference experiments.

Future research will show whether these same neurons indeed change their activity when the flying ability is manipulated, as one would expect from these results.

It is possible that the evolutionary origin and ultimate ethological relevance may be found in the behavior of flies which have just emerged from their pupal case. The wings of these young adults are still folded up, as the insect first needs to pump blood into the veins of the wings to expand them. During this time, the flies perch underneath horizontal surfaces and avoid light. Even when the wings are fully expanded but not yet capable of supporting flight, this behavior persists. Only once the wings are ready for flight do the flies perch on top of horizontal surfaces and approach light.

Newly eclosed fly (source: hawaiireedlab.com)

Adult flies in the wild that live on rotting fruit probably face the challenge of the sugary liquid of the fruit occasionally gumming up their wings, only for the next rain to wash them clean again. We mimicked this situation in one of our experiments by gluing the wings together with sucrose solution and subsequently removing the sugar glue from the flies’ wings with water. As expected, the flies approached the light before the treatment, avoided it when the wings were unusable and approached it again after the ‘shower’.

The first mention in the literature of adult flies with compromised wings being less attracted to light was by Robert McEwen in 1918. In the intervening 49 years, we could not find any further mention of this phenomenon in the scholarly literature. Only in 1967 did one of the founding fathers of Drosophila neurogenetics, Seymour Benzer, publish work mentioning that adult flies with deformed wings are less phototactic, but without any insight into the underlying neurobiology. It took yet another 49 years after Benzer’s work, without any mention in the literature, before our paper described the first neurobiological components of this case of insect behavioral flexibility, 98 years after McEwen’s original discovery. Our postdoc, E. Axel Gorostiza, the first author of the paper, will start his own laboratory on this topic, so it seems unlikely that it will take yet another 49 years for the fourth publication on this topic to appear.

Of course, all our raw data are available from figshare. This was also the first paper from our laboratory where all authors were listed with their ORCID IDs and all materials and protocols were referenced with their RRIDs and protocols.io DOIs, respectively. All previous versions of the article are available as bioRxiv preprints as well.


Original research article:

A Decision Underlies Phototaxis in an Insect. Royal Society Open Biology

E Axel Gorostiza, Julien Colomb, Björn Brembs

Freie Universität Berlin, Universität Regensburg, Germany.

Abstract:

Like a moth into the flame – Phototaxis is an iconic example for innate preferences. Such preferences likely reflect evolutionary adaptations to predictable situations and have traditionally been conceptualized as hard-wired stimulus-response links. Perhaps therefore, the century-old discovery of flexibility in Drosophila phototaxis has received little attention. Here we report that across several different behavioral tests, light/dark preference tested in walking is dependent on various aspects of flight. If we temporarily compromise flying ability, walking photopreference reverses concomitantly. Neuronal activity in circuits expressing dopamine and octopamine, respectively, plays a differential role in photopreference, suggesting a potential involvement of these biogenic amines in this case of behavioral flexibility. We conclude that flies monitor their ability to fly, and that flying ability exerts a fundamental effect on action selection in Drosophila. This work suggests that even behaviors which appear simple and hard-wired comprise a value-driven decision-making stage, negotiating the external situation with the animal’s internal state, before an action is selected.

Posted on December 21, 2016 at 09:30 Comments Off on Why did the moth fly into the flame?
Dec20

So your institute went cold turkey on publisher X. What now?

In: science politics • Tags: open access, publishing, subscriptions

With the start of the new year 2017, about 60 universities and other research institutions in Germany are set to lose subscription access to one of the main STEM publishers, Elsevier. The reason is the ongoing negotiations of the DEAL consortium (600 institutions in total) with the publisher. In the run-up to these negotiations, all members of the consortium were urged not to renew their individual subscriptions with the publisher, and most institutions apparently followed this call. As the first Elsevier offer was rejected by DEAL and further negotiations have been postponed until 2017, the participating institutions whose individual contracts run out this year will be without continued subscription access, as long as they don’t cave in and broker new individual contracts.

At first, this may seem like a massive problem for all students and faculty at these institutions. However, there are now so many alternative access strategies that the well-informed scholar may not even notice much of a difference. Here are ten different options, in no particular order (feel free to offer more in the comments):

Keep trying to access the publisher’s site: In many cases, the institutions have signed subscription contracts with archival rights, meaning you have access to content that once was subscribed. Moreover, many journals offer a ‘hybrid’ option, meaning that some articles are made available open access by the authors paying an extra fee. In both cases, the publisher site will still provide you with access to the article in question, even though your institution has not extended the subscription.

LOCKSS: This is a solution for libraries which did not obtain archival rights with the publisher. It keeps local copies of subscribed content precisely for such cases. Ask your friendly librarian if you encounter content that you know was once accessible but is now inaccessible; your library will likely be able to assist you in getting access via LOCKSS.

Google Scholar: Most entries in a GScholar search come not only with a non-publisher version of the article, but even with several different access options.

PubMed: For those of you who use PubMed, it links to various versions, including the PMC version, in its search results. I’ve also asked them if they can link to other freely available versions. In many cases, these versions only become available after some embargo period.

DOAI / oaDOI: You can copy and paste the digital object identifier (DOI) of any article into services which locate a freely available version for you. DOAI and oaDOI search preprint archives, ResearchGate and institutional repositories for accessible versions (a minimal script sketch of such a lookup follows below this list of options).

#icanhazpdf: For Twitter users, this hashtag attached to a link to the article will alert other users of your need for this article. If someone has access, they can send you the article.

Article payment: For quick (but not free!) access to an article, just grit your teeth and pay for the article (buy or rent). Some institutions are already reimbursing such costs.

Contact author: A less speedy option is to contact the corresponding author and ask them for a copy of the article. I remember doing this via snail mail in the days before the internet – and receiving “offprint-requests” as pre-printed postcard forms (filled in with type-writer) for my articles. That’s how old I am.

Inter-library loan: Even if more and more institutions are dropping their big deal subscriptions, there are still many subscriptions around. Your library can likely get you the article via one of the many different versions of inter-library loan (“Fernleihe” in German). In Germany, there is also a fee-based library service called subito, which operates on a network of libraries; it is very convenient and effective, and cheap if your institution is paying for it. If you don’t know how to use these services, ask your friendly librarian for assistance.

Sci-Hub: If all else fails, there still is the option of obtaining the article from Sci-Hub. It covers roughly 50% of all articles, so there is a pretty good chance you’ll get what you need there. I have written before why I find Sci-Hub to be a necessary and effective form of civil disobedience. There is a catch, however. In many countries Sci-Hub is considered illegal, as it offers copyrighted content for download. While there is no definitive, generally accepted decision, there is a pending lawsuit brought by Elsevier against Sci-Hub. Legal opinions vary, but an early consensus seems to be emerging that individual downloads, while infringing, are unlikely to be prosecuted, whereas institutions which fail to follow up on publisher complaints may at some point become liable. Use at your own risk.

These are ten different options (9 of them completely legal) to obtain scholarly content without a current subscription to the scholarly journal in question. The statistics on article availability, as well as my personal experience, suggest that almost every article will be available via at least one of these options.
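
For the DOAI/oaDOI option above, here is the minimal script sketch mentioned there. It assumes the public Unpaywall/oaDOI v2 REST endpoint and response fields as I understand them; check the current API documentation before relying on the exact names, and substitute your own e-mail address and DOI:

```python
import json
import urllib.request

def find_open_copy(doi, email="you@example.org"):
    """Ask the oaDOI/Unpaywall API whether a freely accessible copy of an article is known."""
    url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    location = record.get("best_oa_location")   # None if no open copy is known
    if location:
        return location.get("url_for_pdf") or location.get("url")
    return None

# Example call with a placeholder DOI; substitute the DOI of the paywalled article you need.
print(find_open_copy("10.1234/placeholder-doi"))
```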

Importantly: if you find that you can indeed access most of the content you need to read via such means, let your librarian know that you are fine with dropping subscriptions. It will eventually allow your institution to afford providing you with a modern digital infrastructure.

UPDATE (Dec. 21, 2016): There were several questions as to the legality of #icanhazpdf. Sharing of scholarly articles among scholars has been standard practice for decades, if not centuries. Hence, sending individual articles to individual scholars has never been illegal and still is not to this day. The Twitter hashtag merely brings two scholars together for this age-old standard practice. Even Elsevier explicitly allows such sharing (PDF):

Scholarly sharing of articles [8 above]
Current ScienceDirect subscription agreements permit authorized users to transmit excerpts of subscribed content, such as an article, by e-mail or in print, to known research colleagues for the purpose of scholarly study or research. Recipients of such scholarly sharing do not themselves have to be affiliated with an institute with a ScienceDirect subscription agreement.

h/t to Jochen Johannsen and Bernhard Mittermaier for the source.

UPDATE II (Jan. 27, 2017): The latest version of the Open Access Button also retrieves publicly available versions of paywalled articles, much like DOAI or oaDOI. However, it comes with a critical improvement over these two services: it allows you to send a request for any article that isn’t already covered, and the OAButton team will try to make it available. In that way, the OAButton not only provides you with the articles, but also expands the coverage of publicly accessible research, such that ever more content becomes available without subscriptions.

This is a wonderful development: more and more services are providing scholarly articles without a subscription. Remind me again, why do we even have subscriptions? Subscriptions are probably the worst value for money of any subsidized service our scholarly institutions provide us with. We should cancel all subscriptions now; there is no need for them, and paying for them constitutes fiscal irresponsibility, as far as I’m concerned.

UPDATE III (Feb 16, 2017):

It’s now been more than six weeks since the German institutions lost all access to the journals of publishing giant Elsevier. You may wonder how they have fared. According to a news report:

The loss of access to Elsevier content didn’t overly disturb academic routines, researchers say, because they found other ways to get papers they needed

It’s official. It works. We don’t need subscriptions.

UPDATE IV (Mar 17, 2017):

Impactstory has come out with their browser add-on “Unpaywall” that lets you find open versions of paywalled articles with the click of a button. It’s never been easier to drop subscriptions than now!

UPDATE V (Mar 25, 2017):

On Twitter, I’ve been sent two more methods of accessing paywalled scholarly content. One is called OpenDOAR and is somewhat similar to DOAI/oaDOI or the Open Access Button in that it accesses content in ‘green’ repositories. The other is Scholar on Reddit, which works similarly to #icanhazpdf on Twitter: you post a request and some good soul with access provides the content. So many ways to get access, and it just keeps getting better. Have you already talked to your librarian and told them that you are OK with subscription cancellations?

Posted on December 20, 2016 at 14:37 6 Comments
Dec06

Should public institutions not be choosing the lowest responsible bidder?

In: science politics • Tags: publishers, subscriptions

Public institutions the world over are required to spend their funds responsibly. Commonly, this is ensured by requiring them to solicit bids for purchases or services above a certain threshold. If you work at a public institution and have wondered, e.g., why you are only allowed to buy a computer from your computing facility, which only sells one particular brand, then the answer likely is that this brand won the bidding contest.

The idea here is, to quote from an old (1942) document from the US:

The awarding of contracts by municipal and other public corporations is of vital importance to all of us, as citizens and taxpayers. Careless and inefficient standards and procedures for awarding these important community commitments have increased unnecessarily the tax burdens of the public. To secure a standard by which the awarding of public contracts can be made efficiently and economically, and with fairness to both the community and the bidders, the constitutions of some states, and the statutes regulating municipal and public corporations provide for the award of public contracts to the lowest responsible bidder.

As far as I know, most countries have such purchasing rules in place for essentially every service or purchase. However, it seems one area of services is exempt from this rule: scholarly publishing services, in particular journal article publishing (not sure about books). While every major plumbing operation, every ventilation improvement and every cleaning contract needs to be signed after a competitive bidding procedure, we negotiate subscription deals behind closed doors and the signed contracts are often hidden behind non-disclosure agreements. It seems to me that the second sentence in the quote above describes the consequences of these back-room dealings quite accurately. What evidence is there to support this view?

If one looks at the costs of these subscription deals, one finds that they amount to about US$5,000 per published subscription article. However, open access publishing costs (not article processing charges, APCs!) range from below US$100 to around US$500, depending on a variety of factors. Hence, publishing services which let everybody access our literature would undercut any subscription publisher if a competitive bidding process took place! (Note that some publishers charge their customers much more than their bare-bones publishing costs, for a variety of reasons.)

As everyone knows, the justification for subscription purchases is that the subscribed content can only be obtained from this one publisher, so there cannot be any bidding. The subscription business is essentially one of monopolies, obviously. This argument is about as superficial as it is vacuous. Institutions currently spend huge sums acquiring large collections of journals, only few of which are heavily used. From a single-article perspective, these collections provide a massive oversupply: institutions pay for access to many more articles than their faculty actually read. If our institutions were instead to focus on serving their faculty’s publishing rather than reading needs, the money would arguably be spent much more effectively.

For quite some time now, we have observed the development of new business models such as those of Ubiquity or Scholastica. These service providers allow their clients to switch services if they are not satisfied. Let’s say we, University of XYZ, find that Scholastica’s US$100/article service is the lowest responsible bidder. After a year or two we get so many complaints from our faculty about what a horrible service this is that we decide to have another round of bidding, in which we include a more extended range of services. Let’s say the US$500-per-article service of Ubiquity wins the bidding this time. University of XYZ can easily switch, without losing access to any of the published articles, simply because the articles remain under the control of University of XYZ. From one year to the next, the service provider switches and our faculty are much happier than before. University of XYZ can make a good case that it is getting better value for money now than it did with the nominally cheaper option, because it still went with the lowest responsible bidder. Such a situation would create a truly competitive service market (as long as anti-trust regulations remained in effect).

Conversely, does this technical possibility mean that public institutions that are still negotiating with individual subscription publishers without a competitive bidding process could now be sued?

Phrased differently, now that we no longer have to hand over our manuscripts to publishers for them to create a monopoly with our work, aren’t we legally required to make sure there can be a competition?

Phrased yet differently: every single subscription to scholarly journals can be seen as an anti-competitive act that prevents a new business model allowing for competitive bidding from emerging. Shouldn’t there be some legal pushback against this perpetuation of tax waste?

UPDATE – an analogy due to online questions:

Suppose University of XYZ needed all their windows cleaned. For some historical reason, faculty decided to all sign over their rights to access their windows to any company of their choosing, such that no other company could come and clean them. Afterwards, the university had to pay outrageous fees for the various cleaners chosen by faculty, because only they had the rights to clean the particular windows the faculty had given to them. You could only get Window X cleaned by Cleaner Y. This is analogous to how we currently publish scholarly works. Shouldn’t we instead keep the rights to our works and have ‘publishers’ compete for our business?

Posted on December 6, 2016 at 14:38 8 Comments
Nov13

Do flies in groups make individual choices?

In: own data • Tags: decision-making, Drosophila, photopreference

This is our first poster at this year’s SfN meeting in San Diego. It’s about decision-making in fruit flies. We find a probabilistic form of decision-making that suggests that without understanding the mechanisms behind this fundamental uncertainty, we will never fully understand decision-making.

Clicking on the image will let you download the PDF version of the poster. This is our abstract:

In every behavioral population paradigm where groups of animals are being exposed to forced-choice situations, there is the question whether or not the individual animals can be assumed to make their own choices. We approach this hypothesis by testing Drosophila fruit flies for their photopreference in a light/dark T-maze. Approximately 75% of a randomly chosen group of wild type flies decide to approach the bright arm of the T-Maze, while the remaining 25% walk into the dark tube. Taking these subgroups of flies and re-testing them revealed a similar 75-25 distribution in each subgroup.
In order to increase the number of choices each subgroup makes without losing too many flies in the process, we used the classic phototaxis experiment developed by Seymour Benzer in the 1960s. In this experiment, flies are exposed to a light source while confined in transparent tubes aligned with the direction of the light. Each round of the experiment consists of 5 consecutive choices where the animal can either stay or walk towards the light (positive phototaxis). At the end of a round, the original group is split into 6 subgroups according to their sequence of choices.
We discovered that while the test/re-test distributions were similar, there was a tendency for the extremely phototactic animals (positive and negative) to skew their distributions towards their respective extremes.
These results are consistent with observations in single animals, where individual choice probability was found to be itself distributed over a population of flies (Kain et al., 2012). To test for potentially confounding effects of general activity and walking speed, we tested individual flies after their phototaxis experiments in Buridan’s Paradigm, where flies walk between two opposing black stripes. We detected small differences in walking speed and general activity, suggesting a quantitative interaction between general and light-specific processes contributing to the performance scores in Benzer’s phototaxis experiment.
Kain JS, Stokes C, de Bivort BL (2012) Phototactic personality in fruit flies and its suppression by serotonin and white. Proc Natl Acad Sci U S A 109:19834–19839.
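
For illustration (this is not part of the poster; the numbers and the distribution below are made up purely for demonstration), here is a minimal Python sketch of why the re-test pattern is informative. If every fly flipped the same 75/25 coin on every trial, both the ‘light’ and the ‘dark’ subgroup would again split roughly 75/25 on re-test. If, instead, each fly carries its own stable photopreference (as suggested for individual flies by Kain et al., 2012), the subgroup formed by the first choice skews towards its respective extreme on re-test.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flies = 10_000

def retest_split(p_per_fly):
    """Split flies by their first light/dark choice, then report the
    fraction choosing light in each subgroup on a second test."""
    first = rng.random(n_flies) < p_per_fly    # True = chose light
    second = rng.random(n_flies) < p_per_fly
    return second[first].mean(), second[~first].mean()

# Scenario A: all flies are identical and flip the same 75/25 coin.
print(retest_split(np.full(n_flies, 0.75)))
# -> both subgroups come out near 0.75 again

# Scenario B: each fly has its own stable choice probability, drawn here
# from a Beta(3, 1) distribution (mean 0.75, purely illustrative).
print(retest_split(rng.beta(3.0, 1.0, n_flies)))
# -> the light-choosing subgroup skews above 0.75,
#    the dark-choosing subgroup below it
```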

Posted on November 13, 2016 at 17:46 Comments Off on Do flies in groups make individual choices?
Sep29

Practical roads to infrastructure reform

In: science politics • Tags: infrastructure, publishing, reforms, subscriptions

A recurrent topic among faculty and librarians interested in infrastructure reform is the question of whose turn it is to make the next move. Researchers rightly argue that they cannot submit their work exclusively to modern, open access journals, because that would risk their own and their employees’ jobs. Librarians, equally correctly, argue that they would cancel subscriptions if faculty did not complain about the ensuing access issues. And so the discussions usually go around in circles, never getting anywhere.

Both groups are correct, of course, and so it boils down to which of the two obstacles is easier to overcome.

As tenured faculty myself, I see only one way to solve the problem on our side: remove, or at least decrease, the pressure to publish in subscription journals. Many have realized this: there is DORA, the REF explicitly excludes impact factors from evaluation, and my colleagues and I have published a review showing that top-ranked journals publish the least reliable science. I would argue that we have taken all the steps we can currently and reasonably take. However, there is still a long and tedious road ahead of us, on which many important factors are out of our control. For one, the hypercompetition for positions and funding (due in large part to funds being allocated on a project rather than an ongoing basis, leading to an overproduction of early career scientists and cash-strapped labs) is largely beyond faculty control. Second, even if public statements eventually managed to devalue journal rank, the obvious risk-averse strategy in an atmosphere of hypercompetition is to assume that journal rank is still being used informally: a paper in a top journal can never hurt, while one in a ‘lesser’ journal might. Hence, as long as there are journals and hypercompetition, faculty who refuse to submit to subscription journals will continue to risk their own jobs, their employees’ jobs, or the lights going out in their labs.

In brief, the obstacles to faculty radically changing their submission strategy are a) human nature, b) government-induced hypercompetition, and c) the existence of journals.

While all three obstacles can (and perhaps should) be overcome in principle, there is probably a decent argument to be made that c) may be easier to master than either a) or b). Which brings us to the obstacles to eliminating journals by simply starving them of money. As all agree that paywalls have to go, subscriptions will have to run out at some point; the discussion is no longer about “if”, but about “when”. Since it appears unlikely, for the above reasons, that scholars will substantially change their ways within my lifetime, I would argue that right now is the time to let subscriptions run out as the logical next step. I have two reasons for this argument, one technical and one logistical.

The technical argument, spelled out in more detail elsewhere, is that we now have a whole slew of solutions at our disposal to create something jokingly referred to as a “legal Sci-Hub”. Much like the actual Sci-Hub, it would provide convenient but patchy access to the scholarly literature, only this version would be legal (with, as now, the actual Sci-Hub filling the patches not covered by the legal one). As the current subscription-based system is also patchy, but often inconvenient, there is little reason for faculty to complain if they instead get something similarly patchy but possibly more convenient. Moreover, as the change entails an upgrade that comes with massive savings, faculty need not even be involved in the decision, as hardly any of them know the technicalities involved anyway: faculty hardly know the publishers behind their journals, let alone what a link-resolver, DOAI or LOCKSS is. The only thing required is, of course, to inform them that a few upgrades are running behind the scenes and to ask them to pardon the dust for a while. In the same letter, faculty would also be informed that the upgrades come with savings large enough to fund a whole host of new services that such a modern infrastructure will enable.
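
To make the “legal Sci-Hub” idea slightly more concrete: its core is a resolver that, given a DOI, checks whether a legal open copy of the article exists somewhere (institutional repository, OA journal, preprint server) and sends the reader there. The following is only a minimal sketch of that lookup step, not any library’s actual implementation; it assumes the public Unpaywall REST API (the successor of the oaDOI/DOAI services mentioned above), and the e-mail address and DOI are placeholders.

```python
import requests

def find_open_copy(doi, email="librarian@university-xyz.example"):
    """Ask an open-access aggregator whether a legal, free copy of the
    article behind `doi` is available anywhere."""
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": email}, timeout=10)
    resp.raise_for_status()
    record = resp.json()
    location = record.get("best_oa_location")
    if location:
        return location.get("url_for_pdf") or location.get("url")
    return None  # no legal open copy found; fall back to interlibrary loan etc.

# usage (replace the placeholder with a real DOI):
# print(find_open_copy("10.1234/placeholder-doi"))
```

A link-resolver wired up this way simply tries the open copy first and only then falls back to whatever paid access or document delivery the institution still maintains.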

The logistical argument involves looking at the number of people who have to be convinced to act. If researchers are asked to act next, this entails convincing just under 8 million researchers at ~10,000 institutions to risk their jobs. How many infrastructure experts, in contrast, need to be convinced that now is the time to bring their institutional infrastructure from the 20th into the 21st century? A large fraction of subscriptions are by now bundled up in so-called “Big Deals”. These Big Deals, in turn, are often negotiated by regional, national or supra-national consortia. These consortia constitute existing collaborations between institutions to solve an infrastructure problem: paying for increasingly overpriced subscriptions. There are currently about 200 such consortia organized in the “International Coalition of Library Consortia”. Hence, one would have to convince the leading individuals at only a few hundred organizations that now is the time to shift their infrastructure from subscription-based solutions to modern ones. Clearly, this would not cover all subscriptions, but it would be so widespread that every institution on the planet would contribute in some way or another, and hence all institutions would at least receive notice of the upgrade. Starting with these consortial Big Deals should free up several billions (US$ or €) annually, which would easily cover implementing the more advanced components one would expect from a modern scholarly infrastructure during this transition period. Any individual researcher who would still like to read a subscription-based journal (who would submit to these journals to make them worth reading?) can still subscribe with their own funds, just as they can still write their manuscripts on typewriters or dictate them to their secretary, if they so choose. Subscriptions simply no longer belong to the canon of solutions that tax-funded public institutions can afford to subsidize.
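
For readers who want to see the arithmetic behind “several billions annually”, here is a back-of-envelope calculation with assumed round figures; the per-institution spend is my own rough guess for illustration, not a number reported by any consortium.

```python
institutions = 10_000         # research institutions worldwide, as above
avg_spend_per_year = 500_000  # assumed average annual subscription spend per institution (US$)

total = institutions * avg_spend_per_year
print(f"~US$ {total / 1e9:.0f} billion per year")  # ~US$ 5 billion
```

Even if the true average were only half of the assumed figure, the global total would still comfortably reach billions per year.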

Condensing these estimates to an (admittedly very simplified, but rather pointed) dichotomy, asking researchers to submit exclusively to OA journals entails convincing just under 8M individuals at ~10k institutions to risk their jobs, while asking librarians to let subscriptions run out entails asking maybe 1k individuals at ~200 organizations to do their job.

While I wouldn’t dare to imply that either of these two options is trivial or even easy, from my faculty perspective the second option seems to be the technically and logistically less difficult one.

This insight, of course, does not entail shifting responsibility for change away from me personally, or any of my colleagues, towards librarians. Librarians and faculty are fundamentally on the same side in this issue and share the same interest in a scholarly commons. As such, there is no way around symbiotic cooperation if we are to overcome the significant social hurdles set up by a self-reinforcing system and achieve a major shift in the overall research culture. We all need to change and adjust, and here I have laid out just a few of many reasons why libraries are in a unique position to now take the second step – after faculty have started to boycott publishers like Elsevier, created OA journals and submitted more and more of their manuscripts to them, and created preprint servers to which they are depositing a constantly increasing number of preprints. At this point, there is very little further action I can see faculty taking. Librarians, for their part, embraced the open access movement many years ago by creating repositories at almost every institution, a vital component of a modern infrastructure. Librarians have a centuries-long tradition of providing services for faculty. Now is the time for perhaps their most critical service yet: completing the modernization of our scholarly infrastructure, a necessary next step to halt the further commodification of scholarship and to keep working towards a scholarly commons.

Posted on September 29, 2016 at 11:03 11 Comments