bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


So many symptoms, only one disease: a public good in private hands

In: science politics • Tags: collective action, journal rank, publishing

Science has infected itself (voluntarily!) with a life-threatening parasite. It has given away its crown jewels, the scientific knowledge contained in the scholarly archives, to entities with orthogonal interests: corporate publishers whose fiduciary duty is not knowledge dissemination or scholarly communication, but profit maximization. After a 350-year incubation period, the parasite has taken over the communication centers and drained them of their energy, leading to a number of different symptoms, for which scientists and activists have come up with sometimes quite bizarre treatments:

  • In the recent #WikiGate debate, the question is whether the open encyclopedia Wikipedia should link to (“advertise”) paywalled scientific knowledge at academic publishers such as Elsevier. One argument goes that if Wikipedia articles lacked paywalled content and explicitly said so, pressure on publishers to open the scholarly archives would increase. To solve the issue, open access advocates are now asking Wikipedia editors, who recently received free access to Elsevier’s archives, to assist academic publishers in keeping the paywalled content locked away from the public by not including it in Wikipedia.
  • The Hague Declaration on ContentMining asks for “legal clarity” with regards to science being done on scientific content: access and re-use of scholarly material via software-based research methods is restricted and heavily regulated by academic publishers, leveraging their extensive copyrights over the archives. The Liber open access initiative is now lobbying EU politicians for a “research exception” in international copyright laws to allow unrestricted ContentMining.
  • In recent decades, the number of researchers has grown such that competition for publications in the few top-ranked journals has reached epic proportions. As a consequence, the amount of work going into each individual paper (measured by figure panels or by numbers of authors per article) has skyrocketed. The pace of dissemination for each project has thus been slowing down, not for any technical or scientific reason, but merely because of the career decisions of scientists. To counteract this trend, it has been suggested to follow the example of physicists and double scientists’ publishing workload: once to publish their results quickly in a readily accessible repository for scholarly communication, and once more, later, to lock the research behind a paywall in a little-read top-ranked journal for career advancement.
  • These coveted top-ranked journals also publish the least reliable science. Yet it is precisely the rare slots in these journals that eventually help a scientist secure a position as a PI (that is the whole idea behind all the extra work in the previous example). This entails that, for the last few decades, science has preferentially employed the scientists who produce the least reliable science. Perhaps not too surprisingly, we are now faced with a reproducibility crisis in science, with a concomitant exponential rise in retractions. Perhaps equally unsurprisingly, scientists reflexively sprang into action by starting research projects to first understand the size and scope of this symptom before treating it. So now there exist several reproducibility initiatives in various fields, in which scientists dedicate time, effort and research funds to find out whether immediate action is necessary, or whether corporate publishers can drain the public teat a little longer.
  • Long before the magnitude of the disease and the number and spread of its symptoms became public knowledge, scientists had already come up with two treatments for the symptom of lacking access to scientific knowledge: green and gold open access. Like the treatment for slowed-down scientific reporting, green open access increases researchers’ overhead by adding scholarly communication as a task on top of career advancement. Since it is quite obvious which of the two tasks a scientist with limited time will choose, green proponents are asking politicians and funders to mandate deposition in green repositories. The other option, the golden road to open access, has meanwhile been hijacked by publishers as a way to cut paywall costs from their budgets while maintaining per-article revenue at similar levels, with the potential to double their already obscene profit margins of around 40%. This model of open access is thus one of the few conceivable ways to make everything worse than it already is. Much to everybody’s chagrin, these two parallel approaches have also had the peculiar unintended consequence of splintering the reform movement into seemingly endless infighting. Consequently, the last decade has seen a pace of reform that makes plate tectonics look hurried.

I’ll leave it at these five randomly chosen examples; there are probably many more. While I understand and share the good intentions of all involved, and applaud and support their effort, dedication, patience and passion, I can’t help but feel utterly depressed and frustrated by how little we have accomplished. Not counting the endless stream of meetings, presentations and workshops where the same questions and ideas are rehashed ad nauseam, our solutions essentially comprise three components:

  1. asking politicians, funders and lately even Wikipedia editors to help us clean up the mess we ourselves have caused to begin with
  2. wasting time with unnecessary extra paperwork
  3. wasting time and money with unnecessary extra research

What is it that keeps us from being ‘radical’ in the best sense of the word? The Latin word ‘radix‘ means ‘root’: we have to tackle the common root of all these problems, namely that knowledge is a public good that belongs to the public, not to for-profit corporations. Archiving this knowledge and making it accessible has become so cheap that publishers are not merely unnecessary; on top of the pernicious symptoms described above, they also inflate costs that would otherwise amount to approx. US$200m world-wide per year into a whopping US$10b in annual subscription fees.

I’m not the only one, nor even the first, to propose taking back the public good from the corporations, along with the US$10b we spend annually to keep it locked away from the public. If we did that, we would only have to spend a tiny fraction (about 2%) of the annual costs we just saved to give the public good back to the public. The remaining US$9.8b would be a formidable annual budget for making sure we hire the scientists with the most reliable results.

This plan entails two initial actions. One is to cut subscriptions to regain access to the funds required to implement a modern scholarly infrastructure. The other is to use existing mechanisms (e.g. LOCKSS) to ensure the back-archives remain accessible to us indefinitely. As many have realized, this is a collective action problem. If properly organized, this will bring the back-archives back under our control and provide us with sufficient leverage and funds to negotiate the terms under which they can be made publicly accessible. Subsequently, paid for by the remaining subscription funds, the scholarly infrastructure will take care of all our scholarly communication needs: we have all the technology, it just needs to be implemented. After a short transition period, at least in the sciences, publications in top-ranked journals (to which then only individuals would subscribe, if anyone) will be about as irrelevant for promotion and funding as monographs are today.

This plan, if enacted, would save a lot of money, lives, time and effort, and would cure publicly funded science of a disease that threatens its very existence. I fear continued treatment of the symptoms will lead to the death of the patient. But which steps are required to make this treatment a reality? How can we orchestrate a significant nucleus of institutions to instantiate massive subscription cuts? How can we solve the collective action problem? These are the questions to which I do not have any good answers.

Posted on September 17, 2015 at 16:41 57 Comments

Evidence-resistant science leaders?

In: science politics • Tags: data, evidence, policy, politicians

Last week, I spent two days at a symposium entitled “Governance, Performance & Leadership of Research and Public Organizations”. The meeting gathered professionals from all walks of science and research: economists, psychologists, biologists, epidemiologists, engineers and jurists, as well as politicians, university presidents and other leaders of the most respected research organizations in Germany. It was organized by Isabell Welpe, an economist specializing, broadly speaking, in incentive systems. She managed to bring some major figures to this meeting, not only from Germany, but notably also John Ioannidis from the USA and Margit Osterloh from Switzerland. The German participants included former DFG president and now Leibniz president Matthias Kleiner (the DFG being the largest funder in Germany, and the Leibniz Association consisting of 89 non-university federal research institutes), the president of the German Council for Science and the Humanities, Manfred Prenzel, the Secretary General of the Max Planck Society, Ludwig Kronthaler, and the president of Munich’s Technical University, Wolfgang Herrmann, to mention only some of them. Essentially, every major research organization in Germany was represented by at least one of its leaders, supplemented with expertise from abroad.

All of these people shape the way science will be done in the future, whether at their own universities and institutions, in Germany, or around the world. They are decision-makers with the power to control the work and job situation of tens of thousands of current and future scientists. Hence, they ought to be the most solution-oriented, evidence-based individuals we can find. I was shocked to learn what an embarrassingly naive assumption that was.

In my defense, I was not alone in my incredulity, but maybe that only goes to show how insulated scientists are from political realities. As usual, there were gradations between individuals, but at the same time there was a discernible split into what could be termed the evidence-based camp (scientists and other professionals) and the ideology-based camp (the institutional leaders). With one exception, I won’t attribute any of the instances I recount to any particular individual, as we had better focus on solutions to the general obstructive attitude rather than on a debate about individuals’ qualifications.

On the scientific side, the meeting brought together a number of thought leaders detailing how different components of the scientific community perform. For instance, we learned that peer-review is quite capable of weeding out obviously weak research proposals, but that in establishing a ranking among the non-flawed proposals it rarely performs better than chance. We learned that gender and institution biases are rampant among reviewers, and that many rankings are devoid of any empirical basis. Essentially, neither peer-review nor metrics perform at the level we expect of them. It became clear that we need to find solutions to the lock-in effect, the Matthew effect and the performance paradox, and, to some extent, what some of those solutions may look like. Reassuringly, different people from different fields, using data from different disciplines, arrived at quite similar conclusions. The emerging picture was clear: we have quite a good empirical grasp of which approaches are working and, in particular, which are not. Importantly, as a community we have plenty of reasonable and realistic ideas for how to remedy the non-working components. However, whenever a particular piece of evidence was presented, one of the science leaders would get up and proclaim “In my experience, this does not happen”, or “I cannot see this bias”, or “I have overseen a good 600 grant reviews in my career and these reviews worked just fine”. Looking back, the all too common pattern of this meeting was one of scientists presenting data and evidence, only to be countered by a prominent ex-scientist with an evidence-free “I disagree”. It appeared quite obvious that we do not suffer from a lack of insight, but rather from a lack of implementation.

Perhaps the most egregious, and hence most illustrative, example was the behavior of the longest-serving university president in Germany, Wolfgang Herrmann, during the final panel discussion (see #gplr on Twitter for pictures and live comments). This will be the one exception to the rule of not naming individuals. Herrmann was the first to speak, and literally his first sentence emphasized that the most important objective for a university must be to get rid of mediocre, incompetent and ignorant staff. He obviously did not include himself in that group, but made clear that he knew how to tell who should be classified as such. When asked what advice he would give university presidents, he replied that they ought to rule autocratically, ideally using ‘participation’ as a means of appeasing the underlings (he mentioned students and faculty), as most faculty were unfit for democracy anyway. Throughout the panel, Herrmann continually commended the German Excellence Initiative, in particular for a ‘raised international visibility’ (whatever that means) and for ‘breaking up old structures’ (no idea). When I confronted him with the cold, hard data showing that the only parts of universities to show any advantage from the initiative were their administrations, and asked why that did not mean the initiative had in fact failed spectacularly, his reply was: “I don’t think I need to answer that question”. In essence, this reply in particular, and the repeated evidence-resistant attitude in general, dismissed the entire symposium as a futile exercise of the ‘reality-based community‘, while the big leaders were out there creating the reality for the underlings to evaluate, study and measure.

Such behaviors are not surprising when we hear them from politicians, but from (ex-)scientists? At the first incident or two, I still thought I had misheard or misunderstood; after all, there was little discernible reaction from the audience. Later I found out that I was not the only one who was shocked. After the conference, some attendees discussed several questions: Can years of leading a scientific institution really make you so completely impervious to evidence? Do such positions of power necessarily wipe out all scientific thinking, or wasn’t all that much of it there to begin with? Do we select for evidence-resistant science leaders, or is being or becoming evidence-resistant in some way a prerequisite for striving for such a position? What if these ex-scientists have always had this nonchalant attitude towards data? Should we scrutinize their old work more closely for questionable research practices?

While for me personally such behavior would clearly and unambiguously disqualify an individual from any leading position, relieving these individuals of their responsibilities is probably not the best solution: judging from the meeting last week, there are simply too many of them. Instead, it emerged from an informal discussion after the end of the symposium that a more promising approach may be a different meeting format: one where the leaders aren’t propped up for target practice, but included in a cooperative format in which admitting that some things are in need of improvement does not lead to any loss of face. Clearly, the evidence and the data need to inform policy. If decision-makers keep ignoring the outcomes of empirical research on the way we do science, we might as well drop all efforts to collect the evidence.

Apparently, this was the first such conference on a national level in Germany. If we can’t find a way for the data presented there to have a tangible consequence on science policy, it may well have been the last. Is this a phenomenon people observe in other countries as well, and if so, how are they trying to solve it?

Posted on July 20, 2015 at 21:50 17 Comments

Whither now, Open Access?

In: science politics • Tags: infrastructure, open access

The recently discussed scenario of universal gold open access, brought about by simply switching libraries’ subscription funds to paying author processing charges instead, seemed like a ghoulish nightmare: one of the few scenarios worse than the desolate state we call the status quo. The latest news, however, seems to indicate that the corporate publishers are planning to shift the situation towards a reality even worse than that nightmare. Not only are publishers, as predicted, increasing their profits by plundering the public coffers to an even larger extent (which would be bad enough by itself), they are now also attempting to take over the institutional repositories that have grown over the last decade. If successful, this would undo much of the emancipation from the publisher oligopoly that we have achieved. This move can only be intended to ensure that our crown jewels stay with the publishers, rather than where they belong: in our institutions. Apparently, some libraries are all too eager to give up their primary raison d’être: to archive the works of their faculty and make them accessible.

Publisher behavior over the last decade has been nothing short of a huge disappointment at best and an outright insult at worst. I cannot fathom a single reason why we should let corporate publishers continue to parasitize our labor. If even the supposedly good guys can be seen to be acting against our best interests, what are we left with? How can we ever entrust our most valuable assets to organizations that have proven time and again that they will abuse our trust for profit? Why is there still a single scientist left with the opinion that “the current publishing model works well”, let alone a plurality?

These recent developments re-emphasize that none of our current approaches to the access problem (gold, green or hybrid) is sustainable by itself. It is in our own best interest (and hence that of the tax-payers who fund us) to put publishers out of business for good. If we want our institutions, and hence ourselves, to regain and retain control of our own works, be it the code we develop, the data we collect or the text summaries we write, then we need a new approach: cutting subscriptions on a massive scale in order to free the funds to implement a modern scholarly infrastructure. This infrastructure would not only solve the access problem that most people care so much about, but would simultaneously ameliorate the counter-productive incentives currently in place and help resolve the replication crisis.

I do not think it is reasonable to try to solve the access problem at the expense of the numerous other, potentially more pernicious, shortcomings of our current infrastructure, even though there is a lot of momentum on the open access front these days. Why not take this momentum and use it to rationally transform the way we do science, deploying all the modern technology at our disposal, with the added benefit of also solving the access problem along the way? The result of blindly, frantically doctoring one single symptom while ignoring the still-festering disease is all too likely the death of the patient.

tl;dr: Cut all subscriptions now!

Posted on June 23, 2015 at 12:52 17 Comments

What happens to publishers that don’t maximize their profit?

In: science politics • Tags: open access, publishing

Lately, there has been some public dreaming about how one could simply switch to open access publishing by converting subscription funds into author processing charges (APCs): we’d have universal open access, and the whole world would rejoice. Given that current average APCs have been found to be somewhat lower than current subscription costs per article (approx. US$3k vs. US$5k), such a switch would, at first, have not one but two benefits: reduced overall publishing costs to the taxpayer/institution, and full access to the scholarly literature for everyone. Who could possibly complain about that? Clearly, such a switch would be a win-win situation, at least in the short term.

However, what would happen in the mid to long term? As nobody can foresee the future with any degree of accuracy, one way of projecting future developments is to look at past ones. The intent of the switch is to use library funds to cover the APCs for all published articles. We have been in this situation before, and we know what happens when publishers get to negotiate prices with our librarians: hyperinflation.

Given this publisher track record, I think it is quite reasonable to expect that, in the hypothetical future scenario of librarians negotiating APCs with publishers, the publisher-librarian partnership would once again end up lopsided in the publishers’ favor.

I’m not an economist, so I’d be delighted if there were an economist among the three people who read this blog (hi mom!) who might be able to answer the questions I have.

The major players in academic publishing are almost exclusively large international corporations: Elsevier, Springer, Wiley, Taylor and Francis, etc. As I understand it, it is their fiduciary duty to maximize value for their shareholders, i.e., profit. So while the currently paid APCs of about US$3k per article seem comparatively cheap (compared to the current US$5k per subscription article), publishers would not be offering them if doing so entailed a drop in their profit margins, which currently are on the order of 40%. As speculated before, a large component of current publisher revenue (about US$10bn annually) appears to be spent on making sure nobody actually reads the articles we write, i.e., on paywalls. This probably explains why the legacy subscription publishers today, despite receiving all their raw material for free and getting their quality control (peer-review) done for free as well, still post profit margins of only just under 50%. Given that many non-profit open access organizations report actual publishing costs of under US$100 per article, and given that the main difference between these journals and subscription journals is the paywall and not much else, it is hard to imagine what other than paywall infrastructure could cost that much. By the way, precisely because the actual publishing process is so cheap, the majority of all open access journals do not even bother to charge any APCs at all. So there is something beyond profits that makes subscription access so expensive, and any OA scenario would make these costs disappear.

So let’s take the quoted US$3k as a ballpark average for future APCs on a world-wide scale. That would mean institutional costs would drop from the current US$10bn to US$6bn annually, world-wide. Let’s also assume a generous US$300 of actual publishing costs per article, which is considerably more than current costs at arXiv (US$9) or SciELO (US$70-200), or than current median APCs (US$0). If this switch happened unopposed, publishers would have increased their profit margin from ~40% to around 90% and saved the taxpayer a pretty penny. So publishers, scientists and the public should all be happy, shouldn’t they?
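
To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The figures (2 million articles per year, US$5k revenue per subscription article, US$3k average APC, US$300 actual cost) are the ballpark numbers quoted in this post, not precise data.

```python
# Back-of-the-envelope comparison of subscription vs. universal gold OA,
# using the ballpark figures quoted in this post (not precise data).

ARTICLES_PER_YEAR = 2_000_000             # approximate world-wide output
SUBSCRIPTION_REVENUE_PER_ARTICLE = 5_000  # US$, current average
APC_PER_ARTICLE = 3_000                   # US$, quoted average APC
REAL_COST_PER_ARTICLE = 300               # US$, generous estimate of actual cost

def margin(revenue: float, cost: float) -> float:
    """Profit margin as a fraction of revenue."""
    return (revenue - cost) / revenue

subscription_market = ARTICLES_PER_YEAR * SUBSCRIPTION_REVENUE_PER_ARTICLE
apc_market = ARTICLES_PER_YEAR * APC_PER_ARTICLE

print(f"Subscription market: US${subscription_market / 1e9:.0f}bn per year")
print(f"Gold-OA market:      US${apc_market / 1e9:.0f}bn per year")
print(f"Margin at US$3k APC: {margin(APC_PER_ARTICLE, REAL_COST_PER_ARTICLE):.0%}")
```

Run as-is, this prints US$10bn, US$6bn and 90%: institutions save money while publisher margins rise from roughly 40% to roughly 90%.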

From the perspective of a publisher, however, this scenario also entails forgoing around US$4bn in potential profits. After all, today’s figures show that the market is worth US$10bn even when nobody but a few libraries has access to the scholarly literature. In the future scenario, everyone has access. Undoubtedly, this will be hailed as great progress by everyone; after all, it is being used as the major argument for performing this switch right now. Obviously, an increase in profit margins from 40% to 90% is seen as a small price to pay for open access, isn’t it? But wouldn’t it then be the fiduciary duty of corporate publishers to regain the lost US$4bn? Why should they receive less money for a better service? Obviously, neither their customers (we scientists and our librarians) nor the public minded an increase in profit margins from 40% to 90%. Why should they oppose an increase from 90% to 95%, or to 99.9%? After all, if a lesser service (subscription) was able to extract US$10bn, shouldn’t a better service (open access) be able to extract US$12bn or 15bn from the public purse?

One might argue that this forecast is absurd: the journals compete with each other for authors! This argument forgets that we are not free to choose where we publish: only publications in high-ranking journals will secure your job in science. These journals are the most selective of all; in extreme cases, they publish only 8% of all submitted articles. This is an expensive practice, as even the rejected articles generate costs. These journals are on record saying they would have to charge around US$50,000 per article in APCs to maintain current profits. It is hence not surprising that, among open access journals too, APCs correlate with standing in the rankings, and hence with selectivity.

It is reasonable to assume that authors in the future scenario will do what they are doing now: compete not for the most non-selective journals (i.e., the cheapest), but for the most selective ones (i.e., the most expensive). Why should that change only because now everybody is free to read the articles? The new publishing model would even exacerbate this pernicious tendency, rather than mitigate it. After all, it is already (wrongly) perceived that the selective journals publish the best science. If APCs become predictors of selectivity, because selectivity is expensive, nobody will want to publish in a journal with low or no APCs, as this will carry the stigma of not being able to get published in the expensive, selective journals.

This, to me as a non-economist, seems to mirror the dynamics of any other luxury market: the Tata is no competition for the Rolls-Royce, not even the potential competition from Lamborghini brings down the price of a Ferrari to that of a Tata, nor does Moët et Chandon bring down the price of Dom Pérignon. On the contrary, in a world where only Rolls-Royce and Dom Pérignon count, publications in journals at the Tata or even the Moët et Chandon level will simply be ignored. Moreover, if libraries keep paying the APCs, the ones who so desperately want the Rolls-Royce don’t even have to foot the bill. Doesn’t this mean that any publisher who does not shoot for at least US$5k in average APCs (better: more) fails to fulfill their fiduciary duty in not one but two ways: not only do they lose out on potential profit due to their low APCs, they also lose market share and prestige? Thus, in this new scenario, if anything, the incentives for price hikes across the board are even stronger than they are today. Isn’t this scenario a perfect storm for runaway hyperinflation? Do unregulated markets without a luxury segment even exist?

One might then fall back on the argument that at least Fiat will compete with Peugeot on APCs, but that forgets that a physicist cannot publish their work in a biology journal. Then one might argue that mega-journals publish all research; but given the constant consolidation processes in unregulated markets (alive and well in the publishing market too, as was just reported), there quickly won’t be many of these around any more, such that they are, again, free to increase prices. No matter how I turn the arguments around, I only see incentives for price hikes that would render the new system just as unsustainable as the current one, only worse: failure to pay leads to a failure to make your discovery public, and no #icanhazpdf can mitigate that. Again, as before, this kind of scenario can only be worse than what we have now.

tl;dr: The incentives for price hikes in a universal gold open access economy will be even stronger than they are today.

Posted on June 19, 2015 at 14:19 38 Comments

Are more retractions due to more scrutiny?

In: science politics • Tags: fraud, impact factor, journal rank, methodology, retractions

In the last “Science Weekly” podcast from the Guardian, the topic was retractions. At about 20:29 into the episode, Hannah Devlin asked whether the reason ‘top’ journals retract more articles may be the increased scrutiny they receive.

The underlying assumption is very reasonable: many more eyes see each paper in such journals, and the motivation to shoot down high-profile papers might also be higher. However, the question has actually been addressed in the scientific literature, and the data don’t seem to support the assumption. For one, this figure shows that plenty of retractions come from lower-ranking journals, but that journals which retract many papers are few and far between. In absolute numbers, there are many more retractions in low-ranking journals than in high-ranking ones; among the high-ranking journals, however, a much larger proportion retracts many papers. Because this analysis only compares absolute numbers, the data are not conclusive, but they are suggestive that scrutiny is not really all that much higher for the ‘top’ journals than anywhere else.

Another reason why scrutiny might be assumed to be higher at ‘top’ journals is their higher readership, leading to more potential for error detection. However, the same reasoning applies to citations, not only to retractions. Moreover, citing a ‘top’ paper is not only easier than forcing a retraction, it also benefits your own research by elevating the perceived importance of your field. Thus, if readership had any such influence, one would expect journal rank to correlate better with citations than with retractions. The opposite is the case: the coefficient of determination for citations with journal rank currently lies around 0.2, while that for retractions and journal rank lies at just under 0.8 (Fig. 3 and Fig. 1D, respectively, here). So while there may be a small effect of scrutiny/motivation, the evidence suggests that it is a relatively minor effect, if it exists at all.
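
For readers who want to check such correlations themselves, here is a minimal sketch of how a coefficient of determination is computed; the arrays are invented placeholders, not the data behind the figures cited above.

```python
# Minimal sketch: coefficient of determination (R^2) between journal rank
# and retraction index. The arrays are invented placeholders, NOT the data
# behind the cited figures (which report ~0.8 for retractions vs. journal
# rank and ~0.2 for citations vs. journal rank).
import numpy as np

impact_factor = np.array([2.1, 4.5, 9.8, 15.2, 31.0])   # hypothetical ranks
retraction_index = np.array([0.2, 0.5, 1.1, 1.9, 3.8])  # hypothetical rates

r = np.corrcoef(impact_factor, retraction_index)[0, 1]  # Pearson's r
print(f"R^2 = {r ** 2:.2f}")
```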

Conversely, there is quite solid evidence that the methodology in ‘top’ journals is no better than in other journals when analyzing non-retracted articles. In fact, there are studies showing that the methodology is actually worse in ‘top’ journals, while we have not found a single study suggesting that methodology improves with journal rank. Our article reviews these studies. Importantly, these studies all concern non-retracted papers, i.e., the other 99.95% of the literature.

In conclusion, the evidence suggests scrutiny is likely a negligible factor in the correlation of journal rank and retractions, while increased incidence of fraud and lower methodological standards can be shown.

I know that Ivan Oransky, who was a guest on the show, is aware of these data, so it may have been a bit unfortunate that Philip Campbell (editor-in-chief at Nature) got to answer this question before Ivan had a chance to lay the data out. In fact, Nature is also aware of these data and has twice refused to publish them. The first time was when we submitted our manuscript, with the justification that Nature had already published articles stating that Nature publishes the worst science. The second time was when Corie Lok interviewed Jon Tennant and he told her about the data, but Corie failed to include this part of the interview. There is thus a record of Nature, very understandably, avoiding admitting its failure to select for solid science. Philip Campbell’s answer to the question in the podcast may have been at least the third time.

While Philip Campbell did admit that they don’t do enough fraud detection (it is too rare), the issue of reliability in science goes far beyond fraud, so successfully steering the question in that direction served his company quite well. Clearly, he’s a clever guy and did not come unprepared.

Finally, one may ask: why do the ‘top’ journals publish unreliable science?

Probably the most important factor is that they attract “too good to be true” results, but apply only “peer-review light”: rejection rates drop dramatically from 92% to a mere 60% once your manuscript makes it past the editors; in other words, your acceptance chances rise from 8% to 40%, a 5-fold increase (Noah Gray and Henry Gee, pers. comm.). Why is that so? First, the reviewers know the editor wants to publish the paper. Second, they have an automatic conflict of interest: a Nature paper in their field increases the visibility of their field, they may even be cited in the paper, or plan to cite it in their upcoming grant application.

On average, this entire model is a recipe for disaster, and more policing won’t fix it. By using it, we have been setting ourselves up for the exponential rise in retractions seen in Fig. 1a of our paper.

So, in the probably not too unlikely case that the topic of unreliable science comes up again, anyone can now cite the actual, peer-reviewed data we have at hand, so that editors-in-chief may have a harder time derailing the discussion and obfuscating the issues in the future.

tl;dr: The data suggest a combination of three factors leading to more retractions in ‘top’ journals: 1. worse methodological quality; 2. higher incidence of fraud; 3. peer-review light. One would intuitively expect increased readership/scrutiny to play some role, but there is currently no evidence for it, and some circumstantial evidence against it.

Posted on June 18, 2015 at 14:38 6 Comments

What goes into making a scientific manuscript public?

In: science politics • Tags: publishing

Currently, our libraries are paying about US$5,000 per peer-reviewed subscription article. What is more difficult to find out is where all that money goes: which steps are required to make an accepted manuscript public? Because of their high-throughput (about 1,200 journals with a total of about half a million published articles), low-cost, open access publishing model, I contacted SciELO and asked how they achieve such low costs: figures below US$100 per article, a fraction of what commercial publishers charge. Abel Packer, one of the founders of SciELO, was so kind as to answer all my questions.

SciELO receives most of its articles from the participating journals in JATS XML and PDF. It publishes that version online, makes sure it is indexed in the relevant places (PubMed, Web of Science, etc.), and archives it for long-term accessibility. These services cost about US$67 per article (covered by the participating governments, not the authors). Other digital services, such as DOIs, plagiarism checks, altmetrics, etc., incur another US$4. So bare-necessities publishing costs just over US$70 per article at SciELO.

However, this comparison is not quite fair, as few publishers receive their manuscripts in XML. For journals without an authoring environment such as PeerJ‘s “Paper Now”, in which you write your paper and submit it in the right format, there will be costs for the editors who handle manuscript submissions and peer-review, as well as for generating the various formats (XML, PDF, ePub, etc., including proofs going back and forth between authors and staff) from the submitted files (LaTeX, Word, etc.). At SciELO and its participating journals, these services, if chosen by the authors, average around another US$130. Taken together, the complete package from, say, MS Word submission to published article can thus be had for a grand total of just over US$200. If and once authors use modern authoring systems, writing collaboratively on a manuscript that is formatted in, e.g., native XML, publication costs drop below US$80. On the other hand, if SciELO authors opt for English language services, submission services, an enhanced PDF version, a social media campaign and/or data management services (all offered by SciELO for a fee), a cozy all-inclusive package will cost them almost US$600, still a far cry from the US$5k commercial publishers charge for their subscription articles.
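
As a sanity check on these figures, the itemized costs reported by Abel Packer add up in a few lines; the grouping labels are mine, the numbers are the ones quoted above (in US$).

```python
# Itemized per-article costs at SciELO, as quoted in this post (US$);
# the grouping labels are mine.
scielo_costs = {
    "online publication, indexing, long-term archiving": 67,
    "DOI, plagiarism check, altmetrics, etc.": 4,
    "submission handling, peer-review admin, format conversion": 130,
}

bare_necessities = sum(list(scielo_costs.values())[:2])
word_to_published = sum(scielo_costs.values())

print(f"Bare-necessities publishing:  ~US${bare_necessities} per article")   # ~71
print(f"MS-Word-to-published package: ~US${word_to_published} per article")  # ~201
```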

If even the most comfortable publishing option, with all the bells and whistles, can be had for just under US$600, why do current publishers succeed in persuading authors and institutions to pay author processing charges (APCs) averaging around €2,000? There is an easy answer: currently, each subscription article generates about US$5k in revenue for the publisher, so publishers will strive to approach this figure with their APCs to fend off a drop in their revenues. If that is the case, one might ask why the average figures are not closer to 5k. One reason may indeed be competition from new publishers offering APCs dramatically below 5k. However, I think there may also be an additional reason: the numbers above appear to corroborate a conclusion from last July, that subscription paywalls may incur a cost in the neighborhood of US$3,000 per article. Subtracting these 3k in paywall costs from a per-article revenue target of 5k leads to approximately the average APC which these publishers were able to charge the institutions studied in the Schimmer et al. white paper. In such a scenario, publishers would keep their per-article profit of just under 2k roughly constant.

This means, of course, that the only thing the proposed switch from subscriptions to APCs would accomplish is to increase the profit margins of corporate publishers from currently just shy of 40% to about 90%: any publisher’s wet dream. As I’ve outlined before, this is probably the only way to make the abysmal status quo even worse, as it wouldn’t fix any of our other problems besides access, and it would exclude the scholarly poor from publishing in the venues that secure a job. Unregulated, universal gold open access has to be avoided by any means necessary.

Posted on June 11, 2015 at 14:34 13 Comments

Is this supposed to be the best Elsevier can muster?

In: science politics • Tags: copyright, Elsevier, publishing

Until today, I was quite proud of myself for not caving in to SIWOTI syndrome like Mike Taylor did. And then I read his post and caved in as well.

What gets us so riled up is Elsevier’s latest in a long list of demonstrations of what they think of the intellectual capacities of their customers. Precisely because it is only one in a long row, I initially didn’t feel like commenting. However, there were so many points in the article worth rebutting, and Mike only selected a few for his comment, that I felt compelled to pick some of my own for dissection.

This combination of perspectives has led me, I believe, to a deeper understanding of the importance and limitations of copyright law

Great! An expert on copyright law and a creative mind, Shirley a worthy opponent for a mere scientist who understands next to nothing of copyright. As we say in Germany: many foes, much honor (I know, don’t ask!).

The STM journal publishing sector is constantly adjusting to find the right balance between researcher needs and the journal business model, as refracted through copyright.

I think that’s an accurate assessment, the right balance of course being to continuously expand copyright so as to rob scientist authors of any rights to their articles, and to allow publishers to charge authors every time they use their own figures in class: the researcher needs to use their figures in teaching, and the journal business model needs to beat drug smuggling, arms trade and human trafficking in profit margins. From this perspective, the right balance is indeed to charge institutions for every additional use of material they have already paid for twice. However, while maximizing profits may be the fiduciary duty of the corporate publisher, neither science nor society cares about the existence of corporations. On the contrary, openly parasitic corporations will be fought, so it’s difficult to see how alluding to the parasitic business model of one’s own business (rather than, e.g., trying to hide it) is in the author’s interest. One probably has to have the creative mind of a poet and the honed analytic skills of a lawyer to see the advantage here.

Authors are looking for visibility and want to share their results quickly with their colleagues and others in their institutions or communities.

That’s also correct: it’s the reason the Elsevier boycott and the open access movement exist. It is not clear why the author deploys an argument against copyright in particular, and against the entire status quo of academic publishing in general, in an article purporting to support copyright. Again, I’m probably neither creative enough nor sufficiently versed in the law to understand this author’s apparent own goals.

Most journals have a long tradition of supporting pre-print posting and enabling “scholarly sharing” by authors.

I’m sure some journals have that tradition, but Elsevier’s journals are not among them. On digital ‘paper’, of course, Elsevier supports pre-prints and ‘green’ archiving, but if that isn’t just lip service, why pay two US lawmakers US$40,000 to make precisely this “scholarly sharing” (note the scare quotes!) illegal? Or is the author insinuating that Elsevier’s legal counsel had no role in drafting the RWA (the Research Works Act)?

In fact, last week Elsevier released its own updated sharing policies

Wait a minute: Elsevier has released a set of policies specifying how scientists are allowed to talk about their research? How on earth is this supposed to convince anyone that copyright is good for science and scientists, if a scientist first has to seek the approval of a commercial publisher before talking about their research? I went and read these policies; they essentially say: “break our copyright by tweeting your own article and we’ll sue you”. I guess I really lack the creativity and expertise to understand how this is in support of copyright.

I believe that copyright is fundamental to creativity and innovation because without it the mechanisms for funding and financial support for authors and publishing become overly dependent on societal largesse.

Given my lack of poetry and legal competence, I really have to think hard now. He’s writing that we as scientist authors shouldn’t be dependent on “societal largesse”. Science is, for the most part, a public enterprise. This means my salary (without any copyright) is paid by “societal largesse”. 80% of Elsevier’s subscription income comes from public institutions, so this author suckles 80% of his income from the teat of “societal largesse”. So is he arguing that copyright has kept his own salary from becoming 100% societal? Or is he arguing that I should lose all my salary? If depending on “societal largesse” really is to be avoided, why doesn’t he give 80% of his salary (which is probably more money than 100% of mine) back to the taxpayer, perhaps by contributing to the open access funds of a public institution of his choice? Going after the salaries of your customers without displaying any willingness to give up your own dependence on “societal largesse” must be a strategy that requires a lot of creativity and legal competence, for from my unimaginative and incompetent perspective, that strategy just backfired mightily.

The alternatives to a copyright-based market for published works and other creative works are based on near-medieval concepts of patronage, government subsidy,

“Societal largesse” and “government subsidy” are what looms without copyright? I thought the only thing keeping Elsevier alive was government subsidies enabled by societal largesse. Last I looked, open access publishing would cost something like US$100-200 per article if we implemented it right. Elsevier, on average, charges about US$5,000 per subscription article. So, on average, for each subscription article, Elsevier receives at least US$4,800 in government subsidies (96% of the total payment), solely to keep this corporation artificially alive. If the author were so keen on getting rid of government subsidies, why is he asking his customers to support a business practice that only exists because 96% of its income is government subsidy? Indeed, I’m neither intelligent nor competent enough to understand this defense of copyright. To me, this article reads as an attack on the entire status quo.

I’m running out of time and, honestly, whatever came next would hardly change the impression I have gotten from reading this far. Clearly, Elsevier thinks that their scientist customers are know-nothing hobos with an insufficient number of neurons for a synapse. Either they are correct, since I for the life of me cannot find anything in support of copyright in this article, or their big shots suffer from a severe case of the Dunning-Kruger effect.

Posted on May 9, 2015 at 13:21 20 Comments

A study justifying waste of tax-funds?

In: science politics • Tags: open access, publishing

The Max Planck Society, an Open Access (OA) pioneer and a founding member and sponsor of the OA journal eLife, has just released a white paper (PDF) analyzing open access costs in various countries and institutions and comparing them to subscription costs. Such studies are fundamental prerequisites for evidence-based policies and informed decisions on how to proceed with bitterly needed reforms. The authors confirm the most often cited ballpark figures: a world-wide annual academic publishing volume of around US$10b, averaging around US$5,000 for each of the approximately 2 million papers published every year. This confirmation from sources other than those usually cited is very valuable and solidifies our knowledge of the kind of funds available to the system.

The authors detail that various institutions in various countries currently spend around US$2,000-3,000 per article on author processing charges (APCs) for publishing in open access journals, significantly less than current subscription costs. They conclude from these data that a conversion from the subscription model to the author-pays model would be at least cost-neutral (if not a significant cost-saver) and would keep the publishing industry alive.

I find these statements quite startling for a number of reasons:

  1. Over 15 years ago, the US government (via the NIH) helped Brazil develop an incredibly successful publishing model, SciELO. It has since spread, with many other countries all over the globe joining. In its now roughly 900 journals, SciELO publishes peer-reviewed papers, fully open access, at an average cost of US$90 per article. Recently, these figures were corroborated by numbers from the NIH’s open access repository PubMedCentral, where such costs come to around US$50 per article. Thus, publishing fully open access, with all the features known from commercial publishers, clocks in at below US$100 per article. This we already knew before this study. Why was a study needed to show that we can also get such universal open access for up to 100 times the price of PMC/SciELO? Is the survival of the publishing industry really worth up to US$9.9b in subsidies every year? What value do publishers add that could possibly be worth an annual bill of 9.9 billion in virtually any currency?
  2. The authors emphasize that “Whether calculated as mean or median, however, the average APC index will never be dictated by the high-end values.” This may be financially relevant for the tax-payer in the short term, but in the long term the tax-payer will also be interested in whether the science they fund is reliable: is publicly funded science good bang for the buck? If we were to convert to this ‘gold’ OA model and leave everything else virtually unchanged, the situation for the reliability, and hence credibility, of publicly funded science would be even worse than it is today. As outlined in detail elsewhere, high-ranking journals argue that their APCs would come to lie around US$50,000 per article. With currently in excess of 30,000 journals, this may indeed not change the average cost to the taxpayer, but it does mean that, in addition to knowing the professional editor and, if needed, faking your data, you would then also have to be rich (or work at a rich institution) in order to publish in a venue that helps secure a job in science. Given that these journals publish the least reliable science, this is the one scenario I can imagine that would be even worse for science than the status quo.
  3. The authors also do not mention that the large majority of open access journals (including the Max Planck Society’s very own eLife) do not charge any APCs at all (an issue already raised by Peter Suber). It is not clear from the study whether articles published in these journals were counted at all. If not, the study overestimates the actual costs by a significant factor.

Thus, as I see it, this is a study that at best serves no real purpose and at worst constitutes a disservice to science by suggesting that such a transition would be desirable, when it clearly is not. I have asked one of the co-authors of the study, Kai Geschuhn, to comment on my criticisms. You can find her reply below; I’ll leave it uncommented:

Like it or not, offsetting subscription costs against publication fees still isn’t the common understanding of how to finance open access. With this study, we didn’t want to raise the question whether scientific publishing should cost US$50, US$100 or US$5,000 per article. The aim rather was to show that the transition to open access is feasible already. The figures presented in the paper relate current subscription costs to scientific article outputs on different levels (global, national, and institutional) in order to show that there is enough money in the system to finance all of these articles. While this is obvious to you, it is often not to libraries which usually expect the open access transition to become even more expensive. This misconception is mostly due to the assumption, that the total number of publications from an institution or a country would have to be financed. We suggest calculating with articles from corresponding authors only, which usually leads to a reduction of up to 50% of the total amount.
After ten years of debate, we finally need to agree upon a realizable first step. We believe that offsetting budgets actually is key to this so we have to start the calculation.

Posted on April 29, 2015 at 17:14 17 Comments

What should a modern scientific infrastructure look like?

In: science politics • Tags: infrastructure, open science, peer-review, publishing

For ages I have been planning to collect some of the main aspects I would like to see improved in an upgrade to the disaster we so euphemistically call an academic publishing system. In this post I’ll try to briefly sketch some of the main issues from several different perspectives.

As a reader:

I’d like to get a newspaper each morning that tells me about the latest developments, both in terms of general science news (aka gossip) and inside my scientific fields of interest. For the past 5+ years, my paper.li has been doing a pretty decent job at collecting the gossip, but for the primary literature relevant to my field, such a technology is sorely missing. I’d like to know which papers my colleagues are reading, citing and recommending the most. Such a service would also learn from what I click on, what I recommend and what I cite, to assist me in my choices. Some of these aspects are starting to be addressed by companies such as F1000 or Google Scholar, but there is no comprehensive service that covers all the literature with all the bells and whistles in a single place. We have started to address this by developing an open source RSS reader (a feedly clone) with a plug-in architecture to allow for all the different features, but development has stalled there for a while now. So far, the alpha version can sort and filter feeds according to certain keywords and display a page with the most-tweeted links, so it’s already better than feedly in that respect, but it is still alpha software. All of the functionalities I want have already been developed somewhere; we would only need to leverage them for the scientific literature.
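
To illustrate how little technology the basic filtering part of such a reader requires, here is a toy sketch using the feedparser library; the feed URLs and keywords are invented examples, not our actual alpha software.

```python
# Toy sketch of keyword filtering over journal RSS feeds, using the
# feedparser library. Feed URLs and keywords are invented examples;
# this is not the actual reader described above.
import feedparser

FEEDS = [
    "https://example.org/journal-a/rss",  # placeholder feed URLs
    "https://example.org/journal-b/rss",
]
KEYWORDS = {"drosophila", "operant", "dopamine"}

def relevant_entries(feed_urls, keywords):
    """Yield feed entries whose title or summary mentions any keyword."""
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(kw in text for kw in keywords):
                yield entry

for entry in relevant_entries(FEEDS, KEYWORDS):
    print(entry.title, "->", entry.link)
```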

In such a learning service, it would also matter less whether work had been traditionally peer-reviewed or not: I could simply adjust, for each area, whether I want to see only peer-reviewed research, or whether publications are close enough to my own work that I want to see them before peer-review (I might want to review them myself). In this case, peer-review is as important as I, the reader, want to make it. Additional layers of selection and filtering would further diminish the role of traditional peer-review: for instance, I could select fields in which I only want to see recommended literature, or cited literature, or only reviews rather than primary research. And so forth; there would be many layers of filtering and sorting which I could use flexibly, so that only relevant research is served for breakfast.

I admit it, I’m a fan of Lens. It is an excellent example of how scientific content should be displayed on a screen. With a modern infrastructure, we would get to choose the way we would like to read; Lens would not be the only option besides emulating paper. Especially when thorough reading and critical thinking are required, such as during the review of manuscripts or grant proposals, ease of reading and of navigating the document is key to an efficient review process. In the environment we should already have today, reviewers would be able to pick the way of thoroughly fine-combing a document that is most efficient for them.

We would also be able to click on “experiments were performed as previously described” and then directly read the exact descriptions of how these experiments were done, because we would finally have implemented a technology from 1968: hyperlinks. Fully implementing hyperlinks would also make it possible to annotate the literature: such annotations, placed while reading, can later be used as anchors for citations. Obviously, we would use a citation typology in order to make the kind of citation we intend (e.g., affirmative or dismissive) machine-readable, as sketched below.
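
The Citation Typing Ontology (CiTO) already provides a vocabulary for exactly this kind of machine-readable citation; a typed citation attached to such an annotation might look roughly like the following sketch (all identifiers are invented).

```python
# Rough sketch of a machine-readable, typed citation using the SPAR/CiTO
# vocabulary (which exists); all identifiers below are invented examples.
typed_citation = {
    "@context": {"cito": "http://purl.org/spar/cito/"},
    "@id": "https://example.org/articles/1234#annotation-7",
    # "experiments were performed as previously described" becomes:
    "cito:usesMethodIn": "https://doi.org/10.xxxx/earlier-methods-paper",
    # a dismissive citation becomes:
    "cito:disputes": "https://doi.org/10.xxxx/contested-claim",
}
```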

Of course, I would also be able to double-click on any figure to have a look at other aspects of the data, e.g. different intervals, different intersections, different sub-plots. I would be able to use the raw data associated with the publication to plot virtually any graph from the data, not just those the authors offer me as a static image, as is the case today. How can this be done? This brings me to the next aspect:

As an author:

As an author, I want my data to be taken care of by my institution: I want to install their client to make sure every piece of data I put on my ‘data’ drive is automatically placed in a data repository with unique identifiers. The default setting for my repository may be open with a CC0 license, or set manually to any level of secrecy I’m allowed to apply or intend. The same ought to be a matter of course for the software we write. In this day and age, institutions should provide an infrastructure that makes version-controlled software development and publishing seamless and effortless. And yet we, the scientists, have to ask our funders for money to implement such technology. Likewise for authoring: we need online authoring tools that can handle and version-control documents edited simultaneously by multiple authors, including drag-and-drop reference management. GDocs has been around for a decade if not more, and FidusWriter and Authorea are pioneering this field for scientific writing, but we should already have this at our institutions by default today (with local copies, obviously).
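A minimal sketch of what such an institutional client could look like, assuming the Python watchdog library for file-system events and a made-up repository endpoint; a real client would of course also handle authentication, retries and metadata:

```python
import time
import requests
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

REPO_API = "https://repo.example-university.edu/api/deposit"  # hypothetical endpoint

class DataDepositHandler(FileSystemEventHandler):
    """Deposit every new file on the 'data' drive into the institutional repository."""
    def on_created(self, event):
        if event.is_directory:
            return
        with open(event.src_path, "rb") as f:
            # The repository would mint a unique identifier and apply the
            # account's default license (e.g. CC0) unless set otherwise.
            response = requests.post(REPO_API, files={"file": f}, data={"license": "CC0"})
        print("deposited:", event.src_path, "->", response.json().get("identifier"))

observer = Observer()
observer.schedule(DataDepositHandler(), path="/mnt/data", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```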

If we had such a GitHub-like infrastructure, a figshare/DropBox combo that took care of our data, and an Authorea/FidusWriter authoring environment, we could routinely do what we have done as a proof of principle in our latest paper: when writing, the authors no longer have to design static figures; they just insert the code that calls the software to evaluate the linked, open data. This not only allows readers to generate their own figures from different perspectives on our data (as in Fig. 3 of our paper); they can also download all the code and data without asking us, and without us having to jump through any extra hoops to make our code and data available – it all happens on the invisible back-end. Had we been able to use Authorea/FidusWriter, submission would even have been a single click. I get furious every time I estimate the amount of time and effort I could save if this infrastructure were in place today, as it should be.
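In spirit, such a figure-as-code cell reduces to something like the following sketch; the data URL and column names are placeholders, not our actual dataset:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Instead of a static image, the manuscript embeds code that pulls the
# linked, open data and draws the figure at read time.
DATA_URL = "https://repo.example.org/datasets/experiment1.csv"  # placeholder

df = pd.read_csv(DATA_URL)
# The reader could change the grouping, interval or plot type at will:
df.groupby("genotype")["response"].mean().plot(kind="bar")
plt.ylabel("mean response")
plt.show()
```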

Another thing one could do with such an infrastructure would be to open up certain datasets (and hence figures) to contributions from other labs, e.g. to let others compare their own results with yours. We demonstrated this “hey look what we found, how does that look for you?” kind of functionality in Fig. 4.

More or less automated semantic tagging would allow us to leverage the full potential of semantic web technology, facilitating features I haven’t even been able to imagine yet.

As a reviewer:

A reviewer is, quite obviously, a special kind of reader. As such, all the above-mentioned features would also benefit the reviewer. However, one feature is specific to the reviewer: direct, if need be anonymized, discussions with the author of a manuscript or proposal under review. Of course, this discussion would be made available with the final version of the paper, where appropriate. In this discussion, the reviewers (invited, suggested and voluntary) and authors would be working on a fully annotated version of the manuscript, significantly reducing the time required for reviewing and revising manuscripts. Editors would only ever come in to help resolve points of contention that cannot be settled by reviewers and authors themselves. Some publishers already implement such discussions to some extent, but none that I know of use an authoring environment, which would be the rational solution.

As an evaluator:

There is no way around reading publications in order to evaluate the work of scientists. There are no shortcuts and no substitutes. Reading publications is a necessary condition, a conditio sine qua non, for any such evaluation. However, it is a sufficient condition only in the best of all worlds: only in a world without bias, misogyny, nepotism, greed, envy and other human shortcomings would reading publications suffice for evaluating scientific work. Unfortunately, some may say, scientists are humans and not immune to human shortcomings. Therefore (and because the genie is out of the bottle), we need to supplement expert judgment with other criteria. These criteria, of course, need to be vetted by the scientific method. The current method of ranking journals and then ranking scientists according to whether they know the editors of such journals is both anti-scientific and counter-productive.

If we had the fully functional infrastructure that is possible with today’s technology, we’d be able to collect data on each scientist’s productivity (data, code, publications, reviews), popularity (downloads, media presence, citations, recommendations), teaching (hours, topics, teaching material) or service (committees, administration, development). To the extent that this is (semi-)automatically possible, one could even collect data about the methodological soundness of the research. If we, as a scientific community, hadn’t spent the last 20 years in a digital cave, we’d be discussing the ethics of collecting such data, how these data are or are not correlated with one another, the degree of predictive power some of these data have for future research success, and other such matters – and not how we might one day actually arrive in the 21st century.
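Purely for illustration, such a per-scientist record might look like the sketch below; every field is invented, and whether and how to collect any of them is exactly the discussion we should be having:

```python
# Invented schema -- the categories mirror the ones listed above.
scientist_record = {
    "productivity": {"datasets": 14, "code_repositories": 6, "publications": 32, "reviews": 21},
    "popularity":   {"downloads": 10452, "media_mentions": 7, "citations": 876, "recommendations": 39},
    "teaching":     {"hours": 120, "topics": ["neurogenetics", "statistics"], "materials": 11},
    "service":      {"committees": 3, "administration": 2, "development": 1},
}
```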

—

All of the functionalities mentioned above (and many more I haven’t mentioned) are already being tried here and there, to various degrees and in various combinations. However, as standalone products, none of them will ever be more than interesting ideas, proofs of concept and demonstrations. What is required is an integrated, federated backbone infrastructure with a central set of standards, into which such functionalities can be incorporated as plug-ins (or ‘apps’). What we need for this infrastructure is a set of open, evolvable rules or standards, analogous to TCP/IP, HTTP and HTML, which can be used to leverage key technologies for the entire community at the point of development – and not after decades of struggle against corporate interests, legal constraints or mere incompetence.
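A minimal sketch of what a plug-in contract on top of such a standards-based backbone could look like; the interface and names are invented:

```python
from abc import ABC, abstractmethod

class ScholarlyPlugin(ABC):
    """Invented contract: the backbone defines a small, stable interface,
    and functionalities (readers, rankers, annotators...) plug into it."""

    @abstractmethod
    def handles(self, record_type: str) -> bool:
        """Declare which standard record types (paper, dataset, review...) are accepted."""

    @abstractmethod
    def process(self, record: dict) -> dict:
        """Consume a standard record and return an enriched one."""

class MostTweetedRanker(ScholarlyPlugin):
    """Example plug-in: rank papers by how often they were tweeted."""
    def handles(self, record_type: str) -> bool:
        return record_type == "paper"
    def process(self, record: dict) -> dict:
        record["rank_score"] = record.get("tweet_count", 0)
        return record
```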

It is also clear that this infrastructure needs to be built robustly. Such a core infrastructure cannot rely on project funds or depend on the whims of individual organizations, even if those organizations are governments. In fact, the ideal case would be a solution similar or analogous to BitTorrent technology: a world-wide, shared infrastructure where 100% of the content remains accessible and operational even when 2/3 of the nodes go offline. Besides protecting scholarly knowledge against funding problems and the fickleness of individual organizations, such a back-end design could also protect against catastrophic regional devastations, be they natural or human-made.
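The required redundancy is easy to put numbers on: if each document is replicated on k independent nodes and a fraction p of all nodes goes offline, the chance that all k replicas are gone is roughly p^k. A quick sketch:

```python
# Availability under node failure, treating failures as independent.
p = 2 / 3  # fraction of nodes offline
for k in (3, 6, 12, 24):
    print(f"k={k:2d} replicas -> P(document unavailable) ~ {p**k:.2e}")
# Even k=24 leaves ~6e-5 per document, so '100%' availability in practice
# means choosing k such that the expected number of lost documents across
# the whole archive stays well below one.
```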

Such technology, I think that much is clear and uncontroversial, is readily available. The money is currently locked up in subscription funds, but cancellations on a massive scale are feasible without disrupting access much, and they would free up just over US$9b annually – more than enough to build this infrastructure within a very short timeframe. Thus, with money and technology readily available, what’s keeping the scientific community from letting go of antiquated journal technology and embracing a modern scholarly communication infrastructure? I’ve mentioned human shortcomings above. Perhaps it is also an all too human shortcoming to see the obstacles to such a modern infrastructure rather than its potential:

[Image: “busy” cartoon]

Or, as one could also put it, more scientifically: “The square traversal process has been the foundation of scholarly communication for 400 years!”

@brembs "The square traversal process has been the foundation of scholarly communication for 400 years."

— Ian McCullough (@bookscout), April 27, 2015

UPDATE (11/04/2017): There has recently been a suggestion as to how such an infrastructure may be implemented conceptually. It still contains the notion of ‘journals’, but the layered structure explains quite well how this may work:

Posted on April 27, 2015 at 13:50 211 Comments
Apr21

If only all science were this reproducible

In: own data • Tags: Drosophila, foraging, reproducibility

For our course this year I was planning a standard neurogenetic experiment. I had never done this experiment in a course before; just two weeks ago I had tried it once myself, with an N=1. The students would get two groups of Drosophila fruit fly larvae, rovers and sitters (they wouldn’t know which was which). About ten larvae from each group would be placed on one of two yeast patches on an agar plate. After 20 minutes, they would count the number of larvae in the first patch, the second patch and those in none of the patches, i.e., elsewhere on the plate:

Classic experiment designed by Marla Sokolowski, the discoverer of the rover/sitter polymorphism

In the example above, the scores would be 3, 3 and 4 for the rovers and 10, 0, 0 for the sitters.
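For readers who want to follow along, here is a minimal sketch of this scoring; the helper function is hypothetical, not code we used in the course:

```python
def percent_on_first_patch(counts):
    """Return the percentage of larvae found on the first yeast patch.

    counts -- a (patch 1, patch 2, elsewhere) triple of larva counts.
    """
    patch1, patch2, elsewhere = counts
    return 100 * patch1 / (patch1 + patch2 + elsewhere)

rovers = (3, 3, 4)     # example scores from the text
sitters = (10, 0, 0)

print(percent_on_first_patch(rovers))   # 30.0
print(percent_on_first_patch(sitters))  # 100.0
```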

Yesterday morning, before the course, when I was collecting the vials for the experiment, I noticed that I had an additional vial of each stock in which I had forgotten to remove the parent flies, such that the vials were completely overcrowded. Remembering that my last slide in the lecture for the course was a result from Kaun et al. 2007, where the authors had discovered that the behavior of the larvae depended on food quality, I felt we should try and see whether this overcrowding had already degraded the food enough to show the effect Kaun et al. had described: that rovers became more like sitters and no longer left the food patch they were placed on.

So I grabbed the two overcrowded vials and decided to try something that would usually be a recipe for disaster: in an experiment never before tried on students, alter the conditions such that you don’t know the outcome yourself. The plan was that, in case we didn’t see a strong difference between the two strains, I’d go and fetch the two vials I had prepared with the correct density of larvae and hence proper food quality. With these latter larvae, at least, we should see the difference between rovers and sitters. Here are the data the 12 students collected on the blackboard from the experiments done on a single afternoon:

Photograph of the blackboard with all the data from the 12 students.

One can already see from the numbers alone that the difference between the strains is very clear in the low-density case (blue), while it is less pronounced for the overcrowded larvae (white). But because the result so clearly matches the results from Kaun et al., I’ve plotted the data and compared them side by side.

First, the data from Kaun et al., which show that with low food quality (15%, corresponding to our high-density larvae), rover and sitter larvae consume about the same amount of food, and both consume more than with perfect-quality food (100%, corresponding to our low-density conditions):

Data from Kaun et al. 2007 on the effect of food quality on food intake in rover and sitter fruit flies.

As we didn’t measure food intake but just the location of the larvae, we took the percentage of larvae on the first patch as a measure of staying and feeding rather than leaving the patch and searching for another one. That way, we obtained an all but identical graph:

Data from 12 students on a single afternoon.

This replication provided an opportunity to emphasize two general points. First, it is highly unusual to reproduce previously published data with such ease and to such an astonishing degree. Second, I reiterated what I had already said during the lecture: there is no genetic determinism. Even humble fruit fly larvae show that different genotypes do not necessarily mean different fates – whether genotypic differences manifest themselves as phenotypic differences depends strongly on the environment.


Kaun, K., Riedl, C., Chakaborty-Chatterjee, M., Belay, A., Douglas, S., Gibbs, A., & Sokolowski, M. (2007). Natural variation in food acquisition mediated via a Drosophila cGMP-dependent protein kinase. Journal of Experimental Biology, 210(20), 3547-3558. DOI: 10.1242/jeb.006924

Posted on April 21, 2015 at 14:36 7 Comments