bjoern.brembs.blog

The blog of neurobiologist Björn Brembs

Mar14

Seeking your endorsement

In: science politics • Tags: european commission, open science

I am contemplating applying to join the European Commission Open Science Policy Platform. The OSPP will provide expert advice to the European Commission on implementing the broader Open Science Agenda. As you will see, some of us are concerned that the focus of the call is on organizations, not communities. This is a departure from much of the focus that the Commission itself has adopted on the potential benefits and opportunities of Open Science. A group of us are therefore applying as representatives of the community of interested and experienced people in the Open Science space.

Amongst others, I am therefore asking for your endorsement, in the form of a comment on this post or an email directly to me if you prefer, as someone who can represent this broader community of people, not necessarily tied to one type of organization or stakeholder. Depending on the number of endorsements, I will consider submitting my application. The deadline is March 22, 2016.

Application:

I have been urged to apply for a position on the advisory group ‘Open Science Policy Platform’ as an individual representing the common interests shared by people and organizations from across the spectrum of stakeholders, including doctors, patients and their organizations, researchers, technologists, scholarly IT service providers, publishers, policy makers, funders and all those interested in the changes research is undergoing. In addition to those directly involved in Open Science, I also represent the common interests shared by experimental scientists at public institutions, in particular those working in biomedical research, whether or not they are already engaging in Open Science themselves.

Many of us are concerned that the developing policy frameworks and institutionalization of Open Science are leaving behind precisely the community focus that is at the heart of Open Science. As the Commission has noted, one of the key underlying changes leading to more open practice in research is that many more people are becoming engaged in research and scholarship in some form. At the same time, the interactions between this growing diversity of actors increasingly form an interconnected network. Not only does this network reach beyond organizational and sector boundaries, it is precisely this blurring of boundaries that underpins the benefits of Open Science.

I recognize that for practical policy-making it is essential to engage with key stakeholders with the power to make change. In addition I would encourage the Commission to look beyond the traditional sites of decision-making power within existing institutions to the communities and networks which are where the real cultural changes are occurring. In the end, institutional changes will only ever be necessary, and not sufficient, to support the true cultural change which will yield the benefits of Open Science.

I am confident I can represent the interests of this community, particularly by assisting in developments concerning the implementation of a cloud-based scholarly infrastructure supporting not only our text-based research outputs, but especially the integration of research data and scientific source code with the narrative, be it text, audio or video-based. I will also contribute evidence to policy decisions regarding research integrity.

I base my confidence on my track record covering the last 12 years. I have been involved in Open Science advocacy since about 2004. Since then, I have been an invited speaker and keynote lecturer at numerous Open Science events every year. My advice is sought by Open Access organizations such as the Public Library of Science, Force11, Frontiers, ScienceOpen, PeerJ or F1000. In fact, most of the recent F1000 innovations appear very similar to what I (and no doubt others) have proposed. I run an Open Science laboratory where all our source code and research data are made openly accessible either immediately, as they are created/collected, or upon publication/request. We have pioneered ways of exploiting the advantages the infrastructure of our laboratory provides. For instance, we have collaborated with F1000Research to publish an article where the reader can not only choose the display format of the research data, or which aspect of the data should be displayed, but can also contribute their own data for comparison and extension of the published research.

My perspective is shaped not only by my interactions with fellow scholars, librarians or publishers. I also collect the available empirical data to objectively assess the state of the current scholarly infrastructure. One of the insights we have gained from this work is that the most prestigious scholarly journals publish the least reliable science. The practice of selecting scholars by their publications in these prestigious journals arguably contributes to the unfolding replication crisis. Thus, a drop in research integrity has been observed in recent years, which can be traced back to an inadequate, antiquated infrastructure providing counter-productive incentives and reward structures. I will bring to the table the evidence-based perspective that our public institutions need a modern digital infrastructure, if our aim is to prevent further deterioration of research integrity and hence credibility. This position holds that the current, largely journal-based and publisher-provided infrastructure is not only counter-productive, but also unnecessarily wasteful. The evidence suggests that the global scholarly community stands to save ~US$9.8 billion annually if current subscription moneys were instead invested in a modern, institutional infrastructure. Such a transition would not only maintain current functionalities, it would also provide universal access to all scholarly knowledge. The saved funds would provide ample opportunities for acquiring new functionalities, provided, for instance, by emerging scholarly IT service providers, representatives of which will likely be among the experts on the Open Science Policy Platform. The saved funds would also allow implementation of a sustainable infrastructure ensuring long-term accessibility and re-use of research data as well as scientific source code. The common, federated standards and specifications of this infrastructure will overcome current fragmentation and enhance interoperability of all forms of scholarly output. Europe is spearheading the development of such an infrastructure. Given the proposed €6.15b for the European Cloud Initiative, the evidence suggests that the transition will likely be cost-neutral overall and potentially even cost-saving.

 

Posted on March 14, 2016 at 22:32 98 Comments
Mar08

How do academic publishers see their role?

In: science politics • Tags: publishers

Over the years, publishers have left some astonishingly frank remarks about how they see their role in serving the scholarly community’s communication and dissemination needs. This morning, I decided to cherry-pick some of them and take them out of context to create a completely unrealistic caricature of publishers that couldn’t be further from the truth. However, I’ll leave the links to the comments, so you can judge for yourself just how out of context they actually have been taken.

Essentially all of these comments were voiced on the blog of the Society for Scholarly Publishing, an organization representing academic publishers. For one of the commenters, Joseph Esposito, it is probably safe to assume that his continued presence as a main contributor to the blog means that his viewpoints reflect the general viewpoints of the members of this association closely enough not to warrant dismissal from the site. The other quoted commenter, Sanford Thatcher, is not a contributor to the blog at all, so there is no direct way of estimating how representative his views are. Both commenters are or have been publishers themselves or consult for publishers in various roles.

1. Publishers don’t add any value to the scholarly article:

Now you can find an article simply by typing the title or some keywords into Google or some other search mechanism. The Green version of the article appears; there is no need to seek the publisher’s authorized version.

Source.

2. Publishers’ business of selling scholarly articles to a privileged few is not negotiable

Screenshot via Mike Taylor

3. The purpose of academic publishers is to make money, not to serve the public interest:

It is not the purpose of private enterprises to serve the public interest; it is to serve the interests of their stockholders. On the other hand, it is the purpose of the federal government to serve the public interest.

Source.

4. Governments ought to serve the public interest by funding all scholarly communication:

you should be urging the government to better disseminate the results of the research it sponsors.

Source.

Let’s take these comments and completely mangle the impression publishers publicly express of themselves: “We don’t really have anything of value to contribute, but it is our non-negotiable fiduciary duty to make as much money off the public purse as possible. If you want to change that, you should take all the tax-money we’ve suckered you into handing over to us and build a sustainable scholarly communication infrastructure yourselves.” Couldn’t have said it better myself, actually.

Posted on March 8, 2016 at 11:47 36 Comments
Mar03

Academic publishers: stop access negotiations

In: science politics • Tags: esposito, open access, publishers

Three years ago, representatives of libraries, publishers and scholars all agreed that academic publishers don’t really add any value to scholarly articles. Last week, I interpreted Sci-Hub as potentially being a consequence of scholars having grown tired after 20 years of trying to wrest their literature from the publishers’ stranglehold through small baby-steps, negotiations and campaigning alone. Maybe these developments are an indication that frustration is growing among scholars, readying them to break ranks with publishers altogether?

After 20 years of negotiating with publishers about how to realize universal open access to all scholarly literature, maybe it’s time to stop the negotiations and develop an open access infrastructure without publishers? After all, it would save human lives as well as billions of dollars every year.

I had not anticipated support for the notion of stopping negotiations with publishers from the same person who confirmed three years ago that publishers add little value to scholarly articles: Joseph Esposito. In his own words, Mr. Esposito is a “publishing consultant”, working for publishers involved in research publishing. He advises these companies on strategies concerning, among other issues, open access. As of this writing, he has penned 253 articles for the blog of the Society for Scholarly Publishing, an organization representing academic publishers. It is probably safe to assume that his continued presence at this blog after such a number of posts indicates that the opinions he expresses there are generally not in obvious disagreement with those of the academic publishers who make up the society. His continued success as a consultant to some of said society members can also be taken as an indication that his advice is being followed by his clients. In brief, the word of Mr. Esposito has carried and continues to carry significant weight with publishers.

For the second time in three years, Mr. Esposito and I agree on something: we should stop negotiating access with legacy publishers:

Screenshot via Mike Taylor

Quite clearly (this is the full account of the entire comment, so it cannot be taken out of context), for Mr. Esposito, access to the scholarly literature is a privilege worth paying for. Moreover, he sees no need to negotiate this position any further. Inasmuch as this opinion informs his advice to publishers, scholars should not be surprised, for instance, that publishers actively block contentmining and will not negotiate about this blockade of science. This opinion also reinforces my assessment that talking with legacy publishers, at this point, has become a complete waste of time. This is how far they are willing to go and no further concessions can be expected.

Posted on March 3, 2016 at 17:28 28 Comments
Feb25

Sci-Hub as necessary, effective civil disobedience

In: science politics • Tags: Elbakyan, publishing, sci-hub

Stevan Harnad’s “Subversive Proposal” came of age last year. I’m now teaching students younger than Stevan’s proposal, and yet very little has actually changed in these 21 years. On the contrary, one may even make the case that while efforts like institutional repositories (green OA), open access journals (gold OA) or preprint archives have helped to make some of the world’s scholarly literature more accessible (estimated to now be at more than 40% of newly published papers), we are now facing problems much more pernicious than lacking access: most of our data and essentially all of our scientific source code is neither archived nor shared, our incentive structure still rewards sloppy or fraudulent scientists over meticulous, honest ones, and the ratchet of competing for grants just to keep the lights in the lab on is driving the smartest young minds out of academia, while GlamHumping marketeers accumulate.

While one may not immediately acknowledge the connection between access to the literature and the more pernicious problems I’ve alluded to, I’d argue that by ceding control over our literature to commercial publishers, we have locked ourselves into an anachronistic system which is the underlying cause of most if not all of our current woes. If that is indeed the case, then freeing ourselves from this system is the key to solving all the associated problems.

Some data to support this perspective: we are currently spending about US$10b annually on legacy publishers, when we could publish fully open access for about US$200m per year if we were simply to switch publishing to, e.g., SciELO or any other such system. In fact, I’d argue that taxpayers have the right to demand that we use their tax funds only for the least expensive publishing option. This means it is our duty to the citizens to reduce our publishing expenses to no more than the ~US$200m per year this would currently cost (and we would even increase the value of the literature by making it open to boot!). If we were to do that, we’d have US$9.8b every single year to buy all the different infrastructure solutions that already exist to support all our intellectual outputs, be that text, data or code. Without journals (why would one keep those?), we’d also be switching to different metrics to assist us in minimizing the inherent biases peer-review necessarily brings about. We would hence be able not only to provide science with a modern scholarly infrastructure, we could even use the scientific method to assist us in identifying the most promising new scientists and which of them deserve which kind of support.

While many of the consequences of wasting these infrastructure funds on publishers have become apparent only more recently, the indefensibility of ever-increasing subscription pricing in a time of record-low publishing costs was already apparent 20 years ago. Hence, already in 1994, it became obvious that one way of freeing ourselves from the subscription-shackles was to make the entire scholarly literature available online, free to read. Collectively, this two-decade-long concerted effort of the global OA community, to wrest the knowledge of the world from the hands of the publishers one article at a time, has resulted in about 27 million (24%) of about 114 million English-language articles becoming publicly accessible by 2014. Since then, one single woman has managed to make a whopping 48 million paywalled articles publicly accessible. In terms of making the knowledge of the world available to the people who are its rightful owners, this woman, Alexandra Elbakyan, has single-handedly been more successful than all OA advocates and activists over the last 20 years combined.

Let that accomplishment sink in for a minute.

Of course it isn’t all global cheering and partying everywhere. Obviously, the publishers complain that she used her site, Sci-Hub, to ‘steal their content’ – with their content being, of course, the knowledge of the world that they have been holding hostage for a gigantic ransom. For 20 years this industry has thrived at the public teat, parasitizing an ever-increasing stream of tax-funded subsidies to climb from record profits to record profits, financial crises be damned. Of course, they are very happy to seize on this opportunity to distract from the real problems we’re facing, by staging a lawsuit to keep their doomed business practices running for yet a little longer. Perhaps more amusingly, one suggestion from the publishers of how to respond to Sci-Hub is to make access even more restrictive and expensive. I’ve only been around the OA movement for 10 years, but the ignorance, the gall and the sheer greed of publishers have astounded me time and time again. Essentially, in my experience, the only reply we ever got from publishers to our different approaches to reform our infrastructure has been one big raised middle finger. Clearly, two decades of negotiations, talks and diplomacy have led us nowhere. In my opinion, the time to be inclusive has come and passed. Publishers have opted to remain outside of the scholarly community and work against it, rather than with it. Actions of civil disobedience like those of Aaron Swartz and Alexandra Elbakyan are a logical consequence of two decades of stalled negotiations and failed reform efforts.

In the face of multinational, highly profitable corporations citing mere copyright when human rights (“Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.”) are at stake, civil disobedience of the kind Sci-Hub exemplifies becomes a societal imperative.

But even from within the OA community Alexandra Elbakyan is receiving some flak for a whole host of – compared to 48 million freed articles – tangential reasons, such as licensing, diluting the OA efforts, or the effect on scholarly societies. Of course, she reacted defensively, which is understandable for a host of reasons. However, one shouldn’t necessarily see these comments as criticism. They’re part of the analysis of the situation, and this is what must happen continuously to monitor how we are doing. Just because Sci-Hub isn’t a panacea that solves all our problems for us so we can all go back to doing actual science doesn’t mean that the overall effort is any less heroic or impressive.

Part of this assessment has to be the clear realization that of course Sci-Hub is not the cure to our self-inflicted disease. However, given that 20 years of careful, fully legal, step-wise, evolutionary approaches have yielded next to nothing in return, more spectacular actions may be worth considering, even if they don’t entail the immediate realization of the ideal utopia. After all, two decades is not what I consider a timeframe evincing a lack of patience. I can’t believe anybody in the OA community will seriously complain that the single largest victory in a 20-year struggle doesn’t also solve all our other, associated problems in one fell swoop. Or let me frame that a little differently: once you can boast a track record of having freed 48 million articles, then you get to complain or criticize.

Part of our ongoing assessment also has to be the discussion of whether the investment in the baby-steps of the last two decades was worth the minuscule returns. Sci-Hub has the potential to encourage and inspire other academics to stand up to the status quo and demand effective reforms, maybe even taking action themselves. Sci-Hub clearly is not how one would design a scholarly infrastructure, but it has been more effective at accomplishing access than anything in the last 20 years.

Besides saving lives by making 48 million research papers accessible to patients and doctors, Sci-Hub to me signifies that the scientific community (well, admittedly, a tiny proportion of it) is starting to lose its patience and become ready for more revolutionary reform options. A signal that the community is starting to feel that it is running out of options for evolutionary change. To me, Sci-Hub signals that publisher behavior, collectively, over the last two decades has been such a gigantic affront to scholars that civil disobedience is a justifiable escalation. Personally, I would tend to hope that Sci-Hub (and potentially following, increasingly radical measures) would signal that time has run out and that the scientific community is now ready to shift gears and embark on a more effective strategy for infrastructure reform.

Although I realize that it’s probably wishful thinking.

The freed articles, Alexandra Elbakyan’s David-like chutzpah against publishing Goliath Elsevier et al., as well as the deeply satisfying feeling of the public good not being completely helpless in the face of private monetary interests are the main factors why I am in awe of Alexandra Elbakyan’s accomplishment. If only the OA movement consisted of a few more individuals cut from the same cloth, we might never have arrived at a point where Sci-Hub was necessary. I openly admit that I’m not even close to playing in that league and the realization hurts.

Analogies, metaphors and allegories always only go so far, but the parallels here are too numerous to ignore:

Finally, there still remains the question as to how Sci-Hub was able to obtain the credentials it uses to free the articles. As of this writing, not a whole lot is known, so for now we will have to assume that nobody was put in harm’s way. The size and probability of such potential harm may hypothetically influence the overall assessment of Sci-Hub, but at this point I would tentatively consider such potential negative consequences as minor, compared to the benefits.

Posted on February 25, 2016 at 18:35 57 Comments
Feb02

Earning credibility in post-factual science?

In: science politics • Tags: politics, post-factual science, society, truthiness

What do these two memes have in common?

While they may have more than one thing in common, the important point for now is that despite both having an air of plausibility or ‘truthiness’ around them, they’re both false: neither has Donald Trump ever said these words to People magazine, nor do gay canvassers have such an effect on people’s attitudes (even though the quoted statement about this research was indeed published in Science magazine).

The issue of false facts has become so rampant in current politics that some have dubbed our era “post-factual”. While on the surface “post-factual science” appears to be an oxymoron, recent evidence raises the tantalizing possibility that at least the literature of the life sciences, broadly speaking, may be on track to leave the reality-based community (although more research is required):


Irreproducibility is loosely defined here as “not replicated” or “difficult to replicate with imprecise methods”. Note that not all of these are replication studies and not all of the replication studies are properly published themselves, with data available etc. To my knowledge, only the Open Science study is reliable in that respect. Sources: https://journals.plos.org/plosbiology/article?id=info:doi/10.1371/journal.pbio.1002165 and https://science.sciencemag.org/content/349/6251/aac4716

Obviously, these data can only hold for the experimental sciences, and even there I would expect huge differences between the various sub-fields. Nevertheless, the frequency of retractions in biomedicine is rising exponentially, the ‘majority of research findings’ was estimated, and more recently supported (at least tentatively for the findings analyzed/replicated in the above six studies), to be false, and public trust in science is eroding. Maybe it is not too early to start asking the question: if we can’t even trust the scientific literature, what information is trustworthy? Or, put differently: if traditional (some would say legacy) strategies for assigning credibility are failing us, are there more modern strategies which can replace or at least support them?

Personally, I think this is one of the most urgent and important challenges of the post-internet globalized society. It may well be that science, which brought us the internet in the first place, is also the right place to start looking for a solution.

Truth has never been an easy concept. Some may even argue that a large portion of humanity is probably quite happy with it being rather malleable and pliable. It wasn’t really until the 20th century that epistemologists explicitly formulated a convention of what constitutes a scientific fact and how facts can be used to derive and test scientific statements. Obviously, in life outside of science we are still far from an explicit convention, which complicates matters even further.

Be that as it may, neither within nor outside of science can we expect every individual to always fact-check, question and investigate every single statement or bit of information ever encountered, no matter how much that may be desired. There will always, inevitably have to be shortcuts and we have been taking them, legitimately, I would argue, since the beginning of humanity.

Initially, these shortcuts involved authority. With the Enlightenment and the shedding of religious, societal and political shackles, a (competing?) more democratic and informal shortcut was conceived (without ever completely replacing authority): popularity. If a sufficiently large number of information outlets confirmed a statement, that was considered sufficient reason for believing it. Both shortcuts are still being used today, both inside and outside of science, for very legitimate reasons. However, in part because the choice of authority is often arbitrary and may even be subject to partisan dispute, there are few, if any, universal authorities left. Even in science, the outlets with the most authority have been shown to publish the least reliable science. This erosion of the ‘authority’ shortcut accelerated with the dawn of the internet age: never before was it so easy to get so many people to repeat so many falsehoods with such conviction.

The ‘wisdom of the crowd’ seems to suggest that a sufficiently large crowd can be at least as accurate as a small number of expert authorities, if not more so. Social media have the uncanny ability to always aggregate what subjectively feels like a “sufficiently large crowd” to solidly refute any and all authority, whether on the moon landings, the 9/11 attacks, vaccination effectiveness/risk, climate change, crime/gun control, or the spherical earth. Obviously, this not only constitutes a fatal misunderstanding of how the crowd becomes wise, it also contributes to an unjustified, exaggerated distrust in entities which do have significant expertise and hence credibility.

There are several reasons why social media, as currently implemented, are notoriously incapable of getting anywhere near an ideal ‘wisdom of the crowd’ effect. For one, social feedback loops tend to aggregate people who think alike, i.e., they reduce heterogeneity, when diversity is one of the most decisive factors in achieving crowd wisdom. Second, with our stone-age concept of a crowd, we may intellectually understand, but fail to intuitively grasp, that any group of less than a few tens of millions is more of an intimate gathering than a crowd on internet scales. With today’s social media implementation, it is comparatively easy to quickly gather a few million people who all share some fringe belief.

For instance, round the incidence of schizophrenia to about 1% of the population and assume, for simplicity’s sake, that all of those affected harbor either auditory hallucinations or some other form of delusion. Hence, if 1% of all internet users were delusional in some way or form and only half of them aggregated in an online patient forum, we’d be talking about more than 15 million individuals. I’m pretty sure that such a patient site would deliver a quite astounding news feed with an amazing commentariat. And this is just one of a growing list of psychiatric disorders. How many nominally healthy users would feel compelled to believe the news from this site, re-share items and comment approvingly? Most far-out-there communities have orders of magnitude fewer users but disproportionately large visibility. Indeed, some of those communities may appear just like a subforum of such a patient site, but without a single user actually being diagnosed with any psychiatric disorder whatsoever.
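For what it’s worth, the arithmetic behind that figure is easy to check. The sketch below assumes roughly 3.2 billion internet users (a ballpark figure for 2016 that is not from this post); the 1% incidence and the one-half fraction are the ones used above.

```python
# Back-of-the-envelope check of the "more than 15 million individuals" figure.
internet_users = 3.2e9   # assumed global internet population (not from the post)
incidence = 0.01         # ~1% of the population, rounded as above
forum_fraction = 0.5     # half of them aggregate in one online patient forum

forum_size = internet_users * incidence * forum_fraction
print(f"{forum_size / 1e6:.0f} million users")  # -> "16 million users", i.e. more than 15 million
```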

So if we want to take advantage of the micro-expertise of the individuals in a crowd, that crowd not only needs to be ‘sufficiently’ large for the task at hand, it more importantly needs to be sufficiently diverse, or size quickly becomes almost completely irrelevant. From numerous examples in science and elsewhere, it seems straightforward to argue that we need to harness the individual micro-expertise that anyone can have, without making the mistake of attributing this expertise to everyone. Authority alone cannot serve as a reliable shortcut for credibility, but neither can popularity alone. Here is an idea of how one might combine them.

I may be wrong, but at least for now I would argue that we probably cannot start from scratch, assigning every institution, organization and individual the same kind of credibility. We cannot and should not undo history and track records based in evidence: there are information sources that have a higher credibility than others.

Further, we probably need a score or at least ranks that get computed iteratively and recursively. For any individual or piece of information to gather points or climb ranks, there need to be arbiters that already have some credibility – another reason why we likely won’t be able to start with a level playing field. What is less clear is how such a scoring/ranking system ought to be designed: I somehow have the impression that it ought to be difficult to earn credibility, shouldn’t it? Of course, it’s usually “innocent until proven guilty”, but is this a practical approach when doling out credibility? Should we all start with a credibility of 100 and then lose it? Or should we start with 0 and then gain? Does such a score have to go negative?

So far, these ideas have been very vague and general. Here are my first thoughts on how one may go about implementing such a system in science. A prerequisite for such a system is, of course, a modern scholarly information infrastructure. This won’t work with the 350-year-old technology we call ‘journals’.

Because we need diversity and inclusiveness, one would never prevent anybody from posting whatever they have discovered. However, if someone described a discovery from a known research institution, that discovery would receive more credibility than if it were posted by a layman without a track record (even though both scores would still be relatively low at the point of publication). Similarly, if the author list contained professors, the article would receive more credibility than if there were only graduate student authors. Yet more credibility would be assigned if the data and code relevant for the discovery were openly available as well. Once this initial stage had been completed, the document and its affiliated authors and institutions could earn even more credibility, for instance if the code gets verified, or the data checked for completeness and documentation. Those doing the verification and reviewing also need to be diverse, so here, too, there should not be a limit in principle. However, the weight given to each verification (or lack thereof) will differ according to the scores of the person doing the verification. More credit would be given for reproducible data analysis (e.g., via Docker and similar tools) and if the narrative accompanying the data/code is supported by them. This whole process would be similar to current peer-review, albeit more fine-grained, likely involving more people (each contributing a smaller fraction of the reviewing work) and not necessarily applied to each and every article.

This process continues to accrue (or lose) credibility inasmuch as the article is receiving attention with consequences, e.g., how many views for each citation (in accordance with a citation typology, e.g. CiTO), how many views for each comment, endorsement or recommendation, etc. This is one possible way of normalizing for field size, another could be by analyzing citation networks (as in, e.g. RCR). Clearly, most credibility ought to be associated with the experiments in each article being actually reproduced by independent laboratories (i.e., a special kind of citation).

In this iterative process, each participant receives credit in various ways for their various activities. Credibility would be just one of several factors being constantly monitored. Points can be awarded both for receiving (and passing!) scrutiny from others as well as for scrutinizing other people’s work. The resulting system is intended to allow everyone to check the track record of everyone else for some data on how reliable the work of this person (or institution, or community) has been so far, along with more details on how that track record was generated.

Obviously, there are plenty of feedback loops involved in this system, so care has to be taken to make these loops balance each other. The feedback loops found in many biological processes would serve as excellent examples of how to accomplish this. Complex systems like this are also known to be notoriously difficult to game.
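To make the recursive idea above a little more concrete, here is a minimal, purely illustrative sketch. All names (report_A, claim_B, lab_X, curator_Y), starting scores, the reproduction bonus and the normalization step are invented for illustration and are not a specification of any existing system: endorsements count in proportion to the endorser’s own credibility, independent reproductions weigh extra, and endorsers in turn inherit credibility from the items they backed.

```python
# Toy sketch of recursive credibility scoring; all numbers and names are hypothetical.

def update_scores(items, arbiters, rounds=20, reproduction_bonus=3.0):
    for _ in range(rounds):
        # Items: endorsements weighted by the endorser's credibility,
        # independent reproductions add a fixed bonus.
        for item in items.values():
            item["score"] = (sum(arbiters[a] for a in item["endorsed_by"])
                             + reproduction_bonus * item["reproduced"])
        # Arbiters inherit credibility from the items they backed.
        for name in arbiters:
            backed = [i["score"] for i in items.values() if name in i["endorsed_by"]]
            arbiters[name] = sum(backed) / len(backed) if backed else arbiters[name]
        # Normalize so the feedback loop does not run away (the "balancing" mentioned above).
        total = sum(arbiters.values()) or 1.0
        for name in arbiters:
            arbiters[name] /= total
    return items, arbiters

# Toy usage: an endorsed, independently reproduced report vs. an unverified claim.
items = {"report_A": {"score": 1.0, "endorsed_by": ["lab_X", "curator_Y"], "reproduced": 1},
         "claim_B":  {"score": 1.0, "endorsed_by": [], "reproduced": 0}}
arbiters = {"lab_X": 2.0, "curator_Y": 1.0}
items, arbiters = update_scores(items, arbiters)
print(items["report_A"]["score"], items["claim_B"]["score"])  # report_A ends up well above claim_B
```

The point of the sketch is only the shape of the loop: scores of documents and of the people vouching for them are computed from each other, round after round, with some damping or normalization to keep the feedback loops in balance.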

These are still very rough ideas, without a clear picture yet of the most suitable or effective implementation, or of whether the desired outcomes can actually be achieved in this way. I also have no good idea how one would leverage such a system outside of science. I would like to hope, however, that by starting with the easier case of science, we may be able to approach a related system for society at large.

Posted on February 2, 2016 at 15:05 6 Comments
Jan12

Even without retractions, ‘top’ journals publish the least reliable science

In: science politics • Tags: impact factor, journal rank, publishing, retractions

tl;dr: Data from thousands of non-retracted articles indicate that experiments published in higher-ranking journals are less reliable than those reported in ‘lesser’ journals.

Vox health reporter Julia Belluz has recently covered the reliability of peer-review. In her follow-up piece, she asked “Do prestigious science journals attract bad science?“. However, she only covered the data on retractions, not the much less confounded data on the remaining, non-retracted literature. It is indeed interesting how everyone seems to be attracted to the retraction data like a moth to the flame. Perhaps it’s because retractions constitute a form of ‘capital punishment’: they seem to reek of misconduct or outright fraud, which is probably why everybody becomes so attracted – and not just journalists; scientists as well, I must say. In an email, she explained that for a lay audience, retractions are of course much easier to grasp than complicated, often statistical concepts and data.

However, retractions suffer from two major flaws which make them rather useless as evidence base for any policy:

I. They only concern about 0.05% of the literature (perhaps an infinitesimal fraction more for the ‘top’ journals) 🙂
II. This already unrepresentative, small sample is further confounded by error-detection variables that are hard to trace.

Personally, I tentatively interpret what scant data we have on retractions as suggestive that increased scrutiny may only play a minor role in a combination of several factors leading to more retractions in higher ranking journals, but I may be wrong. Indeed, we emphasize in several places in our article on precisely this topic that retractions are rare and hence one shouldn’t place so much emphasis on them, e.g.:
“These data, however, cover only the small fraction of publications that have been retracted. More important is the large body of the literature that is not retracted and thus actively being used by the scientific community.”
Given the attraction of such highly confounded data, perhaps we should not have mentioned retraction data at all. Hindsight being 20/20 and all that…

Anyway, because of these considerations, the majority of our article is actually about the data concerning the non-retracted literature (i.e., the other 99.95%). In contrast to retractions, these data do not suffer from either of the above two limitations: we have millions and millions of papers to analyze, and since all of them are still public, there is no systemic problem of error-detection confounds.

For instance, we review articles that suggest that (links to articles in our paper):

1. Criteria for evidence-based medicine are no more likely to be met in higher vs. lower ranking journals:
Obremskey et al., 2005; Lau and Samman, 2007; Bain and Myles, 2005; Tressoldi et al., 2013

2. There is no correlation between statistical power and journal rank in neuroscience studies:
Figure 2:


3. Higher ranking journals tend to publish overestimates of true effect sizes from experiments where the sample sizes are too low in gene-association studies:
Figure 1C:


4. Three studies analyzed replicability in biomedical research and found it to be extremely low; not even top journals stand out:
Scott et al., 2008; Prinz et al., 2011; Begley and Ellis, 2012

5. Where quality can actually be quantified, such as in computer models of crystallography work, ‘top’ journals come out significantly worse than other journals:
esp. Fig. 3 in Brown and Ramaswamy, 2007

After our review was published, a study came out which showed that

6. In vivo animal experimentation studies are less often randomized in higher-ranking journals, and their outcomes are not more often scored blind in these journals either:


Hence, in these six (nine including the update below) areas, unconfounded data covering orders of magnitude more material than the confounded retraction data reveal only two out of three possible general outcomes:

a) Non-retracted experiments reported in high-ranking journals are no more methodologically sound than those published in other journals.
b) Non-retracted experiments reported in high-ranking journals are less methodologically sound than those published in other journals.

Not a single study we know of (there may be some we missed! Let me know.) shows the third option of higher-ranking journals publishing the most sound experiments. It is this third option that at least one analysis should have found somewhere if there was anything to journal rank with regard to reliability.

Hence, even if you completely ignore the highly scattered and confounded retraction data, experiments published in higher ranking journals are still less reliable than those published in lower ranking journals – and error-detection or scrutiny has nothing to do with it.
In that view, one may interpret the observation of more retractions in higher ranking journals as merely a logical consequence of the worse methodology there, nothing more. This effect may then, in turn, be somewhat exaggerated because of higher scrutiny, but we don’t have any data on that.

All of these data are peer-reviewed, and several expert peers attested that none of the data in our review are in dispute. It will be interesting to see if Ms. Belluz will remain interested enough to try and condense this much more sophisticated evidence into a form for a lay audience. 🙂

UPDATE (9/9/2016): Since the publication of this post, two additional studies have appeared that further corroborate the impression that the highest ranking journals publish the least reliable science: In the field of genetics, it appears that errors in gene names (and accession numbers) introduced by the usage of Excel spreadsheets are more common in higher ranking journals:


The authors speculate that the correlation they found is due to higher ranking journals publishing larger gene collections. This explanation, if correct, would suggest that, on average, error detection in such journals is at least not superior to that in other journals.

The second study is on the statistical power of cognitive neuroscience and psychology experiments. The authors report that statistical power has been declining since the 1960s and that statistical power is negatively correlated with journal rank (i.e., a reproduction of the work above, with an even worse outcome). Moreover, the fraction of errors in calculating p-values is positively correlated with journal rank, both in terms of records and articles (even though I have to point out that the y-axis does not start from zero!):

Thus, there are at least three additional measures in these articles that provide additional evidence supporting the interpretation that the highest ranking journals publish the least reliable science.

UPDATE II (9/5/2017): Since the last update, there has been at least one additional study comparing the work in journals with different impact factors. In the latest work, the authors compared the p-values in two different psychology journals for signs of p-hacking and other questionable research practices. Dovetailing with the data available so far, the authors find that the journal with the higher impact factor (5.0) contained more such indicators, i.e., showed more signs of questionable research practices than the journal with the lower impact factor (0.8). Apparently, every new study reveals yet another field and yet another metric in which high-ranking journals fail to provide any evidence for their high rank.

 

UPDATE III (07/03/2018): An edited and peer-reviewed version of this post is now available as a scholarly journal article.

Posted on January 12, 2016 at 10:09 228 Comments
Jan08

Just how widespread are impact factor negotiations?

In: science politics • Tags: impact factor, journal rank, publishing

Over the last decade or two, there have been multiple accounts of how publishers have negotiated the impact factors of their journals with the “Institute for Scientific Information” (ISI), both before it was bought by Thomson Reuters and after. This is commonly done by negotiating the number of articles counted in the denominator. To my knowledge, the first ones to point out that this may be going on for at least hundreds of journals were Moed and van Leeuwen as early as 1995 (and with more data again in 1996). One of the first accounts to show how a single journal accomplished this feat was Baylis et al. in 1999, with their example of the FASEB Journal managing to convince the ISI to remove its conference abstracts from the denominator, leading to a jump in its impact factor from 0.24 in 1988 to 18.3 in 1989. Another well-documented case is that of Current Biology, whose impact factor increased by 40% after acquisition by Elsevier in 2001. To my knowledge, the first and so far only openly disclosed case of such negotiations was PLoS Medicine’s editorial about their negotiations with Thomson Reuters in 2006, where the negotiation range spanned 2-11 (they settled for 8.4). Obviously, such direct evidence of negotiations is exceedingly rare, and usually publishers are quick to point out that they would never be ‘negotiating’ with Thomson Reuters, they would merely ask them to ‘correct’ or ‘adjust’ the impact factors of their journals to make them more accurate. Given that already Moed and van Leeuwen found that most such corrections seemed to increase the impact factor, it appears that these corrections only take place if a publisher considers their IF too low, and only very rarely indeed if the IF may appear too high (and who would blame them?). Besides the old data from Moed and van Leeuwen, we have very little data as to how widespread this practice really is.

A recent analysis of 22 cell biology journals now provides additional data in line with Moed and van Leeuwen’s initial suspicion that publishers may take advantage of the possibility of such ‘corrections’ on a rather widespread basis. If any errors by Thomson Reuters’ ISI happened randomly and were corrected in an unbiased fashion, then an independent analysis of the available citation data should show that independently calculated impact factors correlate with the published impact factors with both positive and negative deviations. If, however, corrections only ever occur in the direction that increases the impact factor of the corrected journal, then the published impact factors should be systematically higher than the independently calculated ones. The reason for such a bias should be found in missing numbers of articles in the denominator of the published impact factor. These ‘missing’ articles can nevertheless be found, as they have been published, just not counted in the denominator. Interestingly, this is exactly what Steve Royle found in his analysis:

[Figure: published vs. independently calculated impact factors for 22 cell biology journals (left), and the number of ‘missing’ denominator articles per journal (right)]

On the left, you see that any deviation from the perfect correlation is always towards the larger impact factor and on the right you can see that some journals show a massive number of missing articles.
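For anyone who wants to repeat this kind of check, the calculation itself is trivial. The sketch below assumes the standard two-year impact factor definition (citations in year Y to items from years Y-1 and Y-2, divided by the number of citable items from those two years); all the numbers in it are invented for illustration and are not taken from Royle’s analysis.

```python
# Hypothetical sketch: re-compute a two-year impact factor from raw counts and
# estimate how many "missing" denominator articles would be needed to turn the
# independent estimate into the published figure. All numbers are invented.

def impact_factor(citations, citable_items):
    """Citations in year Y to articles from Y-1 and Y-2, divided by the
    number of citable items published in Y-1 and Y-2."""
    return citations / citable_items

def missing_items(citations, items_found, published_if):
    """Articles that would have to vanish from the denominator to reproduce
    the published impact factor."""
    implied_denominator = citations / published_if
    return items_found - implied_denominator

citations = 5200      # citations counted for the two-year window (invented)
items_found = 800     # citable articles actually found for that window (invented)
published_if = 7.4    # the journal's published impact factor (invented)

print(round(impact_factor(citations, items_found), 2))              # 6.5, the independent estimate
print(round(missing_items(citations, items_found, published_if)))   # ~97 articles unaccounted for
```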

Clearly, none of this amounts to unambiguous evidence that publishers are increasing their editorial ‘front matter’ both to cite their own articles and to receive citations from outside, only to then tell Thomson Reuters to correct their records. None of this is proof that publishers routinely negotiate with the ISI to inflate their impact factors, let alone that publishers routinely try to make the classification of their articles as citable or not intentionally difficult. There are numerous alternative explanations. However, personally, I find the two old Moed and van Leeuwen papers and this new analysis, together with the commonly acknowledged issue of paper classification by the ISI, just about enough to be suggestive, but I am probably biased.

Posted on January 8, 2016 at 15:41 83 Comments
Jan07

How much should a scholarly article cost the taxpayer?

In: science politics • Tags: infrastructure, publishing

tl;dr: It is a waste to spend more than the equivalent of US$100 in tax funds on a scholarly article.

Collectively, the world’s public purse currently spends the equivalent of ~US$10b every year on scholarly journal publishing. Divide that by the roughly two million articles published annually and you arrive at an average cost per scholarly journal article of about US$5,000.
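Spelled out, the arithmetic (using only the round numbers quoted above) looks like this:

```python
# Average cost per article implied by the figures above.
annual_spend = 10e9        # ~US$10b global annual spend on journal publishing
articles_per_year = 2e6    # ~2 million articles published per year
print(annual_spend / articles_per_year)  # 5000.0 -> about US$5,000 per article
```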

Inasmuch as these legacy articles are behind paywalls, the average taxpayer does not get to see what they pay for. Even worse for academics: besides not being able to access all the relevant literature either, cash-strapped public institutions are sorely missing the subscription funds, which could have modernized their digital infrastructure. Consequently, researchers at most public institutions are stuck with technology that is essentially from the 1990s, specifically with regard to the infrastructure taking care of their three main forms of output: text, data and code.

Another pernicious consequence of this state of affairs: institutions have been stuck with a pre-digital strategy for hiring and promoting their faculty, namely judging them by the venues of their articles. As the most prestigious journals publish, on average, the least reliable science, but the scientists who publish there are awarded the best positions (and are, in turn, training their students to publish their unreliable work in these journals), science is now facing a replication crisis of epic proportions: most published research may possibly be false.

Thus, both the scientific community and the public have more than one reason to try and free some of the funds currently wasted on legacy publishing. Consequently, there are a few new players on the publishing market who offer their services for considerably less. Not surprisingly, in developing countries, where cash is even more of an issue, a publicly financed solution was developed already more than 15 years ago (SciELO) that publishes fully accessible articles at a cost of between US$70 and US$200, depending on various technical details. In the following 15 years, problems have accumulated in the richer countries as well, prompting the emergence of new publishers. These newer publishers/service providers, such as Scholastica, Ubiquity, RIO Journal, ScienceOpen, F1000Research, PeerJ or Hindawi, quote a ballpark price range from just under US$100 to under US$500 per article. Representatives of all of these publishers independently tell me that their costs per article are in the low hundreds, and Ubiquity, Hindawi and PeerJ are even on record with this price range. [After this post was published, Martin Eve of the Open Library of the Humanities also quoted roughly these costs for their enterprise. I have also been pointed to another article that sets about US$300 per article as an upper bound, also in line with all the other sources.]


Now, as a welcome confirmation, yet another company, Standard Analytics, arrives at similar costs in their recent analysis.

Specifically, they computed the ‘marginal’ costs of an article, which they define as only taking “into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations“. I understand this to mean that if an existing publisher wanted to start a new scholarly journal, these would be the additional costs they would have to recoup. The authors mention five main tasks to be covered by these costs:

1) submission

2) management of editorial workflow and peer review

3) typesetting

4) DOI registration

5) long-term preservation.

They calculate two versions of how these costs may accrue. One method is to outsource these services to existing vendors. They calculate prices using different vendors that range between US$69 and US$318, hitting exactly the ballpark all the other publishers have been quoting for some time now. Given that public institutions are bound to choose the lowest bidder, anything above the equivalent of around US$100 would probably be illegal. Let alone US$5,000.
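To illustrate what such a marginal-cost tally looks like across the five tasks listed above, here is a purely hypothetical breakdown; the individual figures are invented, and only the total is meant to fall inside the US$69-318 range reported by Standard Analytics.

```python
# Hypothetical per-article marginal costs for the five tasks above (US$).
# The individual numbers are invented for illustration only.
per_article_costs = {
    "submission handling": 10,
    "editorial workflow and peer review management": 90,
    "typesetting": 60,
    "DOI registration": 1,
    "long-term preservation": 15,
}
print(sum(per_article_costs.values()))  # 176 -> within the reported US$69-318 range
```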

However, as public institutions are not (yet?) in a position to competitively advertise their publishing needs, let’s consider the side of the publisher: if you are a publisher with other journals and are shopping around for services to provide you with an open access journal, all you need to factor in is some marginal additional part-time editorial labor for your new journal and a few hundred dollars per article. Given that existing publishers charge, on average, around €2,000 per open access article, it is safe to say that, as in subscription publishing, scientists and the public are being had by publishers, as usual, even in the case of so-called ‘gold’ open access publishing. These numbers also show, as argued before, that just ‘flipping’ all our journals to open access is at best a short-term stop-gap measure. At worst, it would make the current situation even worse.

Be that as it may, I find Standard Analytics’ second calculation to be even more interesting. This calculation actually conveys an insight that was entirely new, at least for me: if public institutions decided to run the 5 steps above in-house, i.e., as part of a modern scholarly infrastructure, per article marginal costs would actually drop to below US$2. In other words, the number of articles completely ceases to be a monetary issue at all. In his critique of the Standard Analytics piece, Cameron Neylon indicated, with his usual competence and astuteness, that of course some of the main costs of scholarly communication aren’t really the marginal costs that can be captured on a per-article basis. What requires investment are, first and foremost, standards according to which scholarly content (text/audio/video: narrative, data and code) is archived and made available. The money we are currently wasting on subscriptions ought to be invested in an infrastructure where each institution has the choice of outsourcing vs. hiring expertise themselves. If the experience of the past 20 years of networked digitization is anything to go by, then we need to invest these US$10b/a in an infrastructure that keeps scholarly content under scholarly control and allows institutions the same decisions as they have in other parts of their infrastructure: hire plumbers, or get a company to show up. Hire hosting space at a provider, or put servers into computing centers. Or any combination thereof.

What we are stuck with today is nothing but an obscenely expensive anachronism that we need to dispense with.

By now, it has become quite obvious that we have nothing to lose, in terms of either scholarly or monetary value, and everything to gain from taking these wasted subscription funds and investing them to bring public institutions into the 21st century. On the contrary, every year we keep procrastinating, another US$10b go down the drain and are lost to academia forever. In the grand scheme of things, US$10b may seem like pocket change. For the public institutions spending them each year, they would constitute a windfall: given that the 2m articles we currently publish would not even cost US$4m, we would have in excess of US$9.996b to spend each year on an infrastructure serving only a few million users. As an added benefit, each institution would be back in charge of its own budget decisions – rather than having to negotiate with monopolistic publishers. Given the price of labor, hard- and software, this would easily buy us all the bells and whistles of modern digital technology, with plenty to spare.

Posted on January 7, 2016 at 13:42 90 Comments
Dec17

How free are academics, really?

In: science politics • Tags: academic freedom, DFG, funding, institutions, open access, publishing

In Germany, the constitution guarantees academic freedom in Article 5 as a basic civil right. The main German funder, the German Research Foundation (DFG), routinely points to this article of the German constitution when someone suggests they should follow the lead of NIH, Wellcome et al. with regard to mandates requiring open access (OA) to publications arising from research activities they fund.

The same argument was recently made by Rick Anderson in his THE article entitled “Open Access and Academic Freedom“. When it was pointed out, both in the comments on his article and on Twitter, that the widespread tradition of hiring/promoting/rewarding scientists for publishing in certain venues constitutes a much worse infringement, Mr. Anderson replied with a very formalistic argument: such selection by publication venue amounts to mere “professional standards”, which, by definition, cannot impede academic freedom (even if they essentially force scientists to publish in certain venues), while only “official policies” can actually infringe on academic freedom (even if they are not actually enforced, such as many OA mandates, and thus have little or no effect on the choice of publication venue). This potential infringement is thus considered more of a threat to academic freedom than actual infringements, as long as the actual infringements due to ‘professional standards’ are not explicitly written down somewhere and labeled ‘policy’.

While one may take such a formalistic approach, I fail to see how such a position can be either valuable or convincing.

If everyone took that position, it would only mean that our institutions would make their policies less specific and call them “professional standards”. Our institutions could then fire us at will without ever touching our academic freedom: they would just need to define those professional standards loosely enough. Hence, there is no academic value in such a formalistic approach: an infringement of academic freedom is always an infringement, no matter what you call it. The important aspect of such infringements (which may be unavoidable) is not whether or not they are written down as explicit ‘policy’, but that we must have very good reasons for them, such as tangible benefits for science and/or society.

I also doubt this argument will be very convincing, as it is just too obvious that such a formalistic approach is too far removed from reality to be worth seriously considering. Imagine two scholars, both working in the same field, collecting the same data, making the same discoveries and solving the same problems. One of them feels forced to publish their work, against their own will, in piecemeal fashion and on an ongoing basis in the venues implicitly prescribed by their field; the other decides to exercise their academic freedom and publishes the exact same discoveries and solutions in one big text on their blog at the end of their career. From this example, it is clear that we already face a very tangible and real choice: either exercise your academic freedom or have a job in academia; the two are incompatible.

Today, it seems unavoidable that society won’t accept the value of the “full academic freedom” of the AAUP that Rick Anderson referenced, and hence won’t tolerate us exercising it. But then society had better provide some pretty darn good reasons for curtailing our ‘full’ civil rights. I can see how forcing us to share our work with the society that funded us would constitute such a reason. I cannot see how forcing us to publish in, e.g., venues with a track record of fraud and errors would constitute such a reason.

Posted on December 17, 2015 at 15:22 44 Comments
Dec04

How to write your grant proposal?

In: science politics • Tags: funding, grantsmanship, peer-review

Posting my reply to a review of our most recent grant proposal sparked an online discussion both on Twitter and on Drugmonkey’s blog. The main direction the discussion took was what level of expertise to expect from the reviewers deciding on your grant proposal.

This, of course, depends heavily on the procedure by which the funding agency chooses its reviewers. At the US-based NIH, as I understand it, reviewers are picked from a known panel; you just don’t know which individuals on the panel will handle your proposal. This makes it comparatively easy to target your audience when writing. The German funding agency we submitted our grant to picks any reviewer, world-wide (I think the NSF in the US is similar; at least I have reviewed a few grants for them). After the review, a separate panel of peers (which may, but commonly doesn’t, include one of the reviewers) decides which grants out of the pool get funded, usually without reading the grants in much detail, if at all. In that case, it is impossible to have a clear picture of your audience.

My first grant/fellowship was funded in 2001. Before and since then, I have had plenty of rejections. I believe my overall funding rate over these almost 15 years is somewhere around 20±5%, which means I must have written just under 50 grant proposals in this time. Initially, when my proposals (and manuscripts) got rejected, it was suspiciously often with comments that revealed misunderstandings. Once, a grant reviewer even explicitly admitted that they didn’t understand what it was I proposed to do. I then started to simplify my proposals; in my desperation, I of course did what many online commenters propose: I added what I thought was very basic knowledge. My proposals became significantly more understandable, but also significantly longer. Imagine my disappointment when the feedback I then received was twofold: the reviewers felt insulted that I addressed them at such a basic level, and the funder told me my proposals were too long: “good proposals are succinct, maybe 10-15 pages total, and compelling”.

So here’s the rule, then: you need to write your proposal in such a way that your grandma can understand it, without the reviewers noticing that you are insulting their intelligence, and with no more than 1-2 sentences per explanation.

Honestly, I find this quite far beyond my capabilities. Instead, I have since focused on the easier task of being succinct at the expense of explaining everything. For the last ~8 years I’ve assumed that the people reading my proposals are experts either in the model system(s) I use or in the problems I study, but not both. The implicit expectation is that the former don’t need to understand every motivation behind each experiment (and won’t require it, either), and that the latter won’t be too concerned with the technical details of a model system they might not be all that familiar with. Until this last proposal, this had worked to the extent that even for the ~80% of my submissions that were rejected, the reviewer comments revealed neither obvious incompetence nor substantial misunderstandings. However, given the system by which reviewers are selected, it is of course impossible to know whether this was due to my improved writing or due to the reviewers who happened to be chosen. Moreover, the field has grown substantially and become much more popular in this time, so it may simply have been down to a larger pool of experts than a decade ago.

It is also important to keep in mind that with each submission, even of the same grant, different reviewers may be assigned. At the ERC, for instance, one of my proposals was rejected because the reviewers, while very enthusiastic about the idea, questioned the feasibility of the method. On resubmission, the reviewers thought the method was too established to warrant funding, and that the research question wasn’t all that interesting, either.

There were two very helpful comments in the Twitter discussion that I will keep in mind for future proposals; both were from Peg AtKisson, a professional supporter of grant writers:

@brembs Disagree because diverse inputs tend to lead to better outcomes. Only experts reviewing in narrow area leads to in-group bias.

— M. S. AtKisson (@iGrrrl) December 3, 2015

I agree that minimizing in-group bias is a goal worth investing in. However, this goal comes at a cost (or rather an investment, I’d argue): you can’t have non-experts review and expect the author not to need more words to explain things. You also have to accept that if you promote non-expert review, you may annoy the experts with more verbose applications. If there are no explicit instructions, it is virtually impossible to know where on this trade-off one has to land.

@brembs "We have chosen X approach because… We did not chose Y because…" Show your thinking. @drugmonkeyblog

— M. S. AtKisson (@iGrrrl) December 3, 2015

The suggestion to also explicitly mention methods that you rejected as unsuitable is one worth pondering. If there are no word limits, this sounds very compelling, as it “shows your thinking”, which is always helpful. However, it is also quite difficult to decide which ones to include, as this, again, involves the risk of insulting the reviewers (“only an idiot would have thought to use approach Y!”). Again, the funder’s instructions and experience will have to suffice as a guide, but I’ll definitely spend more time thinking about alternative, less suitable approaches next time.

Back to our particular case: the main issues can be boiled down to three criticisms. Since all of them concern the technical details of our experimental techniques, it is fair to assume that the reviewer considers themselves competent at least on the technical/model-system side of the proposal.

The first issue concerns a common laboratory technique that I have taught in undergraduate classes, that is widely used not only in our field but in biological/biomedical research generally, for which Fire and Mello received the Nobel Prize in 2006, and whose technical details, as far as this grant requires them, are covered on the Wikipedia page (and, of course, in all the textbooks). Nothing beyond this basic understanding is required for our grant. The criticisms raised only make sense if the reviewer is not aware of the sequestration/degradation distinction.

The second concerns an even older technique, likewise used across biological/biomedical research, for which the Nobel Prize was handed out as early as 1984; the technical information is also on the Wikipedia page, and it is of course part of every undergraduate biology/medicine curriculum I know of. Moreover, this technology is currently being debated in the most visible places in the wake of the biomedical replicability crisis. Nothing beyond this most basic understanding is required for our proposal. The reviewer’s criticisms only make sense if the reviewer is not aware of the differences between monoclonal and polyclonal antibodies.

From where I sit, this kind of basic knowledge is what can be expected from a reviewer who picks these two methods (out of the four main grant objectives) as their target for criticism.

The third issue is a reviewer classic: the reviewer chided us for proposing a method we hadn’t proposed and suggested we instead use a method we had already described prominently in our proposal, complete with a dedicated figure to make unambiguously clear that we were not proposing the technique the reviewer rightfully rejected, but the very one they recommended. Here, everything the reviewer wrote was correct, but so was our proposal: it paralleled what they wrote.

In summary: of the four objectives in our grant, this reviewer picked three for criticism. Two of the three criticisms betray a lack of undergraduate-level knowledge of very common, widespread techniques. The third issue is nonexistent, as the grant already describes, prominently, what the reviewer suggests. I will take the online suggestions and incorporate them into the revised version of the grant, but there really isn’t anything helpful at all to take from this particular review. For me personally, at this time, this is an exception, but it chimes with what a lot of colleagues, on both sides of the pond, complain about.

Posted on December 4, 2015 at 14:48 21 Comments