bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Main Menu

  • Home
  • About
  • Publications
  • Citations
  • Downloads
  • Resume
  • Interests
  • Contact
  • Archive

Tag Cloud

aPKC behavior brain career chance classical competition conditioning data decision-making Drosophila Elsevier evolution FoxP free will fun funders GlamMagz impact factor infrastructure journal rank journals libraries mandates neurogenetics neuroscience open access open data open science operant peer-review politics postdoc poster publishers publishing retractions SciELO science self-learning SfN spontaneity subscriptions variability video

Categories

  • blogarchives
  • I get email
  • news
  • own data
  • personal
  • random science video
  • researchblogging
  • science
  • science news
  • science politics
  • server
  • Tweetlog
  • Uncategorized

Recent Downloads

  • Motor learning in fruit flies: what happens where and how to improve it (170 downloads)
  • Investigating innate valence signals in Drosophila: Probing dopaminergic function of PPM2 neurons with optogenetics (89 downloads)
  • Rechnungshof und DEAL (197 downloads)
  • Are Libraries Violating Procurement Rules? (503 downloads)
  • Comments from DFG Neuroscience panel (750 downloads)
Jan12

Even without retractions, ‘top’ journals publish the least reliable science

In: science politics • Tags: impact factor, journal rank, publishing, retractions

tl;dr: Data from thousands of non-retracted articles indicate that experiments published in higher-ranking journals are less reliable than those reported in ‘lesser’ journals.

Vox health reporter Julia Belluz has recently covered the reliability of peer-review. In her follow-up piece, she asked "Do prestigious science journals attract bad science?". However, she only covered the data on retractions, not the much less confounded data on the remaining, non-retracted literature. It is indeed interesting how everyone seems to be drawn to the retraction data like a moth to the flame. Perhaps it is because retractions constitute a form of 'capital punishment': they reek of misconduct or outright fraud, which is probably what makes them so attractive – and not just to journalists; scientists are just as susceptible, I must say. In an email, she explained that for a lay audience, retractions are of course much easier to grasp than complicated, often statistical concepts and data.

However, retractions suffer from two major flaws which make them rather useless as evidence base for any policy:

I. They only concern about 0.05% of the literature (perhaps an infinitesimal fraction more for the 'top' journals 🙂)
II. This already unrepresentative, small sample is further confounded by error-detection variables that are hard to trace.

Personally, I tentatively interpret what scant data we have on retractions as suggestive that increased scrutiny may only play a minor role in a combination of several factors leading to more retractions in higher ranking journals, but I may be wrong. Indeed, we emphasize in several places in our article on precisely this topic that retractions are rare and hence one shouldn’t place so much emphasis on them, e.g.:
“These data, however, cover only the small fraction of publications that have been retracted. More important is the large body of the literature that is not retracted and thus actively being used by the scientific community.”
Given the attraction of such highly confounded data, perhaps we should not have mentioned retraction data at all. Hindsight being 20/20 and all that…

Anyway, because of these considerations, the majority of our article is actually about the data concerning the non-retracted literature (i.e., the other 99.95%). In contrast to retractions, these data do not suffer from either of the two limitations above: we have millions and millions of papers to analyze, and since all of them are still public, there is no systemic problem of error-detection confounds.

For instance, we review articles that suggest that (links to articles in our paper):

1. Criteria for evidence-based medicine are no more likely to be met in higher vs. lower ranking journals:
Obremskey et al., 2005; Lau and Samman, 2007; Bain and Myles, 2005; Tressoldi et al., 2013

2. There is no correlation between statistical power and journal rank in neuroscience studies:
[Figure 2 of our article: no correlation between statistical power and journal rank]

3. Higher ranking journals tend to publish overestimates of true effect sizes from experiments where the sample sizes are too low in gene-association studies:
[Figure 1C of our article: effect size overestimation in gene-association studies]

4. Three studies analyzing replicability in biomedical research found it to be extremely low, with not even top journals standing out:
Scott et al., 2008; Prinz et al., 2011; Begley and Ellis, 2012

5. Where quality can actually be quantified, such as for computational models in crystallography, 'top' journals come out significantly worse than other journals:
esp. Fig. 3 in Brown and Ramaswamy, 2007

After our review was published, a study came out which showed that

6. In vivo animal studies published in higher-ranking journals are less often randomized, and their outcomes are not more often scored blind, either:

[Figure: randomization and blinding by journal rank in animal studies]

Hence, in these six (nine including the update below) areas, unconfounded data covering orders of magnitude more material than the confounded retraction data reveal only two out of three possible general outcomes:

a) Non-retracted experiments reported in high-ranking journals are no more methodologically sound than those published in other journals.
b) Non-retracted experiments reported in high-ranking journals are less methodologically sound than those published in other journals.

Not a single study we know of (there may be some we missed! Let me know.) shows the third option of higher-ranking journals publishing the most sound experiments. It is this third option that at least one analysis should have found somewhere if there were anything to journal rank with regard to reliability.

Hence, even if you completely ignore the highly scattered and confounded retraction data, experiments published in higher ranking journals are still less reliable than those published in lower ranking journals – and error-detection or scrutiny has nothing to do with it.
In that view, one may interpret the observation of more retractions in higher ranking journals as merely a logical consequence of the worse methodology there, nothing more. This effect may then, in turn, be somewhat exaggerated because of higher scrutiny, but we don’t have any data on that.

All of this data is peer-reviewed, and several expert peers have attested that none of the data in our review is in dispute. It will be interesting to see if Ms. Belluz will remain interested enough to try and condense this much more sophisticated evidence into a form for a lay audience. 🙂

UPDATE (9/9/2016): Since the publication of this post, two additional studies have appeared that further corroborate the impression that the highest ranking journals publish the least reliable science: In the field of genetics, it appears that errors in gene names (and accession numbers) introduced by the usage of Excel spreadsheets are more common in higher ranking journals:

[Figure: frequency of Excel-induced gene name errors by journal rank]

The authors speculate that the correlation they found is due to higher ranking journals publishing larger gene collections. This explanation, if correct, would suggest that, on average, error detection in such journals is at least not superior to that in other journals.

The second study is on the statistical power of cognitive neuroscience and psychology experiments. The authors report that statistical power has been declining since the 1960s and that statistical power is negatively correlated with journal rank (i.e., a reproduction of the work above, with an even worse outcome). Moreover, the fraction of errors in calculating p-values is positively correlated with journal rank, both in terms of records and articles (even though I have to point out that the y-axis does not start from zero!):

[Figure: p-value reporting errors by journal rank]

Thus, there are at least three additional measures in these articles that provide further evidence supporting the interpretation that the highest ranking journals publish the least reliable science.

UPDATE II (9/5/2017): Since the last update, there has been at least one additional study comparing the work in journals with different impact factors. In the latest work, the authors compared the p-values in two different psychology journals for signs of p-hacking and other questionable research practices. Dovetailing with the data available so far, the authors find that the journal with the higher impact factor (5.0) contained more such indicators, i.e., showed more signs of questionable research practices than the journal with the lower impact factor (0.8). Apparently, every new study reveals yet another field and yet another metric in which high-ranking journals fail to provide any evidence for their high rank.

UPDATE III (07/03/2018): An edited and peer-reviewed version of this post is now available as a scholarly journal article.

Posted on January 12, 2016 at 10:09 228 Comments
Jan08

Just how widespread are impact factor negotiations?

In: science politics • Tags: impact factor, journal rank, publishing

Over the last decade or two, there have been multiple accounts of how publishers have negotiated the impact factors of their journals with the "Institute for Scientific Information" (ISI), both before and after it was bought by Thomson Reuters. This is commonly done by negotiating the number of articles in the denominator. To my knowledge, the first to point out that this may be going on for at least hundreds of journals were Moed and van Leeuwen as early as 1995 (and with more data again in 1996). One of the first accounts to show how a single journal accomplished this feat was Baylis et al. in 1999, with their example of the FASEB Journal managing to convince the ISI to remove its conference abstracts from the denominator, leading to a jump in its impact factor from 0.24 in 1988 to 18.3 in 1989. Another well-documented case is that of Current Biology, whose impact factor increased by 40% after its acquisition by Elsevier in 2001. To my knowledge, the first and so far only openly disclosed case of such negotiations was PLoS Medicine's editorial about their negotiations with Thomson Reuters in 2006, where the negotiation range spanned 2-11 (they settled for 8.4). Obviously, such direct evidence of negotiations is exceedingly rare, and publishers are usually quick to point out that they would never 'negotiate' with Thomson Reuters; they would merely ask them to 'correct' or 'adjust' the impact factors of their journals to make them more accurate. Given that Moed and van Leeuwen already found that most such corrections seemed to increase the impact factor, it appears that these corrections only take place if a publisher considers their IF too low, and only very rarely indeed if the IF may appear too high (and who would blame them?). Besides the old data from Moed and van Leeuwen, we have very little data as to how widespread this practice really is.
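
To see why the denominator is the natural target of such negotiations, it helps to spell out the standard two-year impact factor (a generic sketch of the ISI definition, not any particular journal's actual numbers):

\[ \mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}} \]

where C_y(y−k) is the number of citations received in year y by material the journal published in year y−k, and N_{y−k} is the number of items from year y−k that are classified as 'citable'. Because the numerator counts citations to everything in the journal while the denominator counts only the items classified as citable, reclassifying conference abstracts or editorial front matter as non-citable shrinks N without necessarily removing the corresponding citations from the numerator – which is why such 'corrections' tend to move the impact factor in only one direction.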

A recent analysis of 22 cell biology journals now provides additional data in line with Moed and van Leeuwen's initial suspicion that publishers may take advantage of the possibility of such 'corrections' on a rather widespread basis. If errors by Thomson Reuters' ISI happened randomly and were corrected in an unbiased fashion, then an independent analysis of the available citation data should show independently calculated impact factors scattering around the published ones, with both positive and negative deviations. If, however, corrections only ever occur in the direction that increases the impact factor of the corrected journal, then the published impact factors should be systematically higher than the independently calculated ones. The reason for such a bias would be articles missing from the denominator of the published impact factor. These 'missing' articles can nevertheless be found, as they have been published; they are just not counted in the denominator. Interestingly, this is exactly what Steve Royle found in his analysis:

[Figure: published vs. independently calculated impact factors (left) and missing articles per journal (right)]

On the left, you see that any deviation from the perfect correlation is always towards a larger impact factor, and on the right you can see that some journals show a massive number of missing articles.
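
The logic of this comparison can be captured in a few lines of code (a minimal sketch of the reasoning, not Steve Royle's actual analysis; the journal numbers below are made up for illustration):

```python
# Minimal sketch: compare an independently calculated impact factor with the
# published one and estimate how many articles would have to be missing from
# the published denominator to account for the difference.

def compare_if(citations: int, articles_counted: int, published_if: float):
    independent_if = citations / articles_counted      # IF from the raw counts
    implied_denominator = citations / published_if     # denominator implied by the published IF
    missing_articles = articles_counted - implied_denominator
    return independent_if, missing_articles

# Hypothetical journal: 10,000 citations, 3,000 citable articles counted independently,
# but a published impact factor of 4.0
indep_if, missing = compare_if(10_000, 3_000, 4.0)
print(f"independent IF: {indep_if:.2f}, articles missing from denominator: {missing:.0f}")
# -> independent IF: 3.33, articles missing from denominator: 500
```

If corrections were unbiased, such estimates should scatter around zero; the analysis above instead finds them piling up on one side.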

Clearly, none of this amounts to unambiguous evidence that publishers are increasing their editorial 'front matter' both to cite their own articles and to receive citations from outside, only to then tell Thomson Reuters to correct their records. None of this is proof that publishers routinely negotiate with the ISI to inflate their impact factors, let alone that publishers routinely and intentionally make the classification of their articles as citable or not difficult. There are numerous alternative explanations. However, personally, I find the two old Moed and van Leeuwen papers and this new analysis, together with the commonly acknowledged issue of paper classification by the ISI, just about enough to be suggestive – but I am probably biased.

Posted on January 8, 2016 at 15:41 83 Comments
Jan07

How much should a scholarly article cost the taxpayer?

In: science politics • Tags: infrastructure, publishing

tl;dr: It is a waste to spend more than the equivalent of US$100 in tax funds on a scholarly article.

Collectively, the world's public purse currently spends the equivalent of about US$10b every year on scholarly journal publishing. Dividing that by the roughly two million articles published annually, you arrive at an average cost per scholarly journal article of about US$5,000.
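
Spelled out, with the round figures used above:

\[ \frac{\text{US\$}10\ \text{billion per year}}{2 \times 10^{6}\ \text{articles per year}} \approx \text{US\$}5{,}000\ \text{per article} \]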

Inasmuch as these legacy articles are behind paywalls, the average taxpayer does not get to see what they pay for. It is even worse for academics: besides not being able to access all the relevant literature either, their cash-strapped public institutions are sorely missing the subscription funds, which could have modernized their digital infrastructure. Consequently, researchers at most public institutions are stuck with technology that is essentially from the 1990s, specifically with regard to the infrastructure taking care of their three main forms of output: text, data and code.

Another pernicious consequence of this state of affairs: institutions have been stuck with a pre-digital strategy for hiring and promoting their faculty, namely judging them by the venues of their articles. As the most prestigious journals publish, on average, the least reliable science, but the scientists who publish there are rewarded with the best positions (and are, in turn, training their students how to publish their unreliable work in these journals), science is now facing a replication crisis of epic proportions: most published research may well be false.

Thus, both the scientific community and the public have more than one reason to try and free some of the funds currently wasted on legacy publishing. Consequently, there are a few new players on the publishing market who offer their services for considerably less. Not surprisingly, in developing countries, where cash is even more of an issue, a publicly financed solution (SciELO) was developed already more than 15 years ago that publishes fully accessible articles at a cost of between US$70 and US$200 per article, depending on various technical details. In the following 15 years, problems have accumulated in the richer countries as well, prompting the emergence of new publishers there, too. These newer publishers/service providers – such as Scholastica, Ubiquity, RIO Journal, Science Open, F1000Research, PeerJ or Hindawi – quote ballpark prices from just under US$100 to under US$500 per article. Representatives of all of these publishers independently tell me that their costs per article range in the low hundreds, and Ubiquity, Hindawi and PeerJ are even on record with this price range. [After this post was published, Martin Eve of the Open Library of the Humanities also quoted roughly these costs for their enterprise. I have also been pointed to another article that sets about US$300 per article as an upper bound, in line with all the other sources.]


Now, as a welcome confirmation, yet another company, Standard Analytics, comes to similar costs in their recent analysis.

Specifically, they computed the ‘marginal’ costs of an article, which they define as only taking “into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations“. I understand this to mean that if an existing publisher wanted to start a new scholarly journal, these would be the additional costs they would have to recoup. The authors mention five main tasks to be covered by these costs:

1) submission

2) management of editorial workflow and peer review

3) typesetting

4) DOI registration

5) long-term preservation.

They calculate two versions of how these costs may accrue. One method is to outsource these services to existing vendors. The prices they calculate using different vendors range between US$69 and US$318, hitting exactly the ballpark all the other publishers have been quoting for some time now. Given that public institutions are bound to choose the lowest bidder, anything above the equivalent of around US$100 would probably be illegal. Let alone 5k.

However, as public institutions are not (yet?) in a position to competitively advertise their publishing needs, let's consider the side of the publisher: if you are a publisher with other journals and are shopping around for services to provide you with an open access journal, all you need to factor in is some marginal additional part-time editorial labor for your new journal and a few hundred dollars per article. Given that existing publishers charge, on average, around €2,000 per open access article, it is safe to say that, as in subscription publishing, scientists and the public are being had by publishers, as usual, even in the case of so-called 'gold' open access publishing. These numbers also show, as argued before, that just 'flipping' all our journals to open access is at best a short-term stop-gap measure. At worst, it would make the current situation even worse.

Be that as it may, I find Standard Analytics' second calculation to be even more interesting. This calculation actually conveys an insight that was entirely new, at least to me: if public institutions decided to run the five steps above in-house, i.e., as part of a modern scholarly infrastructure, per-article marginal costs would actually drop to below US$2. In other words, the number of articles ceases to be a monetary issue at all. In his critique of the Standard Analytics piece, Cameron Neylon indicated, with his usual competence and astuteness, that of course some of the main costs of scholarly communication aren't really marginal costs that can be captured on a per-article basis. What requires investment are, first and foremost, standards according to which scholarly content (text/audio/video: narrative, data and code) is archived and made available. The money we are currently wasting on subscriptions ought to be invested in an infrastructure where each institution has the choice of outsourcing vs. hiring expertise themselves. If the experience of the past 20 years of networked digitization is anything to go by, then we need to invest these US$10b/a in an infrastructure that keeps scholarly content under scholarly control and allows institutions the same decisions as they have in other parts of their infrastructure: hire plumbers, or get a company to show up. Hire hosting space at a provider, or put servers into computing centers. Or any combination thereof.

What we are stuck with today is nothing but an obscenely expensive anachronism that we need to dispense with.

By now, it has become quite obvious that we have nothing to lose, neither in terms of scholarly nor of monetary value, but everything to gain from taking these wasted subscription funds and investing them to bring public institutions into the 21st century. Every year we keep procrastinating, another US$10b goes down the drain, lost to academia forever. In the grand scheme of things, US$10b may seem like pocket change. For the public institutions spending it each year, it would constitute a windfall: given that the 2m articles we currently publish would not even cost US$4m, we would have in excess of US$9.996b to spend each year on an infrastructure serving only a few million users. As an added benefit, each institution would be back in charge of its own budget decisions – rather than having to negotiate with monopolistic publishers. Given the price of labor, hard- and software, this would easily buy us all the bells and whistles of modern digital technology, with plenty to spare.
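
For transparency, the arithmetic behind that windfall figure, using the marginal cost estimate of under US$2 per article from the previous section:

\[ 2 \times 10^{6}\ \text{articles/year} \times \text{US\$}2\,/\,\text{article} = \text{US\$}4 \times 10^{6}\ \text{per year}, \qquad \text{US\$}10 \times 10^{9} - \text{US\$}4 \times 10^{6} = \text{US\$}9.996 \times 10^{9}\ \text{per year left for infrastructure} \]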

Posted on January 7, 2016 at 13:42 90 Comments
Dec17

How free are academics, really?

In: science politics • Tags: academic freedom, DFG, funding, institutions, open access, publishing

In Germany, the constitution guarantees academic freedom in article 5 as a basic civil right. The main German funder, the German Research Foundation (DFG), routinely points to this article of the German constitution when someone suggests they should follow the lead of NIH, Wellcome et al. with regard to mandates requiring open access (OA) to publications arising from research activities they fund.

The same argument was recently made by Rick Anderson in his THE article entitled "Open Access and Academic Freedom". When it was pointed out, both in the comments on his article and on Twitter, that the widespread tradition of hiring/promoting/rewarding scientists for publishing in certain venues constitutes a much worse infringement, Mr. Anderson replied with a very formalistic argument: such selection by publication venue amounts to mere "professional standards", which, by definition, cannot impede academic freedom (even if they essentially force scientists to publish in certain venues), while only "official policies" actually can infringe on academic freedom (even if they are not actually enforced, as is the case for many OA mandates, and thus have little or no effect on the choice of publication venue). A potential infringement is thus considered more of a threat to academic freedom than actual infringements, as long as the actual infringements due to 'professional standards' are not explicitly written down somewhere and labeled 'policy'.

While one may take such a formalistic approach, I fail to see how such a position can be either valuable or convincing.

If everyone took that position, it would only mean that our institutions would make their policies less specific and call them "professional standards". Our institutions could then fire us at will without ever touching our academic freedom: they would just need to define professional standards loosely enough. Hence, there is no academic value in such a formalistic approach: an infringement of academic freedom is always an infringement, no matter what you call it. The important aspect of such infringements (which may be unavoidable) is not whether or not they are written down as explicit 'policy', but that we must have very good reasons for them, such as tangible benefits for science and/or society.

I also doubt this argument will be very convincing, as it is just too obvious that such a formalistic approach is too far from reality to be worth considering seriously. Imagine two scholars, both working in the same field, collecting the same data, making the same discoveries and solving the same problems. One of them feels forced to publish their work, against their own will, in a piecemeal fashion and in the venues implicitly prescribed by their field on an ongoing basis; the other decides to exercise their academic freedom and publish the exact same discoveries and solutions in one big text on their blog at the end of their career. From this example, it is clear that we already face a very tangible and real choice: either exercise your academic freedom or have a job in academia – the two are incompatible.

Today, it seems likely that current society won't accept the value of the "full academic freedom" of the AAUP that Rick Anderson referenced, and hence won't tolerate us exercising it. But this society had better provide some pretty darn good reasons for curtailing our 'full' civil rights. I can see how forcing us to share our work with the society that funded us would constitute such a reason. I cannot see how forcing us to publish, e.g., in venues with a track record of fraud and errors would constitute such a reason.

Posted on December 17, 2015 at 15:22 44 Comments
Dec04

How to write your grant proposal?

In: science politics • Tags: funding, grantsmanship, peer-review

Posting my reply to a review of our most recent grant proposal has sparked an online discussion, both on Twitter and on Drugmonkey's blog. The main direction the discussion took was what level of expertise to expect from the reviewers deciding on your grant proposal.

This, of course, is highly dependent on the procedure by which the funding agency chooses the reviewers. At the US-based NIH, as I understand it, reviewers are picked from a known panel; you just don't know which individuals on the panel will review your proposal. This makes it comparatively easy to target this audience when writing your proposal. The German funding agency we submitted our grant to picks any reviewer, world-wide (I think the NSF in the US is similar; at least I have reviewed a few grants for them). After the review, a separate panel of peers (which may, but commonly doesn't, include one of the reviewers) decides which grants out of the pool get funded, usually without reading the grants in much detail, if at all. In that case, it is impossible to have a clear picture of your audience.

My first grant/fellowship was funded in 2001. Before and since then, I have had plenty of rejections. I believe my overall funding rate over these almost 15 years is somewhere around 20±5%, which means I must have written just under 50 grant proposals in this time. Initially, when my proposals (and manuscripts) got rejected, it was suspiciously often with comments that revealed misunderstandings. Once, a grant reviewer even explicitly admitted that they didn't understand what it was that I proposed to do. I then started to simplify my proposals; in my desperation, I of course did what many online commenters propose: I added what I thought was very basic knowledge. My proposals became significantly more understandable, but also significantly longer. Imagine my disappointment when the feedback I received then was twofold: the reviewers felt insulted that I addressed them at such a basic level, and the funder told me my proposals were too long: "good proposals are succinct, maybe 10-15 pages total, and compelling".

So here’s the rule then: you need to write your proposal in a way such that your grandma can understand it, without the reviewers noticing that you are insulting their intelligence and with no more than 1-2 sentences per explanation.

Honestly, I find this quite far beyond my capabilities. Instead, I have since focused on the easier task of being succinct at the expense of explaining everything. For the last ~8 years I've assumed that the people reading my proposals are either experts in the model system(s) I use or in the problems I study, but not both. The implicit expectation is that the former don't need to understand every motivation behind each experiment (and won't require it, either) and the latter won't be too concerned with the technical details of a model system they might not be all that familiar with. Until this last proposal, this has worked to the extent that even for the ~80% of rejections I received, the reviewer comments revealed neither obvious incompetence nor substantial misunderstandings. However, given the system by which reviewers are selected, it is of course impossible to know whether this was due to my improved writing or due to the chosen reviewers. Moreover, the field has grown substantially and become much more popular in this time, so it may simply have been down to a larger pool of experts than a decade ago.

It is also important to keep in mind that with each submission, even of the same grant, different reviewers may be assigned. At the ERC, for instance, one of my proposals was rejected because the reviewers, while very enthusiastic about the idea, questioned the feasibility of the method. In the revision, the reviewers thought the method was too established to warrant funding, and the research question wasn't all that interesting, either.

There were two very helpful comments in the Twitter discussion that I will keep in mind for future proposals; both were from Peg AtKisson, a professional supporter of grant writers:

@brembs Disagree because diverse inputs tend to lead to better outcomes. Only experts reviewing in narrow area leads to in-group bias.

— M. S. AtKisson (@iGrrrl) December 3, 2015

I agree that minimizing in-group bias is a goal worth investing in. However, this goal comes at a cost (which is an investment, I'd argue): you can't have non-experts review and expect the author not to need more words for it. You also have to accept that if you promote non-expert review, you may annoy the experts with more verbose applications. If there are no explicit instructions, it is virtually impossible to know where on this trade-off one has to land.

@brembs "We have chosen X approach because… We did not chose Y because…" Show your thinking. @drugmonkeyblog

— M. S. AtKisson (@iGrrrl) December 3, 2015

The suggestion to also explicitly mention methods that you rejected because they are unsuitable is one worth pondering. If there are no word limits, this sounds very compelling, as it "shows your thinking", which is always helpful. It is also quite difficult to decide which ones to include, as it, again, involves the risk of insulting the reviewers (i.e., "only an idiot would have thought to use approach Y!"). Again, the instructions from the funder and experience will have to suffice, but I'll definitely spend more time thinking about alternative, less suitable approaches next time.

Back to our particular case, the main issues can be boiled down to three criticisms. Since all of them concern the technical details of our experimental techniques, it is fair to assume that the person considers themselves competent at least on the technical/model system side of the proposal.

The first issue concerns a common laboratory technique which I have taught to undergraduate classes, which is widely used not only in our field but in all biological/biomedical research generally, for which Fire and Mello received the Nobel prize in 2006 and where all the technical details required for this grant are covered on the Wikipedia page (of course it’s also in all textbooks). Nothing beyond this basic understanding is required for our grant. The criticisms raised only make sense if the reviewer is not aware of the sequestration/degradation distinction.

The second concerns an even older technique which is also used in all biological/biomedical research, for which the Nobel was handed out already in 1984, the technical info is also on the Wikipedia page and it’s of course part of every undergraduate biology/medicine education I know of. Moreover, this technology is currently debated in the most visible places in the wake of the biomedical replicability crisis. Nothing beyond this most basic understanding is required for our proposal. The criticisms of the reviewer only make sense if the reviewer is not aware of the differences between monoclonal and polyclonal antibodies.

From where I sit, this kind of basic knowledge is what can be expected from a reviewer who picks these two methods (out of the four main grant objectives) as their target for criticism.

The third issue can be seen as a reviewer classic: the reviewer chided us for proposing a method we didn’t propose and suggested we instead use a method we already had prominently described in our proposal, even with a dedicated figure to make unambiguously clear we weren’t proposing the technique the reviewer rightfully rejected, but the one they recommended. Here, everything the reviewer wrote was correct, but so was our proposal: it paralleled what they wrote.

In summary: of our four objectives in the grant, this reviewer picked three for criticism. Two of the three criticisms lack undergraduate knowledge of very common, widespread techniques. The third issue is nonexistent, as the grant already describes, prominently, what the reviewer suggests. I will take the online suggestions and incorporate them into the revised version of the grant, but there really isn’t anything helpful at all one can take from this particular review. For me personally, at this time, this is an exception, but it chimes with what a lot of colleagues, on both sides of the pond, complain about.

Posted on December 4, 2015 at 14:48 21 Comments
Dec01

Why cutting down on peer-review will improve it

In: science politics • Tags: grants, peer-review

Update, Dec. 4, 2015: With the online discussion moving towards grantsmanship and the decision of what level of expertise to expect from a reviewer, I have written down some thoughts on this angle of the discussion.

With more and more evaluations, assessments and quality control, the peer-review burden has skyrocketed in recent years. Depending on field and tradition, we write reviews on manuscripts, grant proposals, Bachelor's, Master's and PhD theses, students, professors, departments or entire universities. Top reviewers at Publons clock in at between 0.5 and 2 reviews for every day of the year. It is conceivable that at such a frequency, reviews cannot be very thorough, or the material to be reviewed is comparatively less complex or deep. But already at a much lower frequency, the time constraints imposed by increasing reviewer load make thorough reviews of complex material difficult. Hyper-competitive funding situations add incentives to summarily dismiss work perceived as infringing on one's own research. It is hence not surprising that such conditions bring out the worst in otherwise well-meaning scientists.

Take for instance a recent grant proposal of mine, based on our recent paper on FoxP in operant self-learning. While one of the reviewers provided reasonable feedback, the other raised issues that can be shown to be either demonstrably baseless or already addressed in the application. I will try to show below how this reviewer, who obviously has some vague knowledge of the field in general but not nearly enough expertise to review our proposal, should have either declined to review or at least invested some time reading the relevant literature, as well as the proposal, in more depth.

The reviewer writes (full review text posted on thinklab):

In flies, the only ortholog [sic] FoxP has been recently analyzed in several studies. In a report by the Miesenböck lab published last year in Science, a transposon induced mutant affecting one of the two (or three) isoforms of the FoxP gene was used to show a requirement of FoxP in decision making processes.

Had Reviewer #1 been an expert in the field, they would have recognized that this publication is missing several crucial control experiments, both genetic and behavioral, that would be needed to draw such firm conclusions about the role of FoxP. For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In principle, this proposal addresses important and highly relevant questions but unfortunately there are many (!) problems with this application which make it rather weak and in no case fundable.

Unfortunately, there are many problems with this review which make it rather weak and in no case worthy of consideration for a revised version of the proposal.

The preliminary work mentioned in this proposal is odd. Basically we learn that there are many RNAi lines available in the stock centers, which have a phenotype when used to silence FoxP expression but strangely do not affect FoxP expression. What does this mean?

Had Reviewer #1 been an expert in the field, they would have been aware of the RNAi issues concerning template mismatch and the selection of targeted mRNA for sequestration and degradation, respectively. For the non-expert, we explain this issue with further references in our own FoxP paper and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

I have seen no arguments why the generation of additional RNAi strains is now all the sudden expected to yield a breakthrough result.

Had Reviewer #1 been an expert in the field, they would be aware that the lines we tested were generated as part of large-scale efforts to manipulate every gene in the Drosophila genome. As such, the constructs were generated against the reference genome, which of course does not precisely match every potential strain used in every laboratory, as any expert in the field is well aware (explained in more detail in this blog post). Consequently, directing RNAi constructs at the specific strain used for genetic manipulation, and subsequently crossing all driver lines into this genetic background (as is the well-established technique in the collaborating Schneuwly laboratory), reliably yields constructs that lead to mRNA degradation rather than sequestration. This discussion leaves out the known tendency of the available strains for off-target effects, which compounds their problems. Dedicated RNAi constructs, such as the ones I propose to use, can be tested against off-targets beforehand.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Quite similar we learn in the preliminary result section that many attempts to generate specific antibodies failed and yet the generation of mAbs is proposed. Again, it is unclear what we will learn and alternative strategies are not even discussed.

Had Reviewer #1 been an expert in the field, they would understand the differences between polyclonal and monoclonal antibodies, in particular as antibody technology is currently the subject of heated debate in rather prominent locations.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

The authors could consider the generation of HA-tagged Fosmids /I minigenes or could use homologous recombination to manipulate the gene locus accordingly.

Had Reviewer #1 not overlooked our Fig. 5, as well as our citations of Vilain et al. and Zhang et al., they would have noticed that this type of genome editing is precisely what we propose to do.

One page 2 of the application it is stated that “It is a technical hurdle for further mechanistic study of operant self-learning that the currently available FoxP mutant lines are insertion lines, which only affect the expression level of some of the isoforms. ” This is not true! and the applicant himself states on page 11: “However, as the Mi{MIC} insertion is contained within a coding exon which is spliced into all FoxP isoforms, it is likely that this insertion alone already leads to a null mutation at the FoxP locus.” Yes, by all means the insertion of a large transposon into the open reading frame of a gene causes a mutation!!!! Why this allele, which is available in the stock centers, has not yet been analyzed so far remains mysterious.

Had Reviewer #1 actually engaged with our proposal, this would no longer be a mystery to them: the analysis of this strain is part of our proposal. If it had been possible to analyze this strain without this proposal, the proposal would not have been written. Had Reviewer #1 ever written a research proposal of their own, they would understand that proposals are written to fund experiments that have not yet been performed. Hence, Reviewer #1 is indeed part of the answer: without their unqualified dismissal of our proposal, we would already be closer to analyzing this strain.

Moreover, reading the entire third section of this application “genome editing using MiMIC” reveals that the applicant has not understood the rational behind the MiMIC technique at all. Venken et al clearly published that “Insertions (of the Minos-based MiMIC transposon) in coding introns can be exchanged with protein-tag cassettes to create fusion proteins to follow protein expression and perform biochemical experiments.” Importantly, insertions have to be in an intron!!!! The entire paragraph demonstrates the careless generation of this application. “we will characterize the expression of eGFP in the MiMIC transposen”. Again, a short look into the Venken et aI., paper demonstrates the uselessness of this approach.

Reading this entire paragraph reveals that Reviewer #1 has neither noticed Fig. 5 in the proposal, nor understood that we do not follow Venken et al. in our proposal (which is the reason we do not even cite Venken et al.), but Vilain et al. and Zhang et al. Precisely because the methods explained in Venken et al. do not work in our case, we will follow Vilain et al. and Zhang et al., where this is not an issue. Venken et al. are not cited in the proposal, as we expect the reviewers to be expert peers. Discussing and explaining such issues at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Moreover, just a few weeks ago, at the RMCE session of a meeting, I attended a presentation by the senior author of Zhang et al., Frank Schnorrer, in which he essentially explained the method I proposed (see Fig. 5 in the proposal). He later confirmed that there are no problems with using their RMCE approach for the specific case of the FoxP gene with the insertion in an exon. Hence, Dr. Schnorrer's presentation, as well as my later discussion with him, confirmed the suspicion that Reviewer #1 not only lacks expertise in the current methods, but also failed to notice the alternative methods by Zhang et al. and Vilain et al., even though we cite these publications and provide an entire figure detailing the approach, on top of the citation and explanations in the text.

Finally, had Reviewer #1 been an expert in the field, they would be aware that the laboratory of Hugo Bellen is currently generating intron-based MiMIC lines for all those lines where the MiMIC cassette happened to insert elsewhere. Our statement in the proposal comes to mind in this respect: “In fact, by the time this project will commence, there will likely be a genome editing method published, which is even more effective and efficient than the ones cited above. In this case, we will of course use that method.”

The application requests two students. Although the entire application is far from being fundable, this request adds the dot on the i. The student is planned for the characterization of lines that are not available, characterization of antibodies that likely will not be on hand in the next two years and so on. In summary, this is a not well prepared application, full of mistakes and lacking some necessary preliminary data.

Had Reviewer #1 been an expert in the field, they would know that performing the kind of behavioral experiments we propose requires training and practice – time which is not required for applying established transgenic techniques. Thus, there is already a time lag between generating lines and testing them, inherent to the more time-intensive training required for behavioral experiments. This time lag can be accommodated, and even extended, by hiring one student first and the second somewhat later.

In addition, as emphasized by Reviewer #1 themselves (and outlined in our proposal), there are still lines available that have not yet been thoroughly characterized, such that any remaining gap can easily be filled by characterizing these strains. If any of the available strains show useful characteristics, the corresponding new lines do not have to be generated. Moreover, many of the available transgenic lines also need to be characterized on the anatomical level (also outlined in the proposal).

Finally, by the time this project can commence, given the projects in the other groups working on FoxP, there will likely be yet new lines, generated elsewhere, that also warrant behavioral and/or anatomical characterization. Thus, the situation remains as described in the proposal: two students with complementary interests and training are required for our proposal and a small initial lag between the students is perfectly sufficient to accommodate both project areas.

In this way, one can expect at least one year in which the first student can start generating new lines at a time when the second student either has not started yet, is training or is testing lines that already exist.

These issues are only briefly discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In summary, I could not find any issue raised in this review that is not either generally known in the field, covered in the literature, or addressed in our proposal. Hence, I regret to conclude that there is not a single issue raised by Reviewer #1 that I would be able to address in a revised proposal. The proposal may not be without its flaws, and the other reviewer was able to contribute some valuable suggestions, so I've put it out on thinklab for everyone to compare it to the review and contribute meaningful and helpful criticism. Unqualified dismissals of the type shown above only delay science unnecessarily and may derail the careers of the students who had hoped to be working on this project.

If we all had less material to review, perhaps Reviewer #1 above would also have taken the time to read the literature, as well as the proposal, before writing their review. But perhaps I have it all wrong and Reviewer #1 was right to dismiss the proposal the way they did? If so, you are now in a position to let me know, as both the proposal and the review are open and comments are invited. Perhaps making all peer-review this open can help reduce the incidence of such reviews, even if the amount of reviewing cannot be significantly reduced?

Posted on December 1, 2015 at 18:13 56 Comments
Nov25

Data Diving for Genomics Treasure

In: own data • Tags: Drosophila, evolution, open data, transposons

This is a post written jointly by Nelson Lau from Brandeis and me, Björn Brembs. In contrast to Nelson’s guest post, which focused on the open data aspect of our collaboration, this one describes the science behind our paper and a second one by Nelson, which just appeared in PLoS Genetics.

Laboratories around the world are generating a tsunami of deep-sequencing data from nearly every organism, past and present. These sequencing data range from entire genomes to segments of chromatin to RNA transcripts. To explore this ocean of "BIG DATA", one has to navigate the portals of the National Center for Biotechnology Information's (NCBI's) two signature repositories, the Sequence Read Archive (SRA) and the Gene Expression Omnibus (GEO). With the right bioinformatics tools, scientists can explore and discover freely available data that can lead to valuable new biological insights.

Nelson Lau's lab in the Department of Biology at Brandeis has recently completed two such successful voyages into the realm of genomics data mining, with studies published in the Open Access journals Nucleic Acids Research (NAR) and PLoS Genetics (PLoSGen). Publication of both studies was supported by the Brandeis University LTS Open Access Fund for Scholarly Communications.

In this scientific journey, we made use of important collaborations with labs from across the globe. The NAR study used openly shared genomics data from the United Kingdom (Casey Bergman's lab) and Germany (Björn Brembs' lab). The PLoSGen study relied on contributions from Austria (Daniel Gerlach), Australia (Benjamin Kile's lab), Nebraska (Mayumi Naramura's lab), and next-door neighbors (Bonnie Berger's lab at MIT).

In the NAR study, Lau lab postdoctoral fellow Reazur Rahman and the Lau team developed a program called TIDAL (Transposon Insertion and Depletion AnaLyzer) that scoured over 360 fly genome sequences publicly accessible through the SRA portal. We discovered that transposons, also known as jumping genetic parasites, form different genome patterns in every fly strain. There are many thousands of transposons throughout the fly genome. The vast majority of these transposons share a viral origin, being retrotransposons. Even though most of these transposons are located in the intergenic and heterochromatic regions of the fly genome, with on average more than two transposons per fly gene it is a straightforward assumption that some of them are bound to influence gene expression in one way or another.

We discovered that common fly strains with the same name but living in different laboratories turn out to have very different patterns of transposons. This is surprising because many geneticists have assumed that the so-called Canton-S or Oregon-R strains are all similar and have thus used them as a common wild-type reference. In particular, we were able to differentiate two strains which had been separated from each other only very recently, indicating rapid evolution of these transposon landscapes.

Our results lend some mechanistic insight to behavioral data from the Brembs lab, which had shown that these sub-strains of the standard Canton-S reference stock can behave very differently in some experiments. We hypothesize that these differences in transposon landscapes and the behavioral differences may reflect unanticipated changes in fly stocks, which are typically assumed to remain stable under laboratory culture conditions. If even recently separated fly stocks can be differentiated both on the genetic and on the behavioral level, perhaps this is an indication that we are beginning to discover mechanisms rendering animals much more dynamic and malleable than we usually give them credit for. Such insights should not only convince geneticists to think twice and be extra careful with their common reference stocks, they may also provide food for thought for evolutionary biologists. In addition, we hope to utilize the TIDAL tool to study how expanding transposon patterns might alter genomes in aging fly brains, which may then explain human brain changes during aging.

[Screenshot of the TIDAL-Fly website]

Given the number of potentially harmful mobile genetic elements in a genome, it is not surprising that counter-measures have evolved to limit the detrimental effect of these transposons. So-called Piwi-interacting RNAs (piRNAs) are a class of highly conserved, small, noncoding RNAs associated with repressing transposon gene expression, in particular in the germline. In the PLoSGen study, visiting scientist Gung-wei Chirn and the Lau lab developed a program that discovered expression patterns of piRNA genes in a group of mammalian datasets extracted from the GEO portal. Coupling these datasets with other small RNA datasets created in the Lau lab, the team discovered a remarkable diversity of these RNA loci for each species, suggesting a high rate of diversification of piRNA expression over time. The rate of diversification in piRNA expression patterns appeared to be much faster than that of testis-specific gene expression patterns amongst different animals.

It has been known for a while that there is an ongoing evolutionary arms race between transposon intruders and the anti-transposon police, the piRNAs. In mammals, however, the piRNAs appear to diversify according to two different strategies. Most of the piRNA genomic loci discovered in humans were quite distinct from those in other primates like the macaque monkey or the marmoset and seemed to evolve just as quickly as, e.g. Drosophila piRNA genes. On the other hand, a separate, smaller set of these genomic loci have conserved their piRNA expression patterns, extending across humans, through primates, to rodents, and even to dogs, horses and pigs.

These conserved piRNA expression patterns span nearly 100 million years of evolution, suggesting an important function either in regulating a transposon that is common among most if not all eutherian mammals, or in regulating the expression of another, conserved gene.

To find the answer, the Lau lab studied the target sequences of different conserved piRNAs. One of these targets was indeed a gene conserved in eutherian mammals, albeit not a transposon but an endogenous gene. In fact, most of the conserved piRNA genes were depleted of transposon-related sequences. A second approach to test the function of conserved piRNAs was to analyze two existing mouse mutations in two piRNA loci. The results showed that the mutations indeed affected the generation of the piRNAs, and that these mice were less fertile because their sperm count was reduced. Future work will explore how infertility diseases may be linked to these specific piRNA loci. It also remains to be understood how a gene family that originally evolved as transposon police could turn into a mechanism regulating endogenous genes.

In summary, this work is an example of how open data enables and facilitates novel insights into fundamental biological processes. In this case, these insights have taught us that genomes are much more dynamic and diverse than we have previously thought, with repercussions not only for the utility any single reference genome can have for research, but also for the role of sequencing individual genomes in personalized medicine.


Rahman R, Chirn GW, Kanodia A, Sytnikova YA, Brembs B, Bergman CM, & Lau NC (2015). Unique transposon landscapes are pervasive across Drosophila melanogaster genomes. Nucleic Acids Research. PMID: 26578579, DOI: 10.1093/nar/gkv1193
Chirn G, Rahman R, Sytnikova Y, Matts J, Zeng M, Gerlach D, Yu M, Berger B, Naramura M, Kile B, & Lau N (2015). Conserved piRNA expression from a distinct set of piRNA cluster loci in eutherian mammals. PLOS Genetics, 11(11). DOI: 10.1371/journal.pgen.1005652

Posted on November 25, 2015 at 10:41 8 Comments
Nov19

Guest post: Why our Open Data project worked

In: science politics • Tags: Drosophila, open data, open science

Why our Open Data project worked,
(and how Decorum can allay our fears of Open Data).

I am honored to write a guest post on Björn's blog and excited about the interest in our work sparked by Björn's response to Dorothy Bishop's first post. As corresponding author on our paper, I will provide more context on our successful Open Data experience with Björn's and Casey's labs. I will also comment on why authorship is an important component of the decorum that our scientific society needs to establish to make the Open Data idea work (an issue raised on Dorothy Bishop's post).

It was my idea to credit Björn and Casey with authorship after Casey explained to me that they had not yet completed their own analyses of these genomes. Casey suggested we respect the decorum previously set by genome sequencing centers providing early release of their sequencing data: the genome centers reserve the courtesy of being the first to publish on the data. Trained as a genomicist, I was aware of this decorum, hence my offer to collaborate with B&C as authors. I viewed their data as being as valuable as a precious antibody or mutant organism, which, if not yet published, is a major contribution that the original creators should receive credit for providing.

Open Data Sharing worked because an honor code existed between all of us scientists. It is not because I don’t yet have tenure that I chose to respect B&C’s trust in the Open Data idea. I could easily have been an A**hole and published our analyses without crediting B&C with authorship: I had already downloaded the data, and our work was well underway. In addition, Björn offered to be acknowledged only, without authorship. However, I believe that scenario would have added fodder to the fear of Open Data Sharing, and good will, such as authorship, is what we sorely need in our hyper-competitive enterprise. Finally, I believe this good will encouraged B&C to provide us with further crucial insights that greatly improved our paper.

Our Open Data effort is also a counter-example to the fear that Open Data will expose our mistakes. Our study examined many other Drosophila genomes sequenced by other labs besides B&C’s data. We shared our findings with these other labs before writing our manuscript, enabling one lab to tweak their study in a better direction. This lab thanked us personally for our keen eye and noted that our help with re-examining data repositories, an often thankless effort, turned out to be critical for them. Thus, the “win-win” of Open Data Sharing can be truly far-reaching.

That being said, the Open Data Sharing movement needs to develop a decorum, akin to the laws and cultural values that provide decorum to make Capitalism and an Open Society work. In the absence of such decorum, even a good thing like Capitalism can be perverted (e.g. worker abuse, cartels, insider trading, Donald Trump).

With Open Data, the lack of decorum can lead to misunderstanding and animosity in our scientific society. Take the Twitter firestorm around the YG-MS-ENCODE controversy: a young scientist could see this as another cautionary tale against Open Data Sharing.

I am not aware that our scientific society has yet established a decorum of best practices for Open Data, and whether Twitter or private email is the best way to communicate early Open Data analyses is a debate worth having. With a decorum for Open Data Sharing in place, I believe we can reduce the antagonism and paranoia that shroud our current high-stakes scientific climate like greenhouse gases shroud our planet. Unchecked, we will doom ourselves and our fields of study.

In closing, our experience shows that Open Data Sharing is a tremendous concept that all scientists should embrace and promote against the naysayers. I am looking forward to more Open Data-inspired research coming from my lab in the future.

Nelson Lau, Ph.D.

nlau@brandeis.edu
Assistant Professor – Biology
Brandeis University
415 South St, MS029
Science Receiving
Waltham MA 02454, USA
Ph: 781-736-2445
https://www.bio.brandeis.edu/laulab

Posted on November 19, 2015 at 10:53 15 Comments
Nov16

Don’t be afraid of open data

In: science politics • Tags: Drosophila, genomics, open data

This is a response to Dorothy Bishop’s post “Who’s afraid of open data?“.

After we had published a paper showing that Drosophila strains referred to by the same name in the literature (Canton S), but obtained from different laboratories, behaved completely differently in a particular behavioral experiment, Casey Bergman from Manchester contacted me and asked whether we shouldn’t sequence the genomes of these five fly strains to find out how they differ. So I behaviorally tested each of the strains again, extracted the DNA from the 100 individuals I had just tested and sent the material to him. I also published the behavioral data immediately on our GitHub project page.

Casey then sequenced the strains and made the sequences available as well. A few weeks later, both Casey and I were contacted by Nelson Lau at Brandeis, who showed us his bioinformatics analyses of our genome data. Importantly, his analyses weren’t even close to what we had planned. On the contrary, he had looked at something I (not being a bioinformatician) would have considered orthogonal (Casey may disagree). So there we had a large chunk of work we would never have done, on data we hadn’t even started analyzing yet. I was so thrilled! I learned so much from Nelson’s work; this was fantastic! Nelson even asked us to be co-authors, to which I quickly protested, suggesting that, if anything, I might be mentioned in the acknowledgments for “technical assistance” – after all, I had only extracted the DNA.

However, after some back-and-forth, he persuaded me with the argument that he wanted to have us as co-authors to set an example. He wanted to show everyone that sharing data is something that can bring you direct rewards in the form of publications. He wanted us to be co-authors as a reward for posting our data and as an incentive for others to let go of their fears and also post their data online. I’m still not quite sure whether this fits the COPE guidelines to the letter, but for now I’m willing to take the risk and see what happens.

Nelson is on the tenure clock, so the position of each of his papers in the journal hierarchy matters. The work is now online at Nucleic Acids Research and both Casey and I are co-authors. The paper was published before Casey had even gotten around to starting his own analyses of our data. This is how science ought to proceed! Now we just need ways to credit such re-use of research data in a currency that’s actually worth something and doesn’t entail making people ‘authors’ on publications to which they’ve had little intellectual input. A modern infrastructure would take care of that…

Until we have such an infrastructure, I hope this story will make others share their data and code as well.

Posted on November 16, 2015 at 12:31 94 Comments
Nov12

Chance in animate nature, day 3

In: science • Tags: chance, free will, nonlinearity

On our final day (day 1, day 2), I was only able to hear the talk by Boris Kotchoubey (author of “Why are you free?”), as I had to leave early to catch my flight. He made a great effort to slowly introduce us to nonlinear dynamics and the consequences it has for the predictive power of science in general.

Applied to human movement in particular, he showed that nervous systems take advantage of the biophysics of bodily motion and only add the component of movement that biophysics (think of your leg swinging while you walk) doesn’t already take care of. This is an important and all too often forgotten insight that I recognize from the work of Hillel Chiel’s laboratory on Aplysia biting behavior. He described work studying hammer blows, in which the trajectories of the individual arm joints did not seem to follow any common rules – the only commonality that could be found between individual hammer blows was the trajectory of the hammer’s head. This is reminiscent of the distinction between world- and self-learning in flies, where the animals can use any behavior very flexibly to accomplish an external goal, until they have performed the behavior often enough for it to become habitual, at which point this flexibility is (partially?) lost.

Halfway through the talk, he arrived at the uncontrolled manifold hypothesis, according to which the nervous system isn’t trying to eliminate noise, but to use it to its advantage for movement control. Not entirely unexpectedly, he went from this to chemotaxis in Escherichia coli as an example of a process that also takes advantage of chance.
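To make the E. coli example a bit more concrete, here is a minimal, purely illustrative Python sketch of run-and-tumble chemotaxis (not from the talk; all parameters are invented): the cell runs in a straight line, tumbles into a random new direction, and tumbles less often when the attractant concentration is increasing – directed movement emerging entirely from biased chance.

import math, random

# Run-and-tumble sketch: the attractant concentration increases along x,
# and the cell climbs the gradient purely by modulating how often it
# tumbles. All numbers are illustrative.

def concentration(x, y):
    return x  # linear attractant gradient along x

x = y = 0.0
angle = random.uniform(0, 2 * math.pi)
last_c = concentration(x, y)

for _ in range(10000):
    x += math.cos(angle)        # run one step in the current direction
    y += math.sin(angle)
    c = concentration(x, y)
    # tumble rarely while things improve, often while they don't
    p_tumble = 0.1 if c > last_c else 0.5
    if random.random() < p_tumble:
        angle = random.uniform(0, 2 * math.pi)
    last_c = c

print("net displacement along the gradient:", round(x, 1))

On average, the simulated cell ends up far along the positive x axis, even though every individual turn is random – chance is not eliminated but exploited.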

He differentiated between two kinds of unpredictable systems: a) highly complex, incomputable systems and b) unique, unrepeatable systems. The difference between these two kinds breaks down as soon as the uncertainty principle is taken to be an actual property of the universe, one that poses absolute, non-circumventable limits on the knowledge anyone can potentially have about these systems.

Posted on November 12, 2015 at 10:22 4 Comments