bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Why cutting down on peer-review will improve it

In: science politics • Tags: grants, peer-review

Update, Dec. 4, 2015: With the online discussion moving towards grantsmanship and the decision of what level of expertise to expect from a reviewer, I have written down some thoughts on this angle of the discussion.

With more and more evaluations, assessments and quality control, the peer-review burden has skyrocketed in recent years. Depending on field and tradition, we write reviews on manuscripts, grant proposals, Bachelor's, Master's and PhD theses, students, professors, departments or entire universities. Top reviewers at Publons clock in at between 0.5 and 2 reviews for every day of the year. It is conceivable that at such a frequency, reviews cannot be very thorough, or the material to be reviewed must be comparatively less complex or deep. But already at a much lower frequency, the time constraints imposed by increasing reviewer load make thorough reviews of complex material difficult. Hyper-competitive funding situations add incentives to summarily dismiss work perceived as infringing on one's own research. It is hence not surprising that such conditions bring out the worst in otherwise well-meaning scientists.

Take for instance a recent grant proposal of mine, based on our recent paper on FoxP in operant self-learning. While one of the reviewers provided reasonable feedback, the other raised issues that are either demonstrably baseless or already addressed in the application. I will try to show below how this reviewer, who obviously has some vague knowledge of the field in general but not nearly enough expertise to review our proposal, should either have declined to review or at least have invested some time reading the relevant literature as well as the proposal in more depth.

The reviewer writes (full review text posted on thinklab):

In flies, the only ortholog [sic] FoxP has been recently analyzed in several studies. In a report by the Miesenböck lab published last year in Science, a transposon induced mutant affecting one of the two (or three) isoforms of the FoxP gene was used to show a requirement of FoxP in decision making processes.

Had Reviewer #1 been an expert in the field, they would have recognized that this publication is missing several crucial control experiments, both genetic and behavioral, that would be required to draw such firm conclusions about the role of FoxP. For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In principle, this proposal addresses important and highly relevant questions but unfortunately there are many (!) problems with this application which make it rather weak and in no case fundable.

Unfortunately, there are many problems with this review which make it rather weak and in no case worthy of consideration for a revised version of the proposal.

The preliminary work mentioned in this proposal is odd. Basically we learn that there are many RNAi lines available in the stock centers, which have a phenotype when used to silence FoxP expression but strangely do not affect FoxP expression. What does this mean?

Had Reviewer #1 been an expert in the field, they would have been aware of the RNAi issues concerning template mismatch and the selection of targeted mRNA for sequestration and degradation, respectively. For the non-expert, we explain this issue with further references in our own FoxP paper and in more detail in a related blog post.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

I have seen no arguments why the generation of additional RNAi strains is now all the sudden expected to yield a breakthrough result.

Had Reviewer #1 been an expert in the field, they would be aware that the lines we tested were generated as part of large-scale efforts to manipulate every gene in the Drosophila genome. As such, the constructs were generated against the reference genome, which of course does not precisely match every strain used in every laboratory, as any expert in the field is well aware (explained in more detail in this blog post). Consequently, directing RNAi constructs at the specific strain used for genetic manipulation and subsequently crossing all driver lines into this genetic background (the well-established technique in the collaborating Schneuwly laboratory) reliably yields constructs that lead to mRNA degradation rather than sequestration. This discussion leaves out the known tendency of the available strains for off-target effects, which compounds their problems. Dedicated RNAi constructs, such as the ones I propose to use, can be tested against off-targets beforehand.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Quite similar we learn in the preliminary result section that many attempts to generate specific antibodies failed and yet the generation of mAbs is proposed. Again, it is unclear what we will learn and alternative strategies are not even discussed.

Had Reviewer #1 been an expert in the field, they would understand the differences between polyclonal and monoclonal antibodies, particularly as antibody technology is currently hotly debated in rather prominent venues.

These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

The authors could consider the generation of HA-tagged Fosmids /I minigenes or could use homologous recombination to manipulate the gene locus accordingly.

Had Reviewer #1 not overlooked our Fig. 5, as well as our citations of Vilain et al. and Zhang et al., they would have noticed that this type of genome editing is precisely what we propose to do.

One page 2 of the application it is stated that “It is a technical hurdle for further mechanistic study of operant self-learning that the currently available FoxP mutant lines are insertion lines, which only affect the expression level of some of the isoforms. ” This is not true! and the applicant himself states on page 11: “However, as the Mi{MIC} insertion is contained within a coding exon which is spliced into all FoxP isoforms, it is likely that this insertion alone already leads to a null mutation at the FoxP locus.” Yes, by all means the insertion of a large transposon into the open reading frame of a gene causes a mutation!!!! Why this allele, which is available in the stock centers, has not yet been analyzed so far remains mysterious.

Had Reviewer #1 actually engaged with our proposal, this would remain a mystery to them no longer: the analysis of this strain is part of our proposal. If it had been possible to analyze this strain without this proposal, the proposal would not have been written. Had Reviewer #1 ever written a research proposal of their own, they would understand that proposals are written to fund experiments that have not yet been performed. Hence, Reviewer #1 is indeed part of the answer: without their unqualified dismissal of our proposal, we would already be closer to analyzing this strain.

Moreover, reading the entire third section of this application “genome editing using MiMIC” reveals that the applicant has not understood the rational behind the MiMIC technique at all. Venken et al clearly published that “Insertions (of the Minos-based MiMIC transposon) in coding introns can be exchanged with protein-tag cassettes to create fusion proteins to follow protein expression and perform biochemical experiments.” Importantly, insertions have to be in an intron!!!! The entire paragraph demonstrates the careless generation of this application. “we will characterize the expression of eGFP in the MiMIC transposen”. Again, a short look into the Venken et aI., paper demonstrates the uselessness of this approach.

Reading this entire paragraph reveals that Reviewer #1 has neither noticed Fig. 5 in the proposal, nor understood that we do not follow Venken et al. in our proposal (which is the reason we do not even cite Venken et al.), but Vilain et al. and Zhang et al. Precisely because the methods explained in Venken et al. do not work in our case, we will follow Vilain et al. and Zhang et al., where this is not an issue. Venken et al. are not cited in the proposal, as we expect the reviewers to be expert peers. Discussing and explaining such issues at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

Moreover, just a few weeks ago, at the RMCE session of a meeting, I attended a presentation by the senior author of Zhang et al., Frank Schnorrer, in which he explained essentially the method I proposed (see Fig. 5 in the proposal). He later confirmed that there are no problems with using their RMCE approach for the specific case of the FoxP gene with the insertion in an exon. Hence, Dr. Schnorrer's presentation as well as my later discussion with him confirmed the suspicion that Reviewer #1 not only lacks expertise in the current methods, but also failed to notice the alternative methods by Zhang et al. and Vilain et al., even though we cite these publications and provide an entire figure detailing the approach on top of the citation and explanations in the text.

Finally, had Reviewer #1 been an expert in the field, they would be aware that the laboratory of Hugo Bellen is currently generating intron-based MiMIC lines for all those lines where the MiMIC cassette happened to insert elsewhere. Our statement in the proposal comes to mind in this respect: “In fact, by the time this project will commence, there will likely be a genome editing method published, which is even more effective and efficient than the ones cited above. In this case, we will of course use that method.”

The application requests two students. Although the entire application is far from being fundable, this request adds the dot on the i. The student is planned for the characterization of lines that are not available, characterization of antibodies that likely will not be on hand in the next two years and so on. In summary, this is a not well prepared application, full of mistakes and lacking some necessary preliminary data.

Had Reviewer #1 been an expert in the field, they would know that performing the kind of behavioral experiments we propose requires training and practice – time which is not required for applying established transgenic techniques. Thus, there is an inherent time lag between generating lines and testing them, owing to the more time-intensive training required for behavioral experiments. This lag can be accommodated, and even extended, by hiring one student first and the second somewhat later.

In addition, as emphasized by Reviewer #1 themselves (and outlined in our proposal), there are still available lines that have not yet been thoroughly characterized, such that any remaining lag can easily be filled by characterizing these strains. If any of the available strains show useful characteristics, the corresponding new lines do not have to be generated at all. Moreover, many of the available transgenic lines also need to be characterized on the anatomical level (also outlined in the proposal).

Finally, by the time this project can commence, given the projects in the other groups working on FoxP, there will likely be yet more lines, generated elsewhere, that also warrant behavioral and/or anatomical characterization. Thus, the situation remains as described in the proposal: two students with complementary interests and training are required, and a small initial lag between the students is perfectly sufficient to accommodate both project areas.

In this way, one can expect at least one year in which the first student can start generating new lines at a time when the second student either has not started yet, is training or is testing lines that already exist.

These issues are only briefly discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

In summary, I could not find any issue raised in this review that is not either generally known in the field, covered in the literature, or addressed in our proposal. Hence, I regret to conclude that there is not a single issue raised by Reviewer #1 that I would be able to address in a revised proposal. The proposal may not be without its flaws, and the other reviewer was able to contribute some valuable suggestions, so I have put it out on thinklab for everyone to compare it to the review and to contribute meaningful and helpful criticism. Unqualified dismissal of the type shown above only delays science unnecessarily and may derail the careers of the students who had hoped to work on this project.

If we all had less material to review, perhaps Reviewer #1 above would also have taken the time to read the literature as well as the proposal before writing their review. But perhaps I have it all wrong and Reviewer #1 was right to dismiss the proposal as they did? If so, you are now in a position to let me know, as both the proposal and the review are open and comments are invited. Perhaps making all peer-review this open can help reduce the incidence of such reviews, even if the amount of reviewing cannot be significantly reduced?

Posted on December 1, 2015 at 18:13 56 Comments

Data Diving for Genomics Treasure

In: own data • Tags: Drosophila, evolution, open data, transposons

This is a post written jointly by Nelson Lau from Brandeis and me, Björn Brembs. In contrast to Nelson’s guest post, which focused on the open data aspect of our collaboration, this one describes the science behind our paper and a second one by Nelson, which just appeared in PLoS Genetics.

Laboratories around the world are generating a tsunami of deep-sequencing data from nearly every organism, past and present. These sequencing data range from entire genomes to segments of chromatin to RNA transcripts. To explore this ocean of “BIG DATA”, one has to navigate through the portals of the National Center for Biotechnology Information’s (NCBI’s) two signature repositories, the Sequence Read Archive (SRA) and the Gene Expression Omnibus (GEO). With the right bioinformatics tools, scientists can explore and discover freely available data that can lead to valuable new biological insights.

Nelson Lau’s lab in the Department of Biology at Brandeis has recently completed two such successful voyages into the realm of genomics data mining, with studies published in the Open Access journals Nucleic Acids Research (NAR) and PLoS Genetics (PLoSGen). Publication of both studies was supported by the Brandeis University LTS Open Access Fund for Scholarly Communications.

In this scientific journey, we made use of important collaborations with labs from across the globe. The NAR study used openly shared genomics data from the United Kingdom (Casey Bergman’s lab) and Germany (Björn Brembs’ lab). The PLoSGen study relied on contributions from Austria (Daniel Gerlach), Australia (Benjamin Kile’s lab), Nebraska (Mayumi Naramura’s lab), and next-door neighbors (Bonnie Berger’s lab at MIT).

In the NAR study, Lau lab postdoctoral fellow Reazur Rahman and the Lau team developed a program called TIDAL (Transposon Insertion and Depletion AnaLyzer) that scoured over 360 fly genome sequences publicly accessible through the SRA portal. We discovered that transposons, also known as jumping genetic parasites, form different genome patterns in every fly strain. There are many thousands of transposons throughout the fly genome; the vast majority are retrotransposons and thus share an origin with retroviruses. Even though most of these transposons are located in the intergenic and heterochromatic regions of the fly genome, with on average more than two transposons per fly gene it is a straightforward assumption that some of them are bound to influence gene expression in one way or another.

We discovered that common fly strains with the same name but living in different laboratories turn out to have very different patterns of transposons. This is surprising because many geneticists have assumed that the so-called Canton-S or Oregon-R strains are all similar, and they are thus used as a common wild-type reference. In particular, we were able to differentiate two strains which had been separated from each other only very recently, indicating rapid evolution of these transposon landscapes.
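The kind of strain comparison described above can be illustrated by treating each strain's transposon insertion calls as a set of genomic coordinates and measuring overlap between strains. This is only a minimal sketch of the idea: the strain names and coordinates below are invented, and TIDAL's actual pipeline involves read mapping and coverage filtering far beyond simple set arithmetic.

```python
# Hedged sketch: compare transposon insertion profiles between fly strains
# as sets of (chromosome, position) calls. All data here are invented for
# illustration; real TIDAL output is far richer.

def jaccard(a, b):
    """Jaccard similarity between two sets of insertion calls."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy insertion calls for two hypothetical "Canton-S" sub-strains
# and one "Oregon-R" strain.
profiles = {
    "CantonS_labA": {("2L", 103550), ("2R", 884210), ("3R", 402133), ("X", 55901)},
    "CantonS_labB": {("2L", 103550), ("2R", 884210), ("3L", 771802), ("X", 91230)},
    "OregonR":      {("2L", 998213), ("3R", 402133), ("X", 12007)},
}

names = sorted(profiles)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(f"{n1} vs {n2}: {jaccard(profiles[n1], profiles[n2]):.2f}")
```

Even in this toy version, the two same-named sub-strains share only part of their insertion landscape, which is the signature that distinguishes recently separated stocks.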

Our results lend some mechanistic insight to behavioral data from the Brembs lab, which had shown that these sub-strains of the standard Canton-S reference stock can behave very differently in some experiments. We hypothesize that these differences in transposon landscapes and the behavioral differences may reflect unanticipated changes in fly stocks, which are typically assumed to remain stable under laboratory culture conditions. If even recently separated fly stocks can be differentiated both on the genetic and on the behavioral level, perhaps this is an indication that we are beginning to discover mechanisms rendering animals much more dynamic and malleable than we usually give them credit for. Such insights should not only convince geneticists to think twice and be extra careful with their common reference stocks; they may also provide food for thought for evolutionary biologists. In addition, we hope to utilize the TIDAL tool to study how expanding transposon patterns might alter genomes in aging fly brains, which may then help explain human brain changes during aging.

Screenshot of the TIDAL-Fly website:


Given the number of potentially harmful mobile genetic elements in a genome, it is not surprising that counter-measures have evolved to limit the detrimental effect of these transposons. So-called Piwi-interacting RNAs (piRNAs) are a class of highly conserved, small, noncoding RNAs associated with repressing transposon gene expression, in particular in the germline. In the PLoSGen study, visiting scientist Gung-wei Chirn and the Lau lab developed a program that discovered expression patterns of piRNA genes in a group of mammalian datasets extracted from the GEO portal. Coupling these datasets with other small RNA datasets created in the Lau lab, the team discovered a remarkable diversity of these RNA loci in each species, suggesting a high rate of diversification of piRNA expression over time. The rate of diversification in piRNA expression patterns appeared to be much faster than that of changes in testis-specific gene expression patterns amongst different animals.

It has been known for a while that there is an ongoing evolutionary arms race between transposon intruders and the anti-transposon police, the piRNAs. In mammals, however, the piRNAs appear to diversify according to two different strategies. Most of the piRNA genomic loci discovered in humans were quite distinct from those in other primates like the macaque monkey or the marmoset and seemed to evolve just as quickly as, e.g. Drosophila piRNA genes. On the other hand, a separate, smaller set of these genomic loci have conserved their piRNA expression patterns, extending across humans, through primates, to rodents, and even to dogs, horses and pigs.
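The two diversification strategies described above amount to splitting piRNA cluster loci by how broadly their expression is shared across species. The sketch below shows that classification logic in minimal form; the locus identifiers, species assignments and the threshold of four species are all invented for illustration and do not reproduce the actual criteria of the PLoSGen study.

```python
# Hedged sketch: classify piRNA cluster loci as "conserved" vs.
# "lineage-specific" by counting the species in which each locus is
# expressed. All locus IDs, species sets and the threshold are invented.

def classify_loci(expression, min_species=4):
    """Return (conserved, lineage_specific) sets of locus IDs.

    expression maps locus -> set of species expressing it; a locus seen
    in at least min_species species counts as conserved here.
    """
    conserved = {loc for loc, sp in expression.items() if len(sp) >= min_species}
    return conserved, set(expression) - conserved

expression = {
    "pi-cluster-1": {"human", "macaque", "mouse", "dog", "pig"},
    "pi-cluster-2": {"human"},
    "pi-cluster-3": {"human", "macaque"},
    "pi-cluster-4": {"human", "macaque", "mouse", "dog", "horse", "pig"},
}

conserved, specific = classify_loci(expression)
print("conserved:", sorted(conserved))         # pi-cluster-1, pi-cluster-4
print("lineage-specific:", sorted(specific))   # pi-cluster-2, pi-cluster-3
```

In this toy view, the large human-specific fraction corresponds to the rapidly evolving loci, while the small cross-species fraction corresponds to the deeply conserved set discussed next.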

These conserved piRNA expression patterns span nearly 100 million years of evolution, suggesting an important function either in regulating a transposon that is common among most if not all eutherian mammals, or in regulating the expression of another, conserved gene.

To find the answer, the Lau lab studied the target sequences of different conserved piRNAs. One of these targets was indeed conserved in eutherian mammals, albeit it was not a transposon but an endogenous gene. In fact, most of the conserved piRNA genes were depleted of transposon-related sequences. A second approach to testing the function of conserved piRNAs was to analyze two existing mouse mutations in two piRNA loci. The results showed that the mutations indeed affected the generation of the piRNAs, and these mice were less fertile because their sperm count was reduced. Future work will explore how infertility diseases may be linked to these specific piRNA loci. It also remains to be understood how a gene family that originally evolved as transposon police could evolve into a mechanism regulating endogenous genes.

In summary, this work is an example of how open data enables and facilitates novel insights into fundamental biological processes. In this case, these insights have taught us that genomes are much more dynamic and diverse than we have previously thought, with repercussions not only for the utility any single reference genome can have for research, but also for the role of sequencing individual genomes in personalized medicine.


Rahman R, Chirn GW, Kanodia A, Sytnikova YA, Brembs B, Bergman CM, & Lau NC (2015). Unique transposon landscapes are pervasive across Drosophila melanogaster genomes. Nucleic Acids Research. PMID: 26578579, DOI: 10.1093/nar/gkv1193
Chirn G, Rahman R, Sytnikova Y, Matts J, Zeng M, Gerlach D, Yu M, Berger B, Naramura M, Kile B, & Lau N (2015). Conserved piRNA expression from a distinct set of piRNA cluster loci in eutherian mammals. PLOS Genetics, 11(11). DOI: 10.1371/journal.pgen.1005652

Posted on November 25, 2015 at 10:41 8 Comments

Guest post: Why our Open Data project worked

In: science politics • Tags: Drosophila, open data, open science

Why our Open Data project worked,
(and how Decorum can allay our fears of Open Data).

I am honored to guest post on Björn’s blog and excited about the interest in our work following Björn’s response to Dorothy Bishop’s first post. As corresponding author on our paper, I will provide more context to our successful Open Data experience with Björn’s and Casey’s labs. I will also comment on why authorship is an important component of a decorum that our scientific society needs to establish to make the Open Data idea work (an issue raised on Dorothy Bishop’s post).

It was my idea to credit Björn and Casey with authorship after Casey explained to me that they had not yet completed their own analyses of these genomes. Casey suggested we respect the decorum previously set by genome sequencing centers providing early release of their sequencing data: the genome centers reserve the courtesy of being the first to publish on the data. Trained as a genomicist, I was aware of this decorum, hence my offer to collaborate with B&C as authors. I viewed their data as being as valuable as a precious antibody or mutant organism, which, if not yet published, is a major contribution that the original creators should receive credit for providing.

Open Data Sharing worked because an honor code existed between all of us scientists. It is not because I don’t yet have tenure that I chose to respect B&C’s trust in the Open Data idea. I could have easily been an A**hole and published our analyses without crediting B&C with authorship: I had already downloaded the data, and our work was well underway. In addition, Björn offered to only be acknowledged, without authorship. However, I believe that scenario would add fodder to the fear of Open Data Sharing, and good will, such as authorship, is what we sorely need in our hyper-competitive enterprise. Finally, I believe good will encouraged B&C to provide us further crucial insight that greatly improved our paper.

Our Open Data effort is also a counter-example to the fear that Open Data will expose our mistakes. Our study examined many other Drosophila genomes sequenced by other labs besides B&C’s data. We shared our findings with other labs before writing our manuscript, enabling one lab to steer their study in a better direction. This lab thanked us personally for our keen eye, noted that our help with re-examining data repositories, an often thankless effort, turned out to be critical for them, and commended us for doing so. Thus, the “win-win” of Open Data Sharing can be truly far-reaching.

That being said, the Open Data Sharing movement needs to develop decorum, akin to the laws and cultural values that provide decorum to make Capitalism and Open Society work. For example, absence of decorum allows a good thing like Capitalism to be perverted (e.g., worker abuse, cartels, insider trading, Donald Trump).

With Open Data, the lack of decorum can lead to misunderstanding and animosity in our scientific society. Take the Twitter firestorm of the YG-MS-ENCODE controversy. A young scientist could see this as another cautionary tale against Open Data Sharing.

I am unaware whether our scientific society has a decorum yet for best practices with Open Data, and it is a worthy debate whether Twitter or private email is the best way to communicate early Open Data analyses. With a decorum on Open Data Sharing in place, I believe we can reduce the antagonism and paranoia that shrouds our current high-stakes scientific climate like greenhouse gases shroud our planet. Unchecked, we will doom ourselves and our fields of study.

In closing, our experience shows how Open Data Sharing is a tremendous concept all scientists should embrace and promote against the naysayers. I am looking forward to more Open Data inspired research coming from my lab in the future.

Nelson Lau, Ph.D.

nlau@brandeis.edu
Assistant Professor – Biology
Brandeis University
415 South St, MS029
Science Receiving
Waltham MA 02454, USA
Ph: 781-736-2445
https://www.bio.brandeis.edu/laulab

Posted on November 19, 2015 at 10:53 15 Comments

Don’t be afraid of open data

In: science politics • Tags: Drosophila, genomics, open data

This is a response to Dorothy Bishop’s post “Who’s afraid of open data?“.

After we had published a paper on how Drosophila strains that are referred to by the same name in the literature (Canton S) but came from different laboratories behaved completely differently in a particular behavioral experiment, Casey Bergman from Manchester contacted me, asking if we shouldn’t sequence the genomes of these five fly strains to find out how they differ. So I behaviorally tested each of the strains again, extracted the DNA from the 100 individuals I had just tested and sent the material to him. I also published the behavioral data immediately on our GitHub project page.

Casey then sequenced the strains and made the sequences available as well. A few weeks later, both Casey and I were contacted by Nelson Lau at Brandeis, who showed us his bioinformatics analyses of our genome data. Importantly, his analysis wasn’t even close to what we had planned. On the contrary, he had looked at something I (not being a bioinformatician) would have considered orthogonal (Casey may disagree). So there we had a large chunk of work we would never have done, on data we hadn’t even started analyzing yet. I was so thrilled! I learned so much from Nelson’s work; this was fantastic! Nelson even asked us to be co-authors, to which I quickly protested and suggested that, if anything, I might be mentioned in the acknowledgments for “technical assistance” – after all, I had only extracted the DNA.

However, after some back-and-forth, he persuaded me with the argument that he wanted to have us as co-authors to set an example. He wanted to show everyone that sharing data is something that can bring you direct rewards in publications. He wanted us to be co-authors as a reward for posting our data and as incentive for others to let go of their fears and also post their data online. I’m still not quite sure if this fits the COPE guidelines to the point, but for now I’m willing to take the risk and see what happens.

Nelson is on the tenure clock, and so the position of each of his papers in the journal hierarchy matters. The work is now online at Nucleic Acids Research and both Casey and I are co-authors. The paper was published before Casey had even gotten around to starting his own analyses of our data. This is how science ought to proceed! Now we just need ways to credit such re-use of research data in a currency that’s actually worth something and doesn’t entail making people ‘authors’ on publications where they’ve had little intellectual input. A modern infrastructure would take care of that…

Until we have such an infrastructure, I hope this story will make others share their data and code as well.

Posted on November 16, 2015 at 12:31 94 Comments

Chance in animate nature, day 3

In: science • Tags: chance, free will, nonlinearity

On our final day (day 1, day 2), I was only able to hear Boris Kotchoubey‘s (author of “why are you free?“) talk, as I had to leave early to catch my flight. He made a great effort to slowly introduce us to nonlinear dynamics and the consequences it has for the predictive power of science in general.

Applied to human movement in particular, he showed that nervous systems take advantage of the biophysics of bodily motion and add only the component of movement that biophysics (think of your leg swinging while you walk) doesn’t already take care of. This is an important and all too often forgotten insight that I recognize from the work of the laboratory of Hillel Chiel on Aplysia biting behavior. He explained work studying hammer blows, in which the trajectories of the individual arm joints did not seem to follow any common rules – the only commonality that could be found between individual hammer blows was the trajectory of the hammer’s head. This is reminiscent of the distinction between world- and self-learning in flies, where the animals can use any behavior very flexibly to accomplish an external goal, until they have performed the behavior often enough for it to become habitual, at which point this flexibility is (partially?) lost.

Halfway through the talk, he arrived at the uncontrolled manifold hypothesis, where the nervous system isn’t trying to eliminate noise, but to use it to its advantage for movement control. Not entirely unexpectedly, he went from this to chemotaxis in Escherichia coli as an example of a process which also takes advantage of chance.

He differentiated between two kinds of unpredictable systems: a) highly complex, incomputable systems and b) unique, unrepeatable systems. The differences between these two kinds break down if the uncertainty principle is an actual property of the universe that poses absolute, non-circumventable limits on the knowledge anyone can have about these systems.

Posted on November 12, 2015 at 10:22 4 Comments
Nov11

Chance in animate nature, day 2

In: science • Tags: chance, Drosophila, free will

While the first day (day 2, day 3) was dominated by philosophy, mathematics and other abstract discussions of chance, this day of our symposium started with a distinct biological focus.

Martin Heisenberg, Chance in brain and behavior

The first speaker for this second day of the symposium on the role of chance in the living world was my thesis supervisor and mentor, Martin Heisenberg. Even if he didn’t have a massive body of his own work to contribute to this topic, merely being the youngest son of Werner Heisenberg, of uncertainty principle fame, would make his presence interesting from a science-history perspective. In his talk, he presented many examples from the fruit fly Drosophila showing how the fly spontaneously chooses between different options, both in terms of behavior and in terms of visual attention. Central to his talk was the concept of outcome expectations in the organization of adaptive behavioral choice. Much of this work is published and can be easily found, so I won’t go into detail here.

Then came my talk, a somewhat adjusted version of my talk on the organization of behavior, in which I provide evidence for how even invertebrate brains generate autonomy and liberate themselves from the environment:

Friedel Reischies, Limited Indeterminism – amplification of physically stochastic events

The third speaker this morning was Friedel Reischies, a psychiatrist from Berlin. After introducing some general aspects of brain function, he discussed various aspects of the control of behavioral variability. He also talked about the concept of self and how we attribute agency to our actions, citing D. Wegner. Referring to individual psychiatric cases, he discussed different aspects of freedom and how these cases differentially impinge on them. The central theme of his talk was the variability of nervous systems and behavior, and its control.

The discussion session after these first three talks revolved quite productively around intentionality, decision-making, free will and the concept of self.

Wolfgang Lenzen: Does freedom need chance?

The next speaker for this day was a philosopher, Wolfgang Lenzen. As behooves a philosopher, he started out with an attempt to define the terms chance, possibility, necessity and contingency, as well as some of their variants. Here, as on the previous day, the principle of sufficient reason reared its head again. He then went back to Cicero and Augustine to exemplify the problem of free will with respect to determinism and causality. The determinist Hume was cited as the first compatibilist, allowing for an exception to determinism in the context of the will, and Lenzen described Schopenhauer as a determinist as well. Given the dominance of classical Newtonian mechanics, the determinism of the philosophers of that era is not surprising; the now dominant insights from relativity and quantum mechanics had a clear effect on more recent philosophers. Lenzen then cited Schlick, who predictably argued with the false dichotomy of our behavior being either determined or entirely random. Other contemporary determinist scholars cited were Roth and Prinz. In his (as I see it, compatibilist) reply, Lenzen emphasized that free will does not depend on the question of whether the world is deterministic. He also defined free will as something only adult humans have, requiring empathy and theory of mind. In his view, animals do not possess free will as they do not reflect on their actions; hence, animals cannot be held responsible. Like other scholars, he listed three criteria for an action to be ‘free’: the person willed the action, the will is the cause of the action, and the person could have acted otherwise.

Lenzen went on to disavow dualism: “there are no immaterial substances”. This implies that the soul or the mind, as a complex mental/psychological human property, is intimately and necessarily coupled to a healthy, material brain. It also implies that “mental causation” does not mean an immaterial mind interacting with a material brain: mental causation can only be thought of as an idea or thought being a neuronal activity which, in principle or in actuality, can move muscles.

Towards the end, Lenzen picked up the different variants of possibilities from his introduction to apply them to the different variants of alternative actions of an individual. At the end he recounted the story of Frankfurt‘s evil neurosurgeon as a “weird” example he didn’t find very useful.

Patrick Becker: Naturalizing the mind?

The final speaker for the day was a theologian and, in my prejudice, I expected pretty confused magical thinking. Little did I know, when he started, how right I would be. Like some previous speakers, Becker cited a lot of scholars (obviously a common method in the humanities), such as Prinz, Metzinger, or Pauen. Pauen in particular served for the introduction of the terms autonomy and agency as necessary conditions for free will. In this context, the false dichotomy of either chance or necessity being the only possible determinants of behavior reared its ugly head yet again. Becker went on to discuss Metzinger’s “Ego-Tunnel” and the concept of self as a construct of our brain, citing experiments such as the “rubber hand illusion“. It wasn’t clear to me what this example was meant to say. At the end of all this, Becker presented a table juxtaposing a whole host of terms under ‘naturalization’ on one side and ‘common thought’ on the other. The whole table looked like an arbitrary collection of false dichotomies to me, and I again didn’t understand what the point of it was. He then picked ethical behavior as an example of how naturalization would lead to an abandonment of ethics. Here, again, the talk was full of false dichotomies, such as: our ethics are not rational because some basic, likely evolved moral sentiments exist. As if it were impossible to combine the two. I am not sure how that would answer the question in his title. After ethics, he claimed that we would have to part with love and creativity as well if we naturalized the mind. None of what he talked about appeared even remotely coherent to me, nor did I understand how he came up with so many arbitrary juxtapositions of seemingly randomly collected terms and concepts. Similar to creationists, he posited that our empirically derived world-view is just a belief system – he even used the German word ‘Glaube’, which can denote both faith and belief. As if all of this weren’t bad enough, at the very end, as a sort of finale to this incoherent rambling, he explicitly juxtaposed (again!) the natural sciences and religion as equivalent yet complementary descriptions of the world.

Posted on November 11, 2015 at 17:35 6 Comments
Nov10

Chance in animate nature, day 1

In: science • Tags: causality, chance, interdisciplinary, symposium

Ulrich Herkenrath, a mathematician working on stochasticity, convened a tiny symposium of only about a dozen participants discussing the role of chance in living beings. Participants included mathematicians, philosophers and neurobiologists.

Herkenrath: “Man as a source of randomness”

Herkenrath kicked off the symposium with his own presentation on “Man as a source of randomness”. He explained some principal insights on stochasticity and determinism, as well as some boundary conditions for empirical studies of stochastic events, emphasizing that deterministic chaos and stochasticity can be extremely difficult to distinguish empirically.
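That difficulty can be made concrete with a toy example (mine, not from the talk): the logistic map at r = 4 is fully deterministic, yet its output scatters over the unit interval like noise; only knowledge of the generating rule reveals that each value follows exactly from its predecessor:

```python
def logistic_series(x0=0.3, r=4.0, n=1000):
    """Iterate the fully deterministic logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

series = logistic_series()
# To the eye, the values scatter over (0, 1) like random noise,
# yet each one is exactly computable from the one before it:
errors = [abs(series[i + 1] - 4.0 * series[i] * (1 - series[i]))
          for i in range(len(series) - 1)]
```

A statistical test on the sequence alone would struggle to distinguish this from genuine stochasticity; the determinism only shows once you know the rule.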

In a short excursion, he referred to Nikolaus Cusanus, who found that no two subsequent situations can ever be exactly identical, our knowledge thus being essentially conjecture. Apparently, Cusanus was already proposing the falsification of hypotheses as a means of approaching ‘truth’. Not surprisingly, Herkenrath immediately referred to Popper with regard to the modern scientific method. Equally expectedly, when he started talking about kinds and sources of chance, he talked about quantum mechanics.

Moving from inanimate to living nature, he proposed amplifications of quantum chance to the macroscopic level as sources of objective randomness in the mesocosm, always emphasizing the difficulty of distinguishing between such events and events that only seem random due to our limited knowledge. Contrasting the two hypotheses of a deterministic world and one in which objective randomness exists, he mentioned the illusory nature of our subjective impression of freedom of choice. He never got into the problem that quantum randomness, if merely amplified, leaves much to be desired in terms of decision-making. Essentially, he seemed to be arguing that a deterministic world would be a sad place in which he wouldn’t want to live, and so he rejects a deterministic world. I’ve never found this all too common argument very convincing.

Notably, Herkenrath mentioned that organisms are more than matter; I am not sure what to make of this. He defined autonomy as the ability to make decisions that are not determined by the environment and went on to describe classes of decisions, such as subconscious and conscious ones; how brains make these different forms of decisions would be featured in later talks at the symposium. He defined a third class of decisions as those that come about by explicit (subconscious or conscious) randomization, and proposed a fourth class, in which a uniform distribution is consciously generated, e.g. a human using a lottery.

Falkenburg: “Causality, Chance and Life”

The second speaker of the first day was Brigitte Falkenburg, author of “Mythos Determinismus” (book critique). She started out wondering how neural determinists understand evolution.

In Falkenburg’s tour de force through the idea history of chance and necessity, we first learned that the concept of chance itself can be traced back to Leibniz, who described events that may have happened otherwise. Leibniz claimed in his metaphysics that objective chance does not exist, as the whole world is rational and determined. According to Leibniz, everything has a sufficient reason. In a very scholarly segue mentioning the dispute between Leibniz and Newton about who invented calculus, she moved to the relationship between the laws of nature and chance. Kant extended Newtons mechanistic laws from the solar system to the entire universe (Kant-Laplace hypothesis). In his “critique of pure reason” Kant later concluded that Leibniz’s ‘sufficient reasons’ are better described as ’causes’ and formulated the principle of causality as an ‘a priori’ of human thinking. This was the start of the demand for causal explanations in the empirical sciences: science never stops asking for causes. However, Kant’s critique did not fully pervade the subsequent thinking, leading instead to Laplace‘s determinism. Laplace was convinced that our insufficient knowledge is the only reason for apparent (subjective) randomness, and a more knowledgeable intelligence would be able to tell the future (cf. Laplace’s demon).

With this backdrop of the history of ideas about causality, Falkenburg went on to discuss modern concepts of causality that move away from equating it with determinism. Both Hume and Kant defined causality as a mode of thinking, i.e. psychologically, rather than as a property of the universe; according to them, a causal relationship between events is subjective rather than objective. Mill‘s and Russell‘s positivism later did away with causality as “a relic of a bygone era” (Russell). One argument is that a cause can be seen as just a natural law plus the initial state of a system. Deterministic laws are invariant under a reversal of time – as such, causes can also lie in the future.

Today’s philosophical variants of causality concepts reflect this comparatively weak view of causality, which are very different from the way we scientists would intuitively understand it. In a short discussion of the concept of causality in physics, she quickly went through classical mechanics, thermodynamics and quantum mechanics and special relativity, emphasizing that we still do not have a theory unifying these different approaches (she called it ‘patchwork physics’).

Towards the end, Falkenburg discussed the connection between causality and time, emphasizing that the arrow of time cannot have a deterministic basis, since all deterministic laws are time-reversible. As such, extreme determinism comes at a high metaphysical price: time becomes an illusion. According to Falkenburg, causality is hence not the same as determinism: a causal process is not necessarily deterministic; it can be composed of determinate and indeterminate components. Thus, if you do not think that time is an illusion and that all possible outcomes coexist, causality does not imply determinism, and chance can be a cause, as in, e.g., evolution.

At the very end she mentioned Greenfield and the limits of the natural sciences in reducing consciousness to materialism. I’m starting to get the impression that rejecting determinism all too often goes hand in hand with woo peddling. Why is that?

Posted on November 10, 2015 at 18:07 4 Comments
Oct23

Predatory Priorities

In: science politics • Tags: journals, open access, predatory publishing

Over the last few months, there has been a lot of talk about so-called “predatory publishers”, i.e. corporations which publish journals, some or all of which purport to peer-review submitted articles but in fact publish them for a fee without actual peer review. The origin of the discussion can be traced to a list of such publishers hosted by librarian Jeffrey Beall. Irrespective of the already questionable practice of putting entire corporations on a black list (one bad journal and you’re out?), I have three main positions in this discussion:

1. Beall’s list used to be a useful tool tracking a problem that nobody really had on their radar. Unfortunately, Jeffrey Beall himself recently opted to disqualify himself from reasoned debate, making the content of the list look more like a political hit list than a serious scholarly analysis. It appears that this approach may still be rescued if it were pursued by an organization more reliable than Beall.

2. There are many problems with publishers that eventually need to be solved. With respect to the pertinent topic, at least two main problem areas spring to mind.

2a. There is a group of publishers which publish the least reliable science. These publishers claim to perform a superior form of peer review (e.g. by denigrating other forms of peer-review as “peer-review light“), but in fact most of the submitted articles are never seen by peers (but instead by the professional editors of these journals). For the minority of articles that are indeed peer-reviewed, the acceptance rate is about 40%. Sometimes this practice keeps other scientists unnecessarily busy, such as in replicability projects or #arseniclife. Sometimes this practice has deleterious effects on society, such as the recent LaCour or Stapel cases. Sometimes this practice leads indirectly to human death, such as in irreproducible cancer research. Sometimes this practice leads directly to human death, such as in the MMR/autism case.
These publishers charge the taxpayer on average US$5000 per article and try to use paywalls to prevent the taxpayer from checking the article for potential errors.

2b. There is a group of publishers which similarly claim to perform peer-review but in fact do not perform any peer-review at all. It seems as if they aren’t even performing much editorial review. The acceptance rate in these journals is commonly a little more than twice as high as in the journals from 2a, i.e. ~100%. Other than the (likely very few) duped authors, to my knowledge there are no other harmed parties, but I may have missed them.
These publishers charge the taxpayer on average ~US$300 per article and do allow the taxpayer to check the articles for potential errors.

3. Clearly, both 2a and 2b need to be resolved; there can be no debate about that. Given the number and magnitude of issues with regard to infrastructure reform in general and publishing reform in particular, it is prudent to prioritize the problems. Given the larger harm the publishers in 2a inflict on society at large as well as on the scientific community, I would suggest prioritizing 2a over 2b. In fact, looking back over what little we have accomplished over the past 10 years of infrastructure reform, it doesn’t appear we have many resources left to waste on 2b at this particular time. Moreover, if focusing on 2a were to lead to the demise of the journal container, as so many of us hope, 2b would be solved without any further effort.

Posted on October 23, 2015 at 16:21 37 Comments
Sep17

So many symptoms, only one disease: a public good in private hands

In: science politics • Tags: collective action, journal rank, publishing

Science has infected itself (voluntarily!) with a life-threatening parasite. It has given away its crown jewels, the scientific knowledge contained in the scholarly archives, to entities with orthogonal interests: corporate publishers whose fiduciary duty is not knowledge dissemination or scholarly communication, but profit maximization. After a 350-year incubation time, the parasite has taken over the communication centers and drained them of their energy, leading to a number of different symptoms – symptoms for which scientists and activists have come up with sometimes quite bizarre treatments:

  • In the recent #WikiGate, it is questioned if the open encyclopedia Wikipedia should link to (“advertise”) paywalled scientific knowledge at academic publishers such as Elsevier. One argument goes that if Wikipedia articles lack paywalled content and explicitly mention this, pressure on publishers to open the scholarly archives would increase. To solve this issue, open access advocates are now asking Wikipedia editors, who recently received free access to Elsevier’s archives, to assist academic publishers in keeping the paywalled content locked away from the public by not including it in Wikipedia.
  • The Hague Declaration on ContentMining asks for “legal clarity” with regards to science being done on scientific content: access and re-use of scholarly material via software-based research methods is restricted and heavily regulated by academic publishers, leveraging their extensive copyrights over the archives. The Liber open access initiative is now lobbying EU politicians for a “research exception” in international copyright laws to allow unrestricted ContentMining.
  • In recent decades, the number of researchers has been growing such that competition for publications in the few top-ranked journals has reached epic proportions. As a consequence, the amount of work (measured by figure panels or by numbers of authors per article) going into each individual paper has skyrocketed. This entails that the pace of dissemination for each project has been slowing down, not for any technical or scientific reason, but merely because of the career decisions of scientists. To counteract this trend, it has been suggested to follow the example of physicists and double scientists’ work-load: once publishing their results quickly in a readily accessible repository for scholarly communication, and once more, later, locking the research behind a paywall in a little-read scholarly top-journal for career advancement.
  • These coveted top-rank journals also publish the least reliable science. However, it’s precisely the rare slots in these journals which eventually help the scientist secure a position as a PI (that’s the whole idea behind all the extra work in the previous example). This entails that for the last few decades, science has preferentially employed the scientists that produce the least reliable science. Perhaps not too surprisingly, we are now faced with a reproducibility crisis in science, with a concomitant exponential rise in retractions. Perhaps equally unsurprisingly, scientists reflexively sprung into action by starting research projects to first understand the size and scope of this symptom, before treating it. So now there exist several reproducibility initiatives in various fields in which scientists dedicate time, effort and research funds to find out if immediate action is necessary, or if corporate publishers can drain the public teat a little longer.
  • Already long before the magnitude of the disease and the number and spread of its symptoms had become public knowledge, scientists had come up with two treatments for the symptom of lacking access to scientific knowledge: green and gold open access. Similar to the treatment of slowed-down scientific reporting, green open access entails increasing researchers’ overhead by adding scholarly communication as a task on top of career advancement. As it is quite obvious which of the two tasks a scientist will have to choose when time is limited, green proponents are asking politicians and funders to mandate deposition in green repositories. The other option, the golden road to open access, has now been hijacked by publishers as a way to cut paywall costs from their budget while maintaining per-article revenue at similar levels, with the potential to double their already obscene profit margins of around 40%. This model of open access thus constitutes one of the few ways set to make everything worse than it already is. Coincidentally and much to everybody’s chagrin, these two parallel attempts have had the peculiar unintended consequence of splintering the reform movement into seemingly endless infighting. Consequently, the last decade has seen a pace of reform that makes plate tectonics look hurried.

I’ll leave it at these five randomly chosen examples, there are probably many more. While I understand and share the good intentions of all involved and applaud and support their effort, dedication, patience and passion, I can’t help but feel utterly depressed and frustrated by how little we have accomplished. Not counting the endless stream of meetings, presentations and workshops where always the same questions and ideas are being rehashed ad nauseam, our solutions essentially encompass three components:

  1. asking politicians, funders and lately even Wikipedia editors to help us clean up the mess we ourselves have caused to begin with
  2. wasting time with unnecessary extra paperwork
  3. wasting time and money with unnecessary extra research

What is it that keeps us from being ‘radical’ in the best sense of the word? The Latin word ‘radix‘ means ‘root’: we have to tackle the common root of all these problems, and that is the fact that knowledge is a public good that belongs to the public, not to for-profit corporations. The archiving and making accessible of this knowledge has become so cheap that publishers are now not merely unnecessary; on top of the pernicious symptoms described above, they also inflate the costs from what currently would amount to approx. US$200m world-wide per year to a whopping US$10b in annual subscription fees.

I’m not the only one, not even the first to propose taking back the public good from the corporations, as well as the US$10b we spend annually to keep it locked away from the public. If we did that, we would only have to spend a tiny fraction (about 2%) of the annual costs we just saved to give the public good back to the public. The remaining US$9.8b are a formidable annual budget to ensure we hire the scientists with the most reliable results.
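For the record, the back-of-the-envelope arithmetic behind these figures, using the round numbers from this post:

```python
subscriptions = 10_000_000_000  # current annual world-wide subscription spend, US$
modern_infra = 200_000_000      # estimated annual cost of a modern infrastructure, US$

fraction = modern_infra / subscriptions   # share of current spending needed
remaining = subscriptions - modern_infra  # funds freed up every year

print(f"{fraction:.0%} of current spending")  # prints "2% of current spending"
print(f"US${remaining / 1e9:.1f}b freed up")  # prints "US$9.8b freed up"
```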

This plan entails two initial actions: one is to cut subscriptions to regain access to the funds required to implement a modern scholarly infrastructure. The other is to use existing mechanisms (e.g. LOCKSS) to ensure the back-archives remain accessible to us indefinitely. As many have realized, this is a collective action problem. If properly organized, this will bring the back-archives back under our control and provide us with sufficient leverage and funds to negotiate the terms at which they can be made publicly accessible. Subsequently, using the remaining subscription funds, the scholarly infrastructure will take care of all our scholarly communication needs: we have all the technology, it just needs to be implemented. After a short transition period, at least in the sciences, publications in top-ranked journals (to which then only individuals would subscribe, if anyone) will be about as irrelevant for promotion and funding as monographs are today.

This plan, if enacted, would save a lot of money, lives, time and effort, and cure publicly funded science of a disease that threatens its very existence. I fear continued treatment of the symptoms will lead to the death of the patient. But which steps are required to make this treatment a reality? How can we orchestrate a significant nucleus of institutions to instantiate massive subscription cuts? How can we solve the collective action problem? These are the questions to which I do not have any good answers.

Posted on September 17, 2015 at 16:41 57 Comments
Jul20

Evidence-resistant science leaders?

In: science politics • Tags: data, evidence, policy, politicians

Last week, I spent two days at a symposium entitled “Governance, Performance & Leadership of Research and Public Organizations“. The meeting gathered professionals from all walks of science and research: economists, psychologists, biologists, epidemiologists, engineers, jurists as well as politicians, university presidents and other leaders of the most respected research organizations in Germany. It was organized by Isabell Welpe, an economist specializing in incentive systems, broadly speaking. She managed to bring some major figures to this meeting, not only from Germany, but notably also John Ioannidis from the USA and Margit Osterloh from Switzerland. The German participants included former DFG president and now Leibniz president Matthias Kleiner (the DFG being the largest funder in Germany and the Leibniz Association consisting of 89 non-university federal research institutes), the president of the German Council for Science and the Humanities, Manfred Prenzel, the Secretary General of the Max-Planck Society, Ludwig Kronthaler, and the president of Munich’s Technical University, Wolfgang Herrmann, to mention only some. Essentially, all major research organizations in Germany were represented by at least one of their leaders, supplemented with expertise from abroad.

All of these people shape the way science will be done in the future either at their universities and institutions, or in Germany or around the world. They are decision-makers with the power to control the work and job situation for tens of thousands of current and future scientists. Hence, they ought to be the most problem-solving oriented, evidence-based individuals we can find. I was shocked to learn that this was an embarrassingly naive assumption.

In my defense, I was not alone in my incredulity, but maybe that only goes to show how insulated scientists are from political realities. As usual, there were gradations between individuals, but at the same time there seemed to be a discernible grouping into what could be termed the evidence-based camp (scientists and other professionals) and the ideology-based camp (the institutional leaders). With one exception, I won’t attribute any of the instances I recount to any particular individual, as we had better focus on solutions to the general obstructive attitude rather than on a debate about individuals’ qualifications.

On the scientific side, the meeting brought together a number of thought leaders detailing how different components of the scientific community perform. For instance, we learned that peer-review is quite capable of weeding out obviously weak research proposals, but in establishing a ranking order among the non-flawed proposals, it is rarely better than chance. We learned that gender and institution biases are rampant in reviewers and that many rankings are devoid of any empirical basis. Essentially, neither peer-review nor metrics perform at the level we expect from them. It became clear that we need to find solutions to the lock-in effect, the Matthew effect and the performance paradox, and to some extent what potential solutions might look like. Reassuringly, different people from different fields using data from different disciplines arrived at quite similar conclusions. The emerging picture was clear: we have quite a good empirical grasp of which approaches are and in particular which are not working. Importantly, as a community we have plenty of reasonable and realistic ideas of how to remedy the non-working components. However, whenever a particular piece of evidence was presented, one of the science leaders got up and proclaimed “In my experience, this does not happen” or “I cannot see this bias”, or “I have overseen a good 600 grant reviews in my career and these reviews worked just fine”. Looking back, an all too common pattern of this meeting for me was one of scientists presenting data and evidence, only to be countered by a prominent ex-scientist with an evidence-free “I disagree”. It appeared quite obvious that we do not suffer from a lack of insight, but rather from a lack of implementation.

Perhaps the most egregious and hence illustrative example was the behavior of the longest serving university president in Germany, Wolfgang Herrmann, during the final panel discussion (see #gplr on Twitter for pictures and live comments). This will be the one exception to the rule of not mentioning individuals. Herrmann was the first to talk and literally his first sentence was to emphasize that the most important objective for a university must be to get rid of the mediocre, incompetent and ignorant staff. He obviously did not include himself in that group, but made clear that he knew how to tell who should be classified as such. When asked which advice he would give university presidents, he replied by saying that they ought to rule autocratically, ideally by using ‘participation’ as a means of appeasing the underlings (he mentioned students and faculty), as most faculty were unfit for democracy anyway. Throughout the panel, Herrmann continually commended the German Excellence Initiative, in particular for a ‘raised international visibility’ (whatever that means), or ‘breaking up old structures’ (no idea). When I confronted him with the cold hard data that the only aspects of universities that showed any advantage from the initiative were their administrations and then asked why that didn’t show that the initiative had, in fact, failed spectacularly, his reply was: “I don’t think I need to answer that question”. In essence, this reply in particular and the repeated evidence-resistant attitude in general dismissed the entire symposium as a futile exercise of the ‘reality-based community‘, while the big leaders were out there creating the reality for the underlings to evaluate, study and measure.

Such behaviors are not surprising when we hear them from politicians, but from (ex-)scientists? At the first incident or two, I still thought I had misheard or misunderstood – after all, there was little discernible reaction from the audience. Later I found out that I was not the only one shocked. After the conference, some attendees discussed several questions: Can years of leading a scientific institution really make you so completely impervious to evidence? Do such positions of power necessarily wipe out all scientific thinking, or wasn’t all that much of it there to begin with? Do we select for evidence-resistant science leaders, or is being or becoming evidence-resistant in some way a prerequisite for striving for such a position? What if these ex-scientists have always had this nonchalant attitude towards data? Should we scrutinize their old work more closely for questionable research practices?

While for me personally such behavior would clearly and unambiguously disqualify the individual from any leadership position, relieving these individuals of their responsibilities is probably not the best solution. Judging from last week's meeting, there are simply too many of them. Instead, an informal discussion after the end of the symposium suggested a more promising approach: a different meeting format, one where the leaders aren't propped up for target practice but included cooperatively, so that admitting that some things need improvement does not lead to any loss of face. Clearly, the evidence and the data need to inform policy. If decision-makers ignore the outcomes of empirical research on the way we do science, we might as well drop all efforts to collect the evidence.

Apparently, this was the first such conference on a national level in Germany. If we can't find a way for the data presented there to have a tangible effect on science policy, it may well have been the last. Is this a phenomenon people observe in other countries as well, and if so, how are they trying to solve it?

Posted on July 20, 2015 at 21:50 17 Comments