bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Even the most thorough peer-review at the ‘best’ journals not up to snuff?

In: science politics • Tags: GlamMagz, peer-review

Talk about egg on face! Nature, “the world’s best science” magazine, sets out to publish back-to-back papers on – of all topics – stem cell science. The same field that brought Science Magazine Woo-Suk Hwang and Elsevier’s Cell Mitalipov’s ‘errors’. So Nature was warned and, presumably, they got down to business and did the very best they could to prevent Cell‘s and Science‘s mishaps from happening to them. They scrutinized the manuscripts for nine months, probably requiring some extra experimentation, and after this procedure went on to publish the two papers, which claimed that an unlikely and trivially easy treatment could generate stem cells. And then, perhaps not too surprisingly for people reading this obscure blog, of course the unthinkable happened: barely a week after publication, the first issues were spotted – spliced lanes in gels, duplicated images. Later, as failures to replicate accumulated, calls for retraction were issued – this time even by one of the papers’ authors, who had previously claimed he had reproduced the technique.

What was it that ‘the internet’ spotted within a week, but that “the world’s best science” magazine could not detect in nine months? The wisdom of the crowd. There is no evidence to justify the standing of “the world’s leading journals”, and the rising tide of post-publication review embarrassing legacy review only corroborates this insight: GlamMagz are undeserving of their status.

Posted on March 12, 2014 at 15:58 29 Comments

Interested in testing an RSS reader for science?

In: news • Tags: alert system, feed reader, HeadsUp, RSS

I can now announce the first closed beta testing phase of an RSS reader intended for scientists. So far, we have something like a Feedly clone with a few extras built in, such as collecting the most tweeted articles of the last 24h and some rudimentary ability to sort/filter either feeds or groups of feeds. It’s not a whole lot yet, so keep your expectations low 🙂 We’re just getting started.

One of the goals of the project is to make this feed reader modular, such that each user can write their own sort/filter/discover algorithm to be plugged into the reader anywhere.
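To make the idea of such pluggable modules concrete, here is a minimal sketch of what a user-written sort/filter plugin could look like. This is purely illustrative – none of the class or function names are taken from the actual project:

```python
# Hypothetical sketch of a pluggable filter/sort module for the reader.
# All names here are illustrative assumptions, not the project's real API.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Article:
    title: str
    url: str
    tweet_count: int = 0


class FeedPlugin:
    """Base class a user would subclass to plug their own logic into the reader."""

    def process(self, articles: Iterable[Article]) -> List[Article]:
        raise NotImplementedError


class MostTweetedLast24h(FeedPlugin):
    """Example plugin: rank articles by how often they were tweeted."""

    def process(self, articles: Iterable[Article]) -> List[Article]:
        return sorted(articles, key=lambda a: a.tweet_count, reverse=True)


def run_pipeline(articles: Iterable[Article], plugins: List[FeedPlugin]) -> List[Article]:
    """Apply each user-selected plugin in turn to a feed (or group of feeds)."""
    result = list(articles)
    for plugin in plugins:
        result = plugin.process(result)
    return result
```

The point of such a design is simply that the reader itself only needs to know the plugin interface; everything else – sorting, filtering, discovery – can be supplied by the users themselves.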

Another goal is to use social technology to allow for following the reading habits of scientists working in relevant fields, à la “readers who have read this article, have also read this one”.

The functionalities we’re thinking of go beyond simple keyword filtering/sorting, however. A long-term goal is to have the reader learn from what we click on, save or recommend and suggest relevant literature from that. For instance, one could think of a selection of feeds from topically highly relevant journals (sel1) and another selection of feeds from possibly relevant journals (sel2). The reader would learn from what you click on, save and recommend in sel1, to pick likely relevant content out of sel2 for you.
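As a toy illustration of that sel1/sel2 idea (not the actual implementation – a real version would use something more robust, such as TF-IDF, a trained classifier or collaborative filtering), the principle could look like this:

```python
# Toy sketch: learn term weights from articles the user interacted with in the
# "core" feeds (sel1) and use them to rank articles from the "peripheral" feeds (sel2).
import re
from collections import Counter
from typing import Dict, List


def tokenize(text: str) -> List[str]:
    return re.findall(r"[a-z]+", text.lower())


def learn_weights(clicked_titles: List[str]) -> Dict[str, int]:
    """Count how often each word appears in titles the user clicked/saved in sel1."""
    weights: Counter = Counter()
    for title in clicked_titles:
        weights.update(tokenize(title))
    return weights


def rank_candidates(candidate_titles: List[str], weights: Dict[str, int]) -> List[str]:
    """Order sel2 articles by how many 'interesting' words their titles contain."""
    def score(title: str) -> int:
        return sum(weights.get(word, 0) for word in tokenize(title))
    return sorted(candidate_titles, key=score, reverse=True)


clicked = ["Operant self-learning in Drosophila", "FoxP and motor learning"]
candidates = ["A new FoxP isoform", "Subscription costs at university libraries"]
print(rank_candidates(candidates, learn_weights(clicked)))
```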

Again, at the core of the reader is an open architecture that allows the reader to grow with its user base. Scientists are a highly trained and analytical bunch with, collectively, more than enough expertise to come up with a modern information system that picks the most relevant articles from the roughly 2 million papers published every year. The exponential growth and spectacular success of R are testament to this potential.

So, if you’re interested in contributing to this project by joining a group of about 25 closed beta testers, please email me at bjoern@brembs.net and I will send you instructions on how to join the test. Obviously, if you’d like to contribute by coding, by all means do also let me know!

Posted on March 7, 2014 at 14:17 46 Comments

What is the difference between text, data and code?

In: science politics • Tags: code, data, open science, publishing, software

tl;dr: So far, I can’t see any difference in principle between our three kinds of intellectual output: software, data and texts.

 

I admit I’m somewhat surprised that there appears to be a need to write this post in 2014. After all, this is not really the dawn of the digital age any more. Be that as it may, it is now March 6, 2014, six days since PLoS’s ‘revolutionary’ data sharing policy was revealed, and only a few people seem to notice the irony of avid social media participants pretending it’s still 1982. For the uninitiated, just skim Twitter’s #PLoSfail, read Edmund Hart’s post or see Terry McGlynn’s post for some examples. I’ll try to refrain from reiterating any arguments made there already.

Colloquially speaking, one could describe the scientific method somewhat shoddily as making an observation, making sense of the observation and presenting the outcome to interested audiences in some version of language. Since the development of the scientific method somewhere between the 16th century and now, this is roughly how science has progressed. Before the digital age, it was relatively difficult to let everybody who was interested participate in the observations. Today, this is much easier. It still varies tremendously between fields, obviously, but compared to before, it’s a world of difference. Today, you could say that scientists collect data, evaluate the data and then present the result in a scientific paper.

Data collection can either be done by hand or more or less automatically. It may take years to sample wildlife in the rainforest and minutes to evaluate the data on a spreadsheet. It may take decades to develop a method and then seconds to collect the data. It may take a few hours to generate the data first by hand and then by automated processing, but decades before someone else comes up with the right way to analyze and evaluate the data. What all scientific processes today have in common is that at some point the data becomes digitized, either by commercial software or by code written precisely for that purpose. Perhaps not in all, but surely in the vast majority of quantitative sciences, the data is then evaluated using either commercial or hand-coded software, be it for statistical testing, visualization, modeling or parameter/feature extraction, etc. Only after all this is done and understood does someone sit down and attempt to put the outcome of this process into words that scientists not involved in the work can make sense of.

Until about a quarter of a century ago, essentially all that was left of the scientific process above were some instruments used to make the observations and the text accounts of them. Ok, maybe some paper records and later photographs. With a delay of about 25 years, the scientific community is now slowly awakening to the realization that the digitization of science would actually allow us to preserve the scientific process much more comprehensively. Besides being a boon for historians, reviewers, committees investigating scientific misconduct or the critical public, preserving this process promises the added benefit of being able to reward not only those few whose marketing skills surpass the average enough to manage publishing their texts in glamorous magazines, but also those who excel at scientific coding or data collection. For the first time in human history, we may even have a shot at starting to think about developing software agents that can trawl data for testable hypotheses no human could ever come up with – proofs of concept already exist. There is even the potential to alert colleagues to problems with their data, to use the code for purposes the original author did not dream of, or to extract parameters from data that the original experimentalist lacked the skills to extract. In short, the advantages are too many to list and reach far beyond science itself.

Much as with the earlier, once equally novel requirement of proofs for mathematical theorems, or the requirement of statistics and sound methods, there is again resistance from the more conservative sections of the scientific community, for largely the same 10 reasons, subsumed by: “it’s too much work and it’s against my own interests”.

I can sympathize with this general objection, as making code and data available is more work and does facilitate scooping. However, the same can be said of publishing traditional texts: it is a lot of work that takes time away from experiments and opens the floodgates to others making a career on the back of your work. Thus, any consistent proponent of “it’s too much work and it’s against my own interests” ought to resist text publications with just as much fervor as they resist publishing data and code. The same arguments apply.

Such principles aside, in practice our general infrastructure of course makes it much too difficult to publish either text, data or software, which is why so many of us now spend so much time and effort on publishing reform and why our lab in particular is developing ways to improve this infrastructure. But that, as well, does not differ between software, data and text: our digital infrastructure is dysfunctional, period.

Neither does making your data and software available make you particularly more liable to scooping or exploitation than the publication of your texts does. The risks vary dramatically from field to field and from person to person and are impossible to quantify. Obviously, just as with text publications, data and code must be cited appropriately.

There may be instances where the person writing the code or collecting the data already knows what they want to do with the code/data next, but this will of course take time, and someone less gifted with ideas may be on the hunt for an easy text publication. In such (rare?) cases, I think it would be a practical solution to implement a limited provision on the published data/code stating the precise nature of the planned research and the time-frame within which it must be concluded. Because of its digital nature, any violation of such provisions would be easily detectable. The careers of our junior colleagues need to be protected, and any publishing policy on text, data or software ought to strive towards maximizing such protections without hazarding the entire scientific enterprise. Here, too, there is no difference between text, data and software.

Finally, one might make a reasonable case that the rewards are stacked disproportionately in favor of text publications, in particular with regard to publications in certain journals. However, it almost goes without saying that it is also unrealistic to expect tenure committees and grant evaluators to assess software and data contributions before anybody is even contributing and sharing data or code. Obviously, in order to be able to reward coders and experimentalists just as we reward the Glam hunters, we first need something to reward them for. That being said, in today’s antiquated system it is certainly a wise strategy to prioritize Glam publications over code and data publications – but without preventing change for the better in the process. This is obviously a chicken-and-egg situation which is not solved by the involved parties waiting for each other. Change needs to happen on both sides if any change is to happen.

To sum it up: our intellectual output today manifests itself in code, data and text. All three are complementary and contribute equally to science. All three expose our innermost thoughts and ideas to the public; all three make us vulnerable to exploitation. All three require diligence, time, work and frustration tolerance. All three constitute the fruits of our labor, often our most cherished outcome of passion and dedication. It is almost an insult to the coders and experimentalists out there that these fruits should remain locked up and decay any longer. At the very least, any opponent of code and data sharing ought, for consistency, to also oppose text publications, for exactly the same reasons. We are already 25 years late in making our CVs contain code, data and text sections under the “original research” heading. I see no reason why we should be rewarding Glam-hunting marketers any longer.

UPDATE: I was just alerted to an important and relevant distinction between text, data and code: file extension. Herewith duly noted.

Posted on March 6, 2014 at 21:07 68 Comments

How scientific are scientists, really?

In: science politics • Tags: citations, impact factor, statistics

In what other area of scholarship are replications of one and the same experiment published again and again, received with surprise every time, only to be immediately and completely ignored until the next study? Case in point, from an area that ought to be relevant to almost every single scientist on the planet: research evaluation. The first graph I know of to show the right-skewed distribution of citation data (most articles gather few citations, while a few gather very many) is from 1997:

Right-skewed citation data from P.O. Seglen, BMJ 1997;314:497.

P.O. Seglen, the author of the above paper, concludes his analysis with the insight that “the journal cannot in any way be taken as representative of the article”.

In our paper reviewing the evidence on journal rank, we count a total of six subsequent publications (and one prior, in 1992) that present the skewed nature of citation data in one way or another. In other words, it is an established and often-reproduced fact that citation data are heavily skewed. This distribution of course entails that representing it by the arithmetic mean is a mistake that would make an undergraduate student fail their course. Adding to the already long list of replications is Nature Neuroscience, home of the most novel and surprising neuroscience, with this ‘unexpected’ graph:
[Figure nn0803-783-F1: distribution of citations to individual Nature Neuroscience articles]

Only this time, the authors are not surprised, appropriately cite PO Seglen’s 1997 paper and acknowledge that of course this finding is nothing new: “reinforcing the point that a journal’s IF (an arithmetic mean) is almost useless as a predictor of the likely citations to any particular paper in that journal”. Kudos, Nature Neuroscience editors!
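For readers who prefer numbers to pictures, here is a toy illustration (simulated data, not the BMJ or Nature Neuroscience figures) of why an arithmetic mean such as the impact factor misrepresents a skewed citation distribution:

```python
# Toy illustration of a skewed citation distribution: we draw citation counts
# from a log-normal distribution, a common rough model for such data, and
# compare the arithmetic mean (what an impact-factor-like metric reports)
# with the median (closer to what a 'typical' article receives).
import numpy as np

rng = np.random.default_rng(0)
citations = rng.lognormal(mean=1.0, sigma=1.2, size=10_000).astype(int)

mean = citations.mean()
median = np.median(citations)
below_mean = (citations < mean).mean()

print(f"mean: {mean:.1f}, median: {median:.1f}")
print(f"fraction of articles below the mean: {below_mean:.0%}")
# With these parameters, well over half of all articles (roughly 70%) fall
# below the mean, i.e. the 'average' article is not average at all.
```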

What puzzles me just as much as the authors and what prompted me to write this post is their last sentence:

Journal impact factors cannot be used to quantify the importance of individual papers or the credit due to their authors, and one of the minor mysteries of our time is why so many scientifically sophisticated people give so much credence to a procedure that is so obviously flawed.

In which other area of study does it take decades and countless replications before a basic fact is generally accepted? Could it be that we scientists, perhaps, are not as scientifically sophisticated as we’d like to see ourselves? Aren’t we, perhaps, just as dogmatic, lazy, stubborn and willfully ignorant as any random person off the street? What does this persistent resistance to education say about the scientific community at large? Is this not an indictment of the gravest sort as to how the scientific community governs itself?

Posted on February 14, 2014 at 18:23 6 Comments

Two evolutionarily conserved, fundamental learning mechanisms

In: own data • Tags: behavior, classical, conditioning, evolution, neuroscience, operant, self-learning

At this year’s Winter Conference on Animal Learning and Behavior, I was invited to give the keynote presentation on the relationship between classical and operant conditioning. Using the slides below, I argued that Skinner had already identified a weakness in his paradigm as early as 1934, when he discussed this relationship in the scientific literature of the time. I went on to explain how neuroscientists using invertebrate model systems have since been able to overcome this weakness.

Drawing from evidence in the marine snail Aplysia and the fruit fly Drosophila, I detailed some of the physiological and biochemical mechanisms underlying learning in a ‘pure’ implementation of operant conditioning, without the confounding variable identified by Skinner more than seventy years before. These mechanisms reveal the network, physiological and biochemical differences between forms of learning that are concerned with events in the world around the organism (world-learning), and those that are concerned with events that originate in the organism itself (self-learning).

These two forms of learning, world-learning and self-learning, constitute two fundamental, evolutionarily conserved learning mechanisms among a growing inventory of processes involved in memory formation.

Pavlovian and Skinnerian Processes are Genetically Separable from Björn Brembs

Posted on February 13, 2014 at 00:39 Comments Off on Two evolutionarily conserved, fundamental learning mechanisms

Hiding the shoulders of giants?

In: science news • Tags: behavior, brain, neuroscience, operant, variability

“Standing on the shoulders of giants” is what scientists say to acknowledge the work they are building on. It is a statement of humility and mostly accompanied by citations to the primary literature preceding the current work. In today’s competitive scientific enterprise, however, such humility appears completely misplaced. Instead, what many assume to be required is to convince everyone that you are the giant, the genius, the prodigy who is deserving of the research funds, the next position, tenure.

Facilitating this trend are journals that actively contribute to the existing institutional incentives for such hype by claiming to publish “the world’s best science” or “the very best in scientific research”, while simultaneously allowing only very few citations in their articles.

Thus, it should not come as a surprise to anybody that we find more and more articles in such journals which claim to have found something unique, novel and counterintuitive that nobody has ever thought of before and that will require us to re-write the textbooks.

Case in point is the combination of the article entitled “Temporal structure of motor variability is dynamically regulated and predicts motor learning ability” with its accompanying news-type article (written by scientists). Both articles claim that the researchers have made the game-changing discovery that something long thought to be a bug in our movement system is actually a spectacular feature. It is argued that this is such a huge surprise because nobody in their right mind would have ever thought this possible. Their discovery will revolutionize the way we think about the most basic, fundamental properties of our very existence. Or something along those lines.

Except that most people in the field probably thought it was obvious all along.

Skinner is largely credited with the analogy between operant conditioning and evolution. This analogy entails that reward and punishment act on behaviors the way selection acts on mutations in evolution: an animal behaves variably and encounters a reward after it initiates a particular action. This reward makes the action more likely to occur in the future, just as selection makes certain alleles more frequent in a population. Already in 1981, Skinner called this “Selection by Consequences“. Skinner’s analogy sparked wide interest, e.g. an entire journal issue, which later appeared in book form. Clearly, the idea that reinforcement selects from a variation of different behaviors is not a new concept at all, but more than three decades old and rather prominent. This analogy cannot have escaped anybody working on any kind of operant learning, unless they are seriously neglecting the most relevant literature.

Elementary population genetics shows that the rate of evolution is proportional to the rate of mutation. This means that the more variants a population has to offer, the higher the rate of evolution will be. This, too, is very basic and has been known for decades.
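For readers who want the one-line version of that textbook result (the standard neutral-substitution argument, not anything specific to the papers discussed here): in a diploid population of size $N$ with per-copy mutation rate $\mu$, roughly $2N\mu$ new mutations arise each generation, and each neutral one eventually fixes with probability $1/(2N)$, so the substitution rate is

$$k = 2N\mu \cdot \frac{1}{2N} = \mu,$$

directly proportional to the mutation rate. Under positive selection the constant changes (roughly $k \approx 4Ns\mu$ for beneficial mutations with advantage $s$), but the proportionality to $\mu$ remains.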

It is thus no surprise that, for instance, Allen Neuringer has been studying the role of variability in operant conditioning for decades, and that our own lab is studying the neurobiological mechanisms underlying behavioral variability. It’s a well-known and not overly complicated concept, so of course people have been studying various aspects of it for a long time. What was always assumed, but to my knowledge never explicitly tested (but see update below!), is the relation between behavioral variability and learning rate. Does the analogy hold, such that increased behavioral variability leads to increased operant learning rates, just like increased mutation rates lead to increased rates of evolutionary change?

Now, the authors of the research paper find that indeed, as assumed for so many decades, the rate of learning in operant conditioning is increased in subjects where the initial variability in the behavior is higher. This is, at least to me, a very exciting finding: finally someone puts this old assumption to the test and demonstrates that yes, Skinner got something right with his analogy. To me, this alone is worth attention and media promotion. Great work, standing on the shoulders of a giant, Skinner. This is how science should work, great job, Wu et al.! However, this was apparently not good enough for the authors of these two articles.
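To see why this outcome was to be expected, here is a toy simulation of the selectionist intuition (a deliberately crude sketch, not the reaching task used by Wu et al.): an agent adjusts a motor command toward an unknown rewarded target by reinforcing variants that happen to get rewarded, and more initial variability gets it there sooner:

```python
# Toy simulation of Skinner's selectionist analogy, not the actual experiment:
# variable actions around a motor command are 'rewarded' when they hit a target,
# and rewarded variants are reinforced. Higher variability samples the rewarded
# region sooner, so learning converges faster.
import numpy as np

def trials_to_learn(sigma, target=2.0, threshold=0.2, lr=0.5, max_trials=2000, seed=1):
    """Return the number of trials until the motor command settles near the target."""
    rng = np.random.default_rng(seed)
    command = 0.0
    for trial in range(1, max_trials + 1):
        action = command + rng.normal(0.0, sigma)   # behavioral variability
        if abs(action - target) < threshold:        # 'reward' only when the target is hit
            command += lr * (action - command)      # reinforce the successful variant
        if abs(command - target) < threshold:
            return trial
    return max_trials                               # never learned within the session

for sigma in (0.2, 0.5, 1.0, 2.0):
    print(f"variability sigma={sigma}: learned after ~{trials_to_learn(sigma)} trials")
```

Of course, in a real motor task too much variability eventually hurts performance; the toy merely illustrates why, within limits, more exploration speeds up the selection of rewarded behavior.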

Instead of citing the wealth of earlier work (or at least Skinner’s original 1981 article), the authors claim that their results were surprising to them: “Surprisingly, we found that higher levels of task-relevant motor variability predicted faster learning”. The authors of the news-type article were similarly stunned: “These results provide intriguing evidence that some of the motor variability commonly attributed to unwanted noise is in fact exploration in motor command space.”

The question is, of course, whether this is ignorance on the part of the (in total seven) authors involved, or a publication strategy perceived to be superior to the “standing on the shoulders of giants” approach (and what a giant Skinner is!). It is, of course, moot to speculate about motives or competence without asking the authors directly. Perhaps there is even a third option besides incompetence or hype that I’m not aware of.

I don’t know any of the authors personally, so I decided to ask a mutual friend, John Krakauer, one of the leading experts in this field, whose papers the authors cite, what he thought about these articles. Specifically, I asked him what he thought about the citation of his article as a reference for the surprising nature of their finding:

Until now, motor variability has been viewed as an unwanted feature of movements, a noise that the brain is able to reduce only with practice8.

In his reply, he corrects the authors:

It is true that in our paper we were focused on variability as something that needs to be reduced when best performance is required. That said, in the discussion we explicitly mention that variability can also be used for exploration. As an example of this distinction, we mention the difference in variability between when songbirds are rehearsing their song versus when they must perform perfectly for their mate.

Apparently, at least one of the cited authors finds this citation not to be in order. With regard to the original article, John wrote: “Given that we posited that there is an operant component in error-based adaptation in 2011, I’m glad to see that their results are consistent with this view.”

It appears to me that the authors may know the relevant literature and selectively cite it in order to make their research results appear more earth-shattering and novel. If that were the case, it would be up to anyone to speculate what the motivations behind this strategy were. In the best of all worlds, the authors know their way around the more recent literature of their specific subfields, but are aware neither of the historical work in their field nor of the relevant work in related fields. In that case, the two papers are prime examples of the insularity of some fields of inquiry and demonstrate how deeper and more interdisciplinary reading/training could reduce the isolation of highly specialized fields in science. That being said, the authors being unaware of such a prominent concept at the heart of their method would constitute an indictment in its own right, at least in my books. Then again, one can never be sure of having read all the relevant literature, and perhaps this can happen even to the best of us?

The news-type article doubling down on the hype reveals another aspect that has been worrying me for some time now. Given that the most important hurdle for a manuscript to be published in the highest-ranking journals is to get past the professional editor, the ensuing peer-review is likely to be biased in favor of publishing the paper. For one, as experts in the field, the reviewers can cite the resulting paper and make their own research look hotter. Moreover, if the manuscript is published, it offers them the chance to pad their resumes with such fluff news-type articles and to get (or keep) their own names out there, associated with the big journals. Obviously, any publication in such high-ranking journals benefits not only the authors themselves, but also the field at large, creating a whole new set of incentives for peer-reviewers and authors of news-type articles.

UPDATE, February 10, 2014:

Allen Neuringer just sent me one of his papers in which he showed that training rats to be highly variable enabled them to learn a very complicated operant task, whereas animals trained to be only moderately variable failed to learn the task or did so only very slowly: Psychonomic Bulletin & Review 2002, 9 (2), 250-258. In contrast, Doolan and Bizo (2013) tried to replicate these findings in humans and failed. Thus, the principle behind the experiments of Wu et al. had already been tested and established in a mammalian model, just not in humans. It’s still good to see that humans are no exception in these processes, no doubt about that, but surely there is nothing revolutionary or even surprising about this work. On the contrary, based on the work in rats, and on predictions made many decades ago, the results presented by Wu et al. are precisely what we would have expected. Needless to say, the authors cite neither Skinner nor Neuringer.

Posted on February 6, 2014 at 18:12 16 Comments

In support of subscription cancellations

In: science politics • Tags: libraries, publishers, publishing

The recent call for a GlamMag boycott by Nobel laureate Randy Schekman made a lot of headlines, but will likely have no effect whatsoever. For one, the call for a boycott isn’t even close in scale to “the cost of knowledge” boycott against Elsevier, and even that drew fewer than 15,000 measly signatures – a drop in the bucket compared with the 970,000 board members, reviewers and authors working for Elsevier largely for free. Any boycott movement that fails to reach 500,000 signatures is an abject failure. Moreover, even if he had half a million signatures on the GlamMag boycott, it would still be a drop in the bucket, as probably more than ten times as many scientists would simply see their chances of getting a GlamMag publication increase and try even harder to publish there. Furthermore, Schekman appealed only to ethical sentiments, when it’s quite apparent that such pleas fall on deaf ears if livelihoods are at stake – which they are, as GlamMag publications are perceived to put careers on an entirely different level. Schekman failed to base any of his arguments on data and evidence, of which there is plenty, and so his pleas will likely fade unheeded. And as if all this wasn’t enough to lose confidence in the effectiveness of this boycott, there is the obvious conflict of interest: Schekman, as the editor-in-chief of another “luxury journal”, pleading with his colleagues to leave the legacy “luxury journals” and publish their work where – in Schekman’s self-professed “luxury journal” eLife?

Devoid of evidence and replete with conflict of interest and at least perceived hypocrisy, as much as I’d want it to be successful, I fear this was the first and last time we heard of this boycott.

Nevertheless, this is just the latest in a series of calls for collective action over the last decade or so to show the Evilseviers of science publishing who’s boss. Moreover, new publishing venues are springing up all over the place and scientists are flocking to them with their publications. The media are picking up on, and amplifying, the momentum that publishing reform is currently garnering. It really does seem as if, after more than a decade, something is actually shifting in academic publishing.

In this string of public actions, campaigns and stunts, one thing was notably missing: a call to boycott publishers where it would really hurt them: subscriptions. The only thing close to this was the threat by the University of California system in 2010 to boycott Nature Publishing Group. That never happened, essentially because NPG caved in. Such a boycott, if actually enacted, would certainly put the spotlight on publishing reform, as it would get several stakeholders moving at once:

  • Publishers would feel the pinch not in their public image, but in their balances
  • Scientists would think twice about publishing in a journal that hardly anybody can read
  • Given the high cost of subscriptions, huge funds would be freed in the institutions to develop a digital infrastructure that would make publishers obsolete and save a pretty penny along the way
  • An effective boycott of the most expensive publishers would also drive down subscription costs in the remaining corporations as they compete for the remaining subscription funds
  • If such a boycott went into effect, it would actually constitute a significant short-term sacrifice for researchers, who would have even more trouble reading some of their literature for a certain period of time, sending an even stronger signal of resolve than some 30k signatures on a website

However, I predict that such campaigns are unlikely to find much support, as they would require libraries and their faculty to actually sit down at the same table and defend their common institutions. Why does something so seemingly easy have to be so difficult?

Posted on February 3, 2014 at 23:28 31 Comments

Citation inflation and incompetent scientists

In: science politics • Tags: citations, impact, metrics

The other day I was alerted to an interesting evaluation of international citation data. The author, Curt Rice, mentions a particular aspect of the data:

In 2000, 25% of Norwegian articles remained uncited in their first four years of life. By 2009, this had fallen to about 15%. This shows that the “bottom” isn’t pulling the average down. In fact, it’s raising it, making more room for the top to pull us even higher.

The context here is that the “bottom” refers to scientific articles that aren’t cited, the assumption being that no citations means low scientific quality. Leaving aside the very tentative and hotly debated connection between citations and ‘quality’ (whatever that actually means*) in general, let’s look at just the specific rationale that a decrease in the fraction of articles that go uncited should indicate an increase in the overall quality of research.

To make my case, I’ll take the perspective of someone who strongly believes that the rejection rate of a journal is indicative of that journal’s ‘quality’ (i.e., that a high rejection rate makes sure that only the world’s best science gets published). From that perspective, a decreasing fraction of papers remaining uncited is just as bad for science as decreasing rejection rates: surely, just as not every paper can be good enough to deserve publication, even fewer can be good enough to deserve citation? An increasing number of articles with any citations at all can thus only mean one thing: the Dunning-Kruger effect has come to science. We have let so many incompetent people join the scientific enterprise that they can no longer discern between good science and bad science, and cite even the worst bottom of the scientific literature as if it were worth paying attention to. As a consequence of the rise of these bulk-publishing new journals which flood the scholarly literature with crap, articles get published that would never have been published before, and in their unwitting incompetence their authors cite each other out of sheer ignorance. With 80% of all submitted manuscripts being junk, clearing what passes for peer-review these days ceases to be an indicator of quality, and with almost all papers being cited, citations have become useless, too.

This may sound outrageous to people who visit this obscure blog, but if you follow the links in the paragraph above, you’ll find precisely this condescension and arrogance.

 

* Obviously, a more easily defensible relation would be that between citations and utility.

Posted on January 28, 2014 at 20:21 4 Comments

[updated] Churchland: “Ignore science and live a make-believe life!”

In: science news • Tags: brain, free will, neuroscience, slate, spontaneity

UPDATE: I’m still not completely sure I understood Churchland accurately, but there are comments suggesting I may just have completely misunderstood her replies to the interviewer. Read the interview and judge for yourself!

I really like Patricia Churchland. I’ve read some of her articles, know a little about her conceptual framework of ‘neurophilosophy’, really enjoyed her recent interview with Kerri Smith on Neuropod, and so on. I feel that we agree on pretty much everything, and I think I’d like the book she’s been promoting in recent interviews, if I were to read it. But this morning I read a statement from her which I found very hard to understand. The statement appears in an interview for the New Scientist, re-published on Slate. After asserting that it’s causal links all the way back to the big bang and that nothing we do has any impact on that, she states:

If you are crippled by the thought that it is causality all the way back, you have essentially made a decision to make no decisions. That is very unwise. If by thinking that free will is an illusion you believe that it does not matter whether you acquire good habits or bad, hold false beliefs or true, or whether your evaluation of the consequences of an option is accurate or not, then you are highly likely to make a right mess of your life.

Apart from the slight non-sequitur of first saying that what we decide makes no difference whatsoever, as it’s all determined anyway, and then turning around to warn of the consequences bad decisions can have, what I find particularly surprising is that she apparently doesn’t find it troubling at all to tell people to completely ignore the science that, in her view, tells them to quit worrying about the consequences of their actions. I could paraphrase her sentence above as “It’s very unwise to let your life be crippled by pesky scientific facts!” In her words, taking science at face value and deriving a self-consistent world-view from scientific data amounts to “metaphysical goofiness”. Do I understand her correctly that we should simply make up our own world and behave in it as we see fit, scientific evidence be damned? Or, more sinister still, that we should only reference scientific evidence when it suits our world-view, and that it’s OK to ignore it when it contradicts it?

Now, given the scientific evidence, there is no justification for a belief in determinism anyway. In that respect, determinism is like a religion: scarce evidence, if any, but many adherents nonetheless. As such, there really is no need to embrace non-sequiturs or to ignore scientific evidence: science leaves us plenty of elbow-room to allow for a scientific concept of free will (a recent article argues pretty much along the same lines). It is thus very surprising that Churchland not only doesn’t follow the evidence on determinism, but moreover appears to suggest ignoring evidence if it doesn’t correspond with the way we’d like to live.

I’m not quite sure what to make of all that.

Posted on December 10, 2013 at 10:26 7 Comments

The Achilles’ heel of open access mandates

In: science politics • Tags: infrastructure, mandates, open access, politicians, publishers

Luckily, there are many roads to open access to publicly funded research. Currently, none of them is really sustainable by itself, but in combination they keep pushing for more open access, and very successfully so. In a hypothetical forced-choice situation, I’d probably favor immediate, non-embargoed ‘green’ deposition in institutional repositories over any of the other routes: it does not require anyone to abandon the toll-access journals that still today make or break careers (absurd as this reality may be) and doesn’t require any additional funds out of declining grant budgets, to mention only two of several advantages.

However, essentially only physicists have developed any sort of deposition culture (on arXiv). For most other fields, ‘green’ deposition mandates are required to get the repositories filled to any reasonable level. It is precisely these mandates which are the Achilles’ heel of the green route: mandates are policies implemented by funders, and most funders are government branches or at least heavily government-influenced. This means that it matters what politicians think about such mandates. If they become convinced that green mandates are irrelevant, or even a bad idea, the mandates won’t be implemented. Obviously, publishers with their huge profits are in a much better position to buy access to politicians than we researchers who generate the literature in the first place. The willingness of publishers to use the profits derived from our work against us can be observed again and again: for instance, by hiring Eric Dezenhall for the PRISM initiative to sway public opinion against open access, or by paying two US lawmakers to draft legislation that would make green mandates illegal: the Research Works Act. The latest effort can be seen in section 302 of the discussion draft of the Frontiers in Innovation, Research, Science and Technology Act of 2013 (FIRST) in the US. This section aims to extend green embargoes (the time before an article in a green repository becomes publicly accessible) to a whopping three years.

Thus, one more reason to develop an institutional infrastructure that covers our text, data and software needs is independence from outside forces: if institutions decide to take care of their text, data and software needs themselves (and save a few billion every year as a fringe benefit), there is nothing publishers or politicians can do to interfere with that process.

Posted on November 20, 2013 at 14:35 46 Comments