bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Your university is definitely paying too much for journals

In: science politics • Tags: costs, journals, pricing, publishers, publishing, subscriptions

There is an interesting study out in the journal PNAS: “Evaluating big deal journal bundles“. The study details the disparity in negotiation outcomes among US institutions haggling with publishers over subscription pricing. For Science Magazine, John Bohannon of “journal sting” fame wrote a news article about the study, which did little to win back the respect he lost with his ill-fated sting piece. While the study itself focused on journal pricing among US-based institutions, Bohannon’s news article, where one would expect a somewhat broader perspective than in the commonly more myopic original papers, fails to mention that even the ‘best’ big deals grossly overcharge the taxpayer. Here is the figure from the article, apparently provided by the PNAS authors:

Journal subscription prices

This graph shows that some universities pay more for subscriptions than others. I’m not sure what exactly -130% is supposed to mean; I take it that UMass didn’t receive money from Springer, but still paid $168,224. So I read this graph as showing differences of up to 200% between what libraries are paying publishers, i.e., one university may pay up to 200% on top of what another library is paying for the same content: when one pays one million, another has to pay three. I’m not entirely sure that this is the correct reading of the Y-axis, but it’s the best I can do for now.

Being charged 200% more than other libraries for the same service may hurt, but consider what we would be paying if we didn’t use publishers at all, but instead published all our papers in a system like SciELO:

Comparison between legacy subscription publishers and SciELO in US$ prices per article published.


According to a Nature article citing Outsell, we currently pay US$5,000 per article to prevent public access to it, while the overall cost of a publicly accessible article in SciELO is only US$90. Try to explain that to a taxpayer on the street: you pay $5,000 for each article you’re not allowed to read, instead of just $90 for each article you could read. In the light of such numbers, it is a sign of a truly warped perspective when people can still discuss a few percentage points more or less for what they pay to block public access to research. Because this is what libraries do by paying subscription fees: they pay to block public access to research.

Be that as it may, if I were to calculate percentages from these differences, I could say that subscriptions are in excess of 5,000% more expensive than SciELO, that SciELO would cost institutions only 1.8% of what they are currently paying for the same service, or that we are overpaying legacy publishers by 98.2%. Either way you look at it, we could pay less than 2% of the current cost, or, equivalently, we are currently paying more than 5,000% too much. Compared to these figures, the 200% seems like a totally negligible number to me. In the words of Science Magazine: no matter what your university paid for subscriptions, it definitely got a horrible deal, even if it was the best deal in the country.
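The arithmetic behind these percentages is easy to check (a quick sketch using only the two per-article figures cited above):

```python
# Per-article cost figures cited above (Outsell via Nature, and SciELO)
subscription_per_article = 5000  # US$ per paywalled article
scielo_per_article = 90          # US$ per openly accessible article

ratio = subscription_per_article / scielo_per_article
print(f"subscriptions cost {ratio:.1f}x SciELO")        # ~55.6x
print(f"i.e. {100 * (ratio - 1):.0f}% more expensive")  # ~5456%
print(f"SciELO: {100 * scielo_per_article / subscription_per_article:.1f}% of current spending")  # 1.8%
```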

Nevertheless, given the effective distraction machine that Science Magazine is turning into, I expect people will discuss the irrelevant 200% much more extensively than the crucial 1.8% or 5,000%.

What we should instead discuss is the following:

Why are we paying to block public access to research, when we could save billions by allowing access?

Posted on June 17, 2014 at 14:44 104 Comments

If you comment online, you’re on stage

In: news • Tags: contrarians, Frontiers, science denialism, unpersuadables

Apparently, the outrage of science denialists over their exposure in a recent psychological paper shows no signs of abating. It was the denialists’ complaints and legal threats of libel/defamation suits that started the investigation of the paper, and in the comments to my post announcing my resignation as editor for Frontiers, too, the denialists complained that their public blog comments had been used in a scientific paper. Blog responses by Henry Markram, editor-in-chief of Frontiers, confirmed my decision to resign: essentially, he sided with the denialists and opined that public comments were not fair sources for psychological study.

Let’s stop for a moment and ponder whether there are some analogous offline scenarios to taking a public online comment and analyzing it.

Literature springs to mind: every literature department at every university takes published words and analyzes them. Apparently, Markram and the science denialists think this should all be abolished, or at least that it is a questionable practice which ought to be better regulated. Perhaps they think that literature departments should study literature without mentioning the authors? And once literature departments are up for grabs, why stop there? Why not prohibit political analysts from telling the public about their politicians? Obviously, you’d start with those analysts unfavorable to the ruling politicians. Why not fire all music critics from newspapers and magazines (those that still have such employment, that is)? Heck, aren’t “American Idol”, “America’s Next Top Model” and all the other casting shows exactly analogous: taking a public performance and scrutinizing it publicly? It’s perhaps worth reminding everybody that online comments are public performances, like it or not.

In essence, what Lewandowsky et al. have done in their ‘recursive fury’ paper is in more than a few ways akin to what the jury does in casting shows. They have been the jury while the science denialists went up on stage to sing and dance. If that had actually happened offline, Lewandowsky et al.’s jury comments might have gone like this:

“When you sing, it sounds like the quacking of a duck!”

“When you dance, you have the grace and elegance of an antelope – no, wait, what was the name of that animal with the trunk again?”

“You are seriously coyote ugly!”

After it occurred to them what fools they had made of themselves on stage, the denialists went to the TV station airing the show (Frontiers) to complain that broadcasting their embarrassing performance with the negative jury comments was defamatory. Obviously, in the real world, the TV channel people would have ROTFLTAO. In science publishing, Frontiers caved in and axed the broadcast.

Moral of the story: if you can’t take the consequences, don’t get up on stage.

Posted on May 8, 2014 at 12:57 12 Comments

Conflicts of interest even for ‘good’ scholarly publishers

In: science politics • Tags: libraries, open access, publishers, publishing

Thinking more generally about the “Recursive Fury” debacle, something struck me as somewhat of an eye-opener: the lack of support for the authors by Frontiers, and the demonstrative support by their institution, UWA (which posted the retracted article). Even though this might be the first time a scholarly journal has caved in to legal pressure from anti-science groups, it should perhaps come as no surprise. Ugo Bardi made a very valid point when he recently wrote:

The problem, here, is […] that we are stuck with a century old model of communication: expensive and ineffective and, worse, easily subverted by special interest groups

I disagree with his suspicion that Frontiers is a Ponzi scheme, as I quite like the federated structure of the enterprise: we are thousands of scientists and our work needs to be reviewed by thousands of scientists. Any system we might come up with for scholarly communication will, by necessity, be gigantic. But his insight quoted above really deserves special attention and ought to be a thought provoker for anybody in our business.

Any publisher has an inherent conflict of interest. Whether it is the GlamMag hyping cold fusion, stem cells or the latest flashy social-psychology experiment to sell subscriptions, the fledgling start-up that sees its venture going down a legal drain, or the idealistic non-profit trying to attract a few more papers in order to hire one more developer for the next great innovation: for all of them, the financial viability of the enterprise comes before science. This conflict of interest is usually not a major issue, but it surfaces often enough to make me wonder whether the arrangement is really a good idea, especially in this day and age, when digital publication costs virtually nothing.

As mentioned above, our own institutions obviously do not have this conflict of interest; on the contrary, they are the reason professional scientists exist at all. They can host our papers when publishers, even the ‘good guys’ like Frontiers, cannot.

Interestingly, just a few weeks earlier, Richard Poynder, after many years of covering the open access movement, had already gotten me started thinking along those lines, noting:

I believe the movement made a mistake in allying itself with OA publishers. What it failed to appreciate is that publishers’ interests are not the same as the interests of the research community.

Another piece of evidence for these conflicts of interest is the constant struggle over the kind of licenses OA journals attach to articles. Clearly, liberal re-use licenses are in the best interest of the one paying the bills: the taxpayer. Publishers obviously do not share these interests (neither, by the way, do some authors). And so there are constant attempts by various publishers to gain more control over our works, even if they are accessible for anyone to read.

These recent events have triggered the suspicion that maybe the entire concept of scholarly publishers is antiquated, irrespective of how open, innovative or non-profit the publisher is. In addition to the inevitable conflicts of interest, none of the publishers are seriously considering all three of our intellectual outputs: code, data and texts. They are only after our text summaries, i.e., our papers. The result, in an age of ever-sinking costs of making digital objects public, is that we overpay publishers by so much that no money is left for institutional infrastructure serving all three output modalities. Thus, even if the conflicts of interest were not an issue, separating the fruits of our intellectual labor not only into tens of thousands of journals, but also into separate, non-interoperable silos for code, data and text, makes no sense at all given today’s technology, and is outright insanity given tomorrow’s.

Maybe I should resign from all my volunteer positions with publishers?

Posted on April 17, 2014 at 18:22 10 Comments

Recursive fury: Resigning from Frontiers

In: news • Tags: contrarians, delusionals, unpersuadables

Last month, I was alerted to an outrageous act: a scientific journal caving in to pressure from delusionals demanding that the science about their publicly displayed delusions be hidden from the world. The NPG-owned publisher Frontiers retracted a scientific article with which they could not find anything wrong. The article

attracted a number of complaints which were investigated by the publisher. This investigation did not identify any issues with the academic and ethical aspects of the study. It did, however, determine that the legal context is insufficiently clear and therefore Frontiers wishes to retract the published article.

Essentially, this puts large sections of science at risk. Every geocentrist, flat-earther, anti-vaxxer, creationist, homeopath, astrologer, diviner and any other unpersuadable can now feel encouraged to challenge scientific papers in court. No, actually, they don’t even have to do that: they only have to threaten court action, and publishers will cave in and retract your paper.

As if we needed any more evidence that publishers are bad for science.

Now even the supposedly “good guys” show that they are not really on the side of science. Instead of at least waiting for a lawsuit to be filed, and perhaps attempting to stand their ground (as Simon Singh did), they just took the article down in what can only be called anticipatory obedience. This is no way to serve science.

A week or two ago, I talked with a Frontiers representative on the phone, and she explained a few things that prompted me to read the paper in question, so I could make up my own mind. After reading the paper, the attempted explanations on the phone rang hollow. I’m certainly not a lawyer, but if taking publicly posted comments, citing them in a scientific paper and discussing them under a hypothesis with a scientific track record and plenty of precedent constitutes cause for libel or defamation lawsuits, then it is certainly the law, and not the paper, that is at fault. It is quite clear why the content of the paper may feel painful to those cited in it, but as long as “conspiracist ideation” is not an official mental disorder, I cannot see any defamation. If you don’t want to be labeled a conspiracy theorist, don’t behave like one publicly on the internet. Therefore, after reading the paper, in my opinion Frontiers ought to have supported their authors, just as the authors’ home institution (UWA) is supporting them as its employees.

As the Frontiers representative did not disclose any details, and as what she was able to disclose was both very general (hence not very convincing) and covered by my promise not to disclose even that, one can only speculate about the motivations and considerations at Frontiers that led them to throw their authors under the bus.

Clearly, when legal problems are cited, it is always money that is at stake; I’d be surprised if this were controversial. I have heard through the grapevine that Frontiers may have felt some pressure recently to make more money and to publish more papers. I was told that they have sent out literally millions of spam emails to addresses harvested from, e.g., PubMed, soliciting manuscript submissions. Obviously, a costly libel or defamation suit in the UK would not have been a positive on the balance sheets.

Alas, as much fun as all of this speculation may be, it is not really relevant to my conclusion: Frontiers retracted a perfectly fine (according to their own investigation) psychology paper due to financial risks to themselves. It would be at best a rather lame excuse, and at worst rather patronizing, if Frontiers were to claim they were protecting their authors from lawsuits by removing the ‘offending’ article. This is absolutely no way to “empower researchers in their daily work“. In the coming days I will send resignation letters to the Frontiers journals to which I have donated my free time for a range of editorial duties. Obviously, I will complete the tasks I have already started, but I will not accept any new tasks at Frontiers, at least not until they show more support for their authors.


P.S.: I should perhaps add that the reason I have supported Frontiers almost since its inception is that they were, and in many respects still are, among the most innovative publishers out there, driving our communication system away from the entirely antiquated status quo. Of course, Frontiers still serves this particular function very well. My criticism very specifically targets the handling of this particular paper and leaves all the other positive contributions of Frontiers to our publishing ecosystem intact. I guess that much of my personal disappointment comes from a feeling of betrayal, having felt that Frontiers was on the side of researchers for so many years. I would have expected such behavior from legacy publishers, but not from Frontiers. This incident, together with several other events over the past month or two, has prompted me to think more generally about my involvement with publishers, and there will be another post on this topic at some point.

Posted on April 9, 2014 at 18:11 150 Comments

FIRST: the Research Works Act all over again

In: science politics • Tags: FIRST, lobbyism, polticians, publishers, publishing, RWA

Do you remember the RWA? It was a no-brainer already back then that the US$40k Elsevier spent was well invested: for months, Open Access activists were busy derailing this legislation, leading to a virtual standstill on all other fronts. Now, just over two years later, two Republican representatives have introduced the Frontiers in Innovation, Research, Science and Technology (FIRST) Act. According to SPARC:

This provision would impose significant barriers to the public’s ability to access the results of taxpayer-funded research. Section 303 of the bill would undercut the ability of federal agencies to effectively implement the widely supported White House Directive on Public Access to the Results of Federally Funded Research and undermine the successful public access program pioneered by the National Institutes of Health (NIH) – recently expanded through the FY14 Omnibus Appropriations Act to include the Departments of Labor, Education and Health and Human Services.

The two sponsors of the bill are Chairman Lamar Smith (R-TX) and Rep. Larry Bucshon (R-IN). Not surprisingly, both sponsors are backed by publisher funding: Lamar Smith receives annual contributions from Elsevier and other publishers, and both received contributions from a large number of scholarly (primarily medical) associations that also publish their own subscription journals, some on the order of several tens of thousands of dollars. Among these scholarly societies were:

| Society | Journal(s) | Published by |
|---|---|---|
| American Medical Association | JAMA network | AMA |
| American Society of Anesthesiologists | Anesthesiology | Wolters Kluwer |
| College of American Pathologists | Archives of Pathology & Laboratory Medicine | Allen Press |
| American College of Radiology | Journal of the American College of Radiology | Elsevier |
| Society of Thoracic Surgeons | Annals of Thoracic Surgery | Elsevier |
| American Academy of Orthopaedic Surgeons | Journal of the American Academy of Orthopaedic Surgeons; The Journal of Bone and Joint Surgery; Orthopaedic Knowledge Online Journal | Highwire Press; Kent R. Anderson; AAOS |
Of course, nobody knows how much influence their contributions bought these contributors, but this short list already reads like a who’s who of corporate publishers with a track record of lobbying against public access to public research. One cannot exclude that it is pure coincidence that these two politicians with a track record of publisher contributions are now drafting publisher-friendly legislation, and thereby doing the public a disservice.

Posted on March 12, 2014 at 23:21 28 Comments

Even the most thorough peer-review at the ‘best’ journals not up to snuff?

In: science politics • Tags: GlamMagz, peer-review

Talk about egg on face! Nature, “the world’s best science” magazine, sets out to publish back-to-back papers on, of all topics, stem cell science: the same field that brought Science Magazine Woo-Suk Hwang, and Elsevier’s Cell Mitalipov’s ‘errors’. So Nature was warned and, presumably, they got down to business and did the very best they could to prevent Cell‘s and Science‘s mishaps from happening to them. They screened the manuscripts for nine months, probably requiring some extra experimentation, and after this procedure went on to publish the two papers, showing that an unlikely and trivially easy treatment could generate stem cells. And then, perhaps not too surprisingly for people reading this obscure blog, the unthinkable happened: barely a week after publication, the first issues were spotted: spliced gel lanes, duplicated images. Later, as failures to replicate accumulated, calls for retraction were issued, this time even by one of the papers’ authors, who had previously claimed he had reproduced the technique.

What was it that took ‘the internet’ a week, but that “the world’s best science” magazine could not detect in nine months? The wisdom of the crowd. There is no evidence to justify the standing of “the world’s leading journals”, and the rising tide of post-publication review embarrassing legacy review only corroborates this insight: GlamMagz are undeserving of their status.

Posted on March 12, 2014 at 15:58 29 Comments

Interested in testing an RSS reader for science?

In: news • Tags: alert system, feed reader, HeadsUp, RSS

I can now announce the first closed beta testing phase of an RSS reader intended for scientists. So far, we have something like a Feedly clone with a few extras built in, such as collecting the most-tweeted articles of the last 24h and some rudimentary ability to sort/filter either feeds or groups of feeds. It’s not a whole lot yet, so keep your expectations low 🙂 We’re just getting started.

One of the goals of the project is to make this feed reader modular, such that each user can write their own sort/filter/discover algorithm to be plugged into the reader anywhere.
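To illustrate what such a pluggable design could look like (a purely hypothetical sketch, not the reader’s actual code or API), each user-written algorithm could simply be a registered callable over feed entries:

```python
# Hypothetical plug-in registry: a user-supplied sort/filter algorithm is
# just a callable taking a list of feed entries and returning a new list.
FILTERS = {}

def register(name):
    def deco(fn):
        FILTERS[name] = fn
        return fn
    return deco

@register("most_tweeted")
def most_tweeted(entries):
    return sorted(entries, key=lambda e: e.get("tweets", 0), reverse=True)

@register("newest_first")
def newest_first(entries):
    return sorted(entries, key=lambda e: e["published"], reverse=True)

feed = [{"title": "A", "tweets": 3, "published": 2},
        {"title": "B", "tweets": 9, "published": 1}]
print(FILTERS["most_tweeted"](feed)[0]["title"])  # "B"
```

The point of the registry is that a new algorithm only has to honor the entries-in, entries-out contract to be plugged in anywhere in the pipeline.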

Another goal is to use social technology to allow for following the reading habits of scientists working in relevant fields, à la “readers who have read this article, have also read this one”.

The functionalities we’re thinking of go beyond simple keyword filtering/sorting, however. A long-term goal is to have the reader learn from what we click on, save or recommend, and suggest relevant literature from that. For instance, one could think of a selection of feeds from topically highly relevant journals (sel1) and another selection of possibly relevant journal feeds (sel2). The reader would learn from what you click on, save and recommend in sel1 to pick likely relevant content out of sel2 for you.
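A minimal sketch of how such sel1-to-sel2 learning might work, here with a naive log-odds weighting over title words; the titles, and the approach itself, are illustrative assumptions, not the actual implementation:

```python
from collections import Counter
import math

def tokenize(title):
    return [w.lower() for w in title.split() if len(w) > 3]

# Learn term weights from what the user clicked vs. skipped in sel1
def train(clicked_titles, skipped_titles):
    clicked = Counter(t for title in clicked_titles for t in tokenize(title))
    skipped = Counter(t for title in skipped_titles for t in tokenize(title))
    vocab = set(clicked) | set(skipped)
    # log-odds weight per term, with add-one smoothing
    return {t: math.log((clicked[t] + 1) / (skipped[t] + 1)) for t in vocab}

def score(title, weights):
    return sum(weights.get(t, 0.0) for t in tokenize(title))

weights = train(
    clicked_titles=["Operant self-learning in Drosophila",
                    "Motor learning in fruit flies"],
    skipped_titles=["Tax policy and fisheries subsidies"],
)
# Rank unseen sel2 entries by the learned weights
ranked = sorted(["Self-learning circuits in the fly brain",
                 "Fisheries subsidies revisited"],
                key=lambda t: score(t, weights), reverse=True)
print(ranked[0])  # the fly paper outranks the fisheries one
```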

Again, at the core of the reader is an open architecture that allows the reader to grow with its user base. Scientists are a highly trained and analytical bunch with, collectively, more than enough expertise to come up with a modern information system that picks the most relevant articles from the roughly 2 million papers published every year. The exponential growth and spectacular success of R is testament to this potential.

So, if you’re interested in contributing to this project by joining a group of about 25 closed beta testers, please email me at bjoern@brembs.net and I will send you instructions on how to join the test. Obviously, if you’d like to contribute by coding, by all means do also let me know!

Posted on March 7, 2014 at 14:17 46 Comments

What is the difference between text, data and code?

In: science politics • Tags: code, data, open science, publishing, software

tl;dr: So far, I can’t see any difference in principle between our three kinds of intellectual output: software, data and texts.

 

I admit I’m somewhat surprised that there appears to be a need to write this post in 2014. After all, this is not really the dawn of the digital age any more. Be that as it may, it is now March 6, 2014, six days since PLoS’s ‘revolutionary’ data sharing policy was revealed, and only a few people seem to notice the irony of avid social media participants pretending it’s still 1982. For the uninitiated, just skim Twitter’s #PLoSfail, read Edmund Hart’s post or see Terry McGlynn’s post for some examples. I’ll try to refrain from reiterating any arguments made there already.

Colloquially speaking, one could describe the scientific method, somewhat shoddily, as making an observation, making sense of the observation and presenting the outcome to interested audiences in some version of language. Since the development of the scientific method somewhere between the 16th century and now, this is roughly how science has progressed. Before the digital age, it was relatively difficult to let everybody who was interested participate in the observations. Today, this is much easier. It still varies tremendously between fields, obviously, but compared to before, it’s a world of difference. Today, you could say that scientists collect data, evaluate the data and then present the result in a scientific paper.

Data collection can either be done by hand or more or less automatically. It may take years to sample wildlife in the rainforest and minutes to evaluate the data on a spreadsheet. It may take decades to develop a method and then seconds to collect the data. It may take a few hours to generate the data, first by hand and then by automated processing, but decades before someone else comes up with the right way to analyze and evaluate the data. What all scientific processes today have in common is that at some point the data becomes digitized, either by commercial software or by code written precisely for that purpose. Perhaps not in all, but surely in the vast majority of quantitative sciences, the data is then evaluated using either commercial or hand-coded software, be it for statistical testing, visualization, modeling or parameter/feature extraction, etc. Only after all this is done and understood does someone sit down and attempt to put the outcome of this process into words that scientists not involved in the work can make sense of.

Until about a quarter of a century ago, essentially all that was left of the scientific process above were some instruments used to make the observations and the text accounts of them. OK, maybe some paper records and later photographs. With a delay of about 25 years, the scientific community is now slowly awakening to the realization that the digitization of science would actually allow us to preserve the scientific process much more comprehensively. Besides being a boon for historians, reviewers, committees investigating scientific misconduct and the critical public, preserving this process promises the added benefit of being able to reward not only those few whose marketing skills surpass the average enough to manage publishing their texts in glamorous magazines, but also those who excel at scientific coding or data collection. For the first time in human history, we may even have a shot at starting to develop software agents that can trawl data for testable hypotheses no human could ever come up with; proofs of concept already exist. There is even the potential to alert colleagues to problems with their data, to use the code for purposes the original author never dreamed of, or to extract parameters from the data that the experimentalist lacked the skill to extract. In short, the advantages are too many to list and reach far beyond science itself.

Much as with the earlier, once equally novel, requirement of proofs for mathematical theorems, or the requirement of statistics and sound methods, there is again resistance from the more conservative sections of the scientific community, for largely the same reasons, subsumed by: “it’s too much work and it’s against my own interests”.

I can sympathize with this general objection, as making code and data available is more work and does facilitate scooping. However, the same can be said of publishing traditional texts: it is a lot of work that takes time away from experiments and opens all the floodgates of others making a career on the back of your work. Thus, any consistent proponent of “it’s too much work and it’s against my own interests” ought to resist text publications with just as much fervor as they resist publishing data and code. The same arguments apply.

Such principles aside, in practice our general infrastructure of course makes it much too difficult to publish text, data or software alike, which is why so many of us now spend so much time and effort on publishing reform, and why our lab in particular is developing ways to improve this infrastructure. But that, too, does not differ between software, data and text: our digital infrastructure is dysfunctional, period.

Neither does making your data and software available make you particularly more liable to scooping or exploitation than the publication of your texts does. The risks vary dramatically from field to field and from person to person and are impossible to quantify. Obviously, just as with text publications, data and code must be cited appropriately.

There may be instances where the person writing the code or collecting the data already knows what they want to do with the code/data next, but this will of course take time, and someone less gifted with ideas may be on the hunt for an easy text publication. In such (rare?) cases, I think a practical solution would be to attach a limited provision to the published data/code, stating the precise nature of the planned research and the time-frame within which it must be concluded. Because of its digital nature, any violation of such a provision would be easily detectable. The careers of our junior colleagues need to be protected, and any publishing policy on text, data or software ought to strive to maximize such protections without hazarding the entire scientific enterprise. Here, too, there is no difference between text, data and software.
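Such a provision could even be machine-readable. A hypothetical sketch (the dataset name, field names and dates are invented for illustration):

```python
from datetime import date

# Hypothetical machine-readable "limited provision" attached to a dataset:
# the data is open either way; only the named analysis is reserved, and the
# reservation lapses automatically on the stated date.
provision = {
    "dataset": "flight-arena-traces-v1",                  # invented identifier
    "reserved_analysis": "operant self-learning model fits",
    "reserved_until": date(2015, 3, 1),
}

def reservation_active(prov, today):
    """True while the named follow-up analysis is still reserved."""
    return today <= prov["reserved_until"]

print(reservation_active(provision, date(2014, 6, 1)))  # True: still reserved
print(reservation_active(provision, date(2016, 1, 1)))  # False: lapsed
```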

Finally, one might make a reasonable case that the rewards are stacked disproportionately in favor of text publications, in particular publications in certain journals. However, it almost goes without saying that it is unrealistic to expect tenure committees and grant evaluators to assess software and data contributions before anybody is even contributing and sharing data or code. Obviously, in order to reward coders and experimentalists just as we reward the Glam hunters, we first need something to reward them for. That being said, in today’s antiquated system it is certainly a wise strategy to prioritize Glam publications over code and data publications, but without preventing change for the better in the process. This is obviously a chicken-and-egg situation which is not solved by the involved parties waiting for each other. Change needs to happen on both sides if any change is to happen.

To sum it up: our intellectual output today manifests itself in code, data and text. All three are complementary and contribute equally to science. All three expose our innermost thoughts and ideas to the public; all three make us vulnerable to exploitation. All three require diligence, time, work and frustration tolerance. All three constitute the fruits of our labor, often our most cherished outcome of passion and dedication. It is almost an insult to the coders and experimentalists out there that these fruits should remain locked up and decay any longer. At the very least, any opponent of code and data sharing ought, to be consistent, to also oppose text publications for exactly the same reasons. We are already 25 years late in making our CVs contain code, data and text sections under the “original research” heading. I see no reason why we should be rewarding Glam-hunting marketers any longer.

UPDATE: I was just alerted to an important and relevant distinction between text, data and code: file extension. Herewith duly noted.

Posted on March 6, 2014 at 21:07 68 Comments

How scientific are scientists, really?

In: science politics • Tags: citations, impact factor, statistics

In what other area of scholarship are replications of the very same experiment published again and again, each time received with surprise, only to be immediately and completely ignored until the next study? Case in point, from an area that ought to be relevant to almost every single scientist on the planet: research evaluation. The first graph I know of to show the right-skewed distribution of citation data is from 1997:

Right-skewed citation data from PO Seglen, BMJ 1997;314:497

PO Seglen, the author of the above paper, concludes his analysis with the insight that “the journal cannot in any way be taken as representative of the article”.

In our paper reviewing the evidence on journal rank, we count a total of six subsequent publications (and one prior, from 1992) that present the right-skewed nature of citation data in one way or another. In other words, it is an established and often-reproduced fact that citation data are right-skewed: most papers receive few citations, while a small number receive very many. This distribution of course entails that representing it by the arithmetic mean is a mistake that would make an undergraduate student fail their statistics course. Adding to the already long list of replications is Nature Neuroscience, home of the most novel and surprising neuroscience, with this ‘unexpected’ graph:
[Figure: skewed citation distribution of Nature Neuroscience papers (nn0803-783-F1)]

Only this time, the authors are not surprised, appropriately cite PO Seglen’s 1997 paper and acknowledge that of course this finding is nothing new: “reinforcing the point that a journal’s IF (an arithmetic mean) is almost useless as a predictor of the likely citations to any particular paper in that journal”. Kudos, Nature Neuroscience editors!

What puzzles me just as much as the authors and what prompted me to write this post is their last sentence:

Journal impact factors cannot be used to quantify the importance of individual papers or the credit due to their authors, and one of the minor mysteries of our time is why so many scientifically sophisticated people give so much credence to a procedure that is so obviously flawed.

In which other area of study does it take decades and countless replications before a basic fact is generally accepted? Could it be that we scientists are, perhaps, not as scientifically sophisticated as we’d like to see ourselves? Aren’t we, perhaps, just as dogmatic, lazy, stubborn and willfully ignorant as any random person off the street? What does this persistent resistance to education say about the scientific community at large? Is it not an indictment of the gravest sort of how the scientific community governs itself?

Posted on February 14, 2014 at 18:23 6 Comments

Two evolutionarily conserved, fundamental learning mechanisms

In: own data • Tags: behavior, classical, conditioning, evolution, neuroscience, operant, self-learning

At this year’s Winter Conference on Animal Learning and Behavior, I was invited to give the keynote presentation on the relationship between classical and operant conditioning. Using the slides below, I argued that Skinner had already identified a weakness in his paradigm as early as 1934, when he discussed this relation in the scientific literature of the time. I went on to explain how neuroscientists using invertebrate model systems have since been able to overcome this weakness.

Drawing from evidence in the marine snail Aplysia and the fruit fly Drosophila, I detailed some of the physiological and biochemical mechanisms underlying learning in a ‘pure’ implementation of operant conditioning, without the confounding variable identified by Skinner more than seventy years before. These mechanisms reveal the network, physiological and biochemical differences between forms of learning that are concerned with events in the world around the organism (world-learning), and those that are concerned with events that originate in the organism itself (self-learning).

These two forms of learning, world-learning and self-learning, constitute two fundamental, evolutionarily conserved learning mechanisms among a growing inventory of processes involved in memory formation.

Pavlovian and Skinnerian Processes are Genetically Separable from Björn Brembs

Posted on February 13, 2014 at 00:39 Comments Off on Two evolutionarily conserved, fundamental learning mechanisms

bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License.
