bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


The cost of knowledge – in 2003?

In: blogarchives • Tags: englund, impact factor, journal rank, publishing

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from March 29, 2012:

My former supervisor (in 1993!) Göran Englund may not be a Fields Medalist (he’s an ecologist!), but already in 2003 he saw corporate publishers behaving in the same way that gave rise to the Elsevier boycott this year, almost ten years later: extorting university libraries with overpriced journals. Back then, he compiled a “blacklist” of journals in his field of ecology, ranked by subscription price per article. Interestingly, the cheap end of this list is populated by the high-ranking society and non-profit journals, while the expensive spots are occupied by the lower-ranking, overpriced journals of corporate publishers. Unfortunately, he never published his analysis, but after a phone conversation today, initiated for an entirely different reason, he sent me his blacklist. Here’s what he said about it nine years ago:

The crisis in academic publishing

  • The market is dysfunctional – there is no mechanism regulating journal prices.
  • Prices of commercially published journals often increase by 10-20% per year
  • In ecology the average prices of commercially published journals are four times higher than those published by non-profit organizations.
  • Libraries cancel subscriptions – Our research is not efficiently disseminated.
  • We pay more and get less.

What can be done?

  • Examine the pricing policy of any commercially published journal before you contribute as an author, reviewer, or editor. If possible, refuse to do business with publishers who practice “predatory pricing.”
  • Submit papers to journals that have reasonable prices.
  • As a member of a scholarly association, encourage the creation of competitors to expensive commercial journals.
  • Inform your colleagues.

More information on the crisis at: www.createchange.org.

The document also contains one small figure at the end that I thought I should paste in here:

[Figure: eco_price.png (subscription price per article for ecology journals)]

I’ve converted the entire document to PDF for everyone to enjoy. It almost goes without saying: after 2003, Göran never again published in, reviewed for, or edited for any of the commercial journals.

Posted on September 3, 2013 at 16:45 Comments Off on The cost of knowledge – in 2003?

How to recharge in Sweden

In: personal • Tags: blueberries, flyfishing, lake, mushrooms, river, sweden, trout

Since the birth of our daughter, we spend our summers in Sweden, just as I did when I was little. This not only allows her to get more practice in Swedish than she gets talking with just me or her grandmother, but also lets us really recharge our batteries in the fabulous wilderness up there. Here are a few of the components needed for such an efficient charger:

For instance, you can just step out and collect some of these

[photo]

Until you have enough of them, maybe like this

[photo]

Then you get all the leaves and stuff out

[photo]

and put them all into a box

[photo]

Then, every morning, you can put some of those into your Swedish filmjölk before you add your cereal:

[photo]

Or, of course, you can bake a blueberry pie (which was eaten before I could take a picture of it).

As the wilderness begins right at your doorstep (at least where we were), you can take a few steps and discover some of these

[photo]

and then fry them right away:

[photo]

Another thing you can do is go hiking to beautiful places such as these

[photo]

[photo]

[photo]

[photo]

Or you can take a boat and enjoy some of the many lakes there

[photo]

As you can see, there’s plenty of water, so you can do some behavioral experiments using mockups mimicking insects and prey-fish and perhaps trigger the predatory response in some of these

[photo]

which one can easily turn into this

[photo]

or some larger specimens such as this one

[photo]

which can quickly evolve into these different forms

[photo]

[photo]

[photo]

I hope you can imagine how sad it is to leave this place, but also how rested and recharged we all feel now.

Posted on September 3, 2013 at 10:16 1 Comment

Unquestioning dogma: the gatekeepers of science

In: blogarchives • Tags: GlamMagz, publishing, reputation

This post, re-blogged from the archives, is from December 6, 2010:

This morning my friend Ramy reminded us of the recent spats over two PLoS One publications (Darwinius, Red Sea) and how they were used to question the ‘reputation’ of PLoS One as a journal. Of course, it is about as meaningful to talk about the reputation of a journal as it is to talk about the reputation of the cover of a book. Journals are containers which say very little about their content. But on to the really relevant point:

Specifically, Ramy pointed out how the current spat about a publication in the journal Science on a purportedly arsenic-based lifeform (see, e.g., Pharyngula and especially Rosie Redfield) didn’t reflect on Science at all, despite the basically identical story-line: media hype before publication followed by more sober commentary from the scientific community afterwards. Why is PLoS One criticized in the first two cases, while nobody questions Science in this (or the numerous other) cases? Clearly, the two GlamMagz Nature and Science both have their share of in some cases pretty embarrassing blunders. My personal favorite is a paper in Nature about fly thermosensation, easily the worst-conducted study in this field in quite a few years. Yet nobody questions the ‘reputation’ of Nature. Also in this case, none of the critical commenters question the legitimacy of the gatekeeper function that the GlamMagz are so happy to tout.

Let’s be honest about it: there’s no journal without fault. Everyone makes mistakes. Journals are no more gatekeepers than the persons working there. Any perceived hierarchy among journals is merely that: perceived. A perception caused by visibility, historical baggage, group-think and circular reasoning.

It doesn’t matter where something is published – what matters is what is being published. Given the obscene subscription rates some of these journals charge, they should, if anything, be held to a higher standard, and their ‘reputation’ (i.e., their justification for charging these outrageous subscription fees!) should be constantly questioned, rather than sustained by the unquestioning dogma that anything published there must be relevant because it was published there. If anything, every single contested paper should be used to question the level of subscription fees raised by these journals.

In fact, every retraction should lead to an immediate reduction in subscription fees for the journal in which the retracted paper was published, because the journal failed to serve its gatekeeper purpose. If the journals justify, as they do, their obscene subscription extortion with their outstanding peer-review process, their price needs to drop every time that process fails. Given the hyper-inflation of retractions, we should see a precipitous drop in subscription charges immediately, should such a policy be enforced.
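
To make the proposal concrete, here is a minimal sketch of such a retraction-indexed pricing rule; the base fee, the 5% per-retraction cut, and the compounding scheme are my own illustrative assumptions, not anything specified above:

```python
def adjusted_fee(base_fee: float, retractions: int,
                 cut_per_retraction: float = 0.05) -> float:
    """Subscription fee after compounding a fixed cut per retraction.

    The 5% cut per retraction is a hypothetical parameter, chosen
    purely for illustration.
    """
    return base_fee * (1.0 - cut_per_retraction) ** retractions

# A journal charging 20,000 (currency units) with five retractions on record:
fee = adjusted_fee(20_000, 5)
```

Under such a rule, a journal with a hyper-inflating retraction record would see its price drop precipitously, exactly as argued above.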

UPDATE: Almost simultaneously, pretty much along the same lines is Egon’s post on trust in science.


Wolfe-Simon, F., Blum, J., Kulp, T., Gordon, G., Hoeft, S., Pett-Ridge, J., Stolz, J., Webb, S., Weber, P., Davies, P., Anbar, A., & Oremland, R. (2010). A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus Science DOI: 10.1126/science.1197258

Posted on September 1, 2013 at 19:39 2 Comments

The danger of universal gold open access

In: blogarchives • Tags: libraries, open access, publishing

This post, re-blogged from the archives, is from November 7, 2012:

Having strongly supported every open access initiative over the last almost ten years, I now see a looming threat that the situation may deteriorate beyond the abysmal state scholarly publishing is in right now.

Yes, you read that right: it can get worse than it is today.

What would be worse? Universal gold open access – that is, every publisher charges the authors what they want for making the articles publicly accessible. I’ve been privately warning of this danger for some time, and now an email and a blog post by Ross Mounce reminded me that it is about time to make my lingering fear a little more public. He wrote:

Outrageous press release from Nature Publishing Group today.

They’re explicitly charging more to authors who want CC BY Gold OA, relative to more restrictive licenses such as CC BY-NC-SA. Here’s my quick take on it: https://rossmounce.co.uk/2012/11/07/gold-oa-pricewatch

More money, for absolutely no extra work.

How is that different from what these publishers have been doing all these years and still are doing today?

What is so surprising about charging for nothing? That’s been the modus operandi of publishers since the advent of the internet.

Why should NPG not charge, say, 20k USD for an OA article in Nature, if they so choose?

If people are willing to pay more than 230k ($58,600 a year) for a Yale degree or over 250k ($62,772 a year) just to have “Harvard” on their diplomas, why wouldn’t they be willing to shell out a meager 20k for a paper that might give them tenure? That’s just a drop in the bucket, pocket cash.

I’d even be willing to bet that the hard limit for gold OA luxury segment publishing will be closer to 50k or even higher as multiple authors can share the cost. Without regulation, publishers can charge whatever the market is willing and able to pay. If a Nature paper is required, people will pay what it takes.
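
The cost-sharing arithmetic behind that bet is trivial; the 50k figure below is just the hypothetical ceiling floated above:

```python
def per_author_share(apc: float, n_authors: int) -> float:
    """Hypothetical article processing charge, split among co-authors."""
    return apc / n_authors

# A hypothetical 50k APC, split among typical author-list sizes:
shares = {n: per_author_share(50_000, n) for n in (1, 5, 10)}
# With ten co-authors, even a 50k charge comes to only 5k per head.
```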

If libraries let themselves be extorted by publishers out of fear they’ll get yelled at by their faculty, surely scientists will let themselves get extorted by publishers out of fear they won’t be able to put food on the table nor pay the rent without the next grant/position.

Who seriously believes that only because they now make some articles OA, publishers would all of a sudden become non-profit organisations?

I don’t see anything extraordinary in this press release at all, completely normal and very much expected. In fact, the price difference is actually quite small.

I really have no idea what’s supposed to be so outrageous about this.

Obviously, the alternative to gold OA cannot be a subscription model. I’ve written repeatedly that I believe a rational solution would be to have libraries archive and make accessible the fruits of our labor: publications, data and software. There can be a thriving marketplace of services around these academic crown jewels, but the booty stays in-house.

At the very least, if there ever should be universal gold OA, the market needs to be heavily regulated with drastic price caps below current article processing charges, or the situation will be worse than today: today, you have to cozy up with professional editors to get published in ‘luxury segment’ journals. In a universal OA world, you would also have to be rich. This may be better for the public in the short term, as they would then at least be able to access all the research. In the long term, however, if science suffers, so, eventually, will the public.

Every market I know has a luxury segment. I’ll gladly rest my fears if someone shows me a market without such a segment and how it is similar to a universal OA academic publishing market. Until then, I’ll be working towards getting rid of publishers and journal rank.

Posted on August 30, 2013 at 19:37 6 Comments

Flashback: Programming Free Will – creative robots

In: blogarchives • Tags: Briegel, free will, robots, spontaneity

This post, re-blogged from the archives, is from September 13, 2011:

I wasn’t planning to comment on Kerri Smith’s piece on Free Will (probably paywalled) in the latest issue of Nature magazine. However, this morning I read a paper on Free Will in robots (or rather ‘agents’), which urged me to suggest some updates to the sadly outdated discussion in the Nature article (otherwise Ms. Smith produces outstanding work, especially her podcasts!).

Her article starts out with a modern variation of Libet’s famous experiments. These experiments can be caricatured like this: “press a button whenever you feel like it and watch a clock while you’re making the decision to tell us when you think you’ve made the decision”. It is then little surprise that some form of brain activity (either electrical, in the case of Libet or blood flow, in the case of the modern fMRI studies) can be recorded before the time point when the study participants self-reportedly made the decision.

Detailed treatment of these experiments isn’t really needed here, as any biologist realizes that all our thoughts are indeed based on brain activity, and thus any conscious act or thought must be either simultaneous with or preceded by nervous activity. The amount of this time difference may vary with the task and the method of activity measurement. The fMRI brain scans allowed researchers to predict a binary choice with 60% accuracy, i.e., just above the 50% chance level. Clearly, even with modern brain scans, a brain isn’t even close to a system one might call ‘deterministic’ by any stretch of the word.

From the way I read the article, the most important point drawn by the researchers is that the thought process itself is based on brain activity. John-Dylan Haynes:

“I’ll be very honest, I find it very difficult to deal with this,” he says. “How can I call a will ‘mine’ if I don’t even know when it occurred and what it has decided to do?”

I’d counter that question with another question: what else than brain activity would you have expected when you peered into a brain? Dualism has been dead since Popper’s and Eccles’ “The Self and its Brain” in 1977. Why is this article still beating a dead horse?

About half way through the article, this exact issue is raised:

The trouble is, most current philosophers don’t think about free will like that, says Mele. Many are materialists — believing that everything has a physical basis, and decisions and actions come from brain activity. So scientists are weighing in on a notion that philosophers consider irrelevant.

Precisely! And yet, towards the end of the article, the dualism creeps back in, by the same philosopher who so rightly dismissed it:

Philosophers are willing to admit that neuroscience could one day trouble the concept of free will. Imagine a situation (philosophers like to do this) in which researchers could always predict what someone would decide from their brain activity, before the subject became aware of their decision. “If that turned out to be true, that would be a threat to free will,” says Mele.

Even if this prediction were possible, any decision would still be ours, as it would still not be possible to predict the decision from the time when the decision-task was initiated. In other words: one would need to observe the decision-making process for some time in order to eventually project where it is going to end up. I think it is very likely that we will be able to go rather far with this approach, but because our brain is still calling the shots, this has absolutely no relevance for the question of how free the decision was. We are not slaves of our brains, we are our brains. This calls for an upgrade of our understanding of human nature; anything less vastly underestimates the abilities of brains.

But enough of the disappointing aspects of this article. I was reminded of it because of a very exciting article by a physicist in Austria, Hans J. Briegel: “On machine creativity and the notion of free will“. It displayed a modern understanding of the scientific issues surrounding a materialistic (i.e., scientific) notion of free will and provided a proof of principle of how Free Will may be implemented in physical objects. And these objects don’t even have to be biological in origin! As Briegel writes:

To put it provocatively, even if human freedom were to be an illusion, humans would still be able, in principle, to build free robots. Amusing.

Amusing indeed! The paper by Briegel elaborates on a method to provide software agents with a degree of freedom without breaking any laws of nature, a method he calls ‘projective simulation‘.

Briegel claims that Free Will by projective simulation could, “in principle, be realized with present-day technology in form of […] robots.” Projective simulation means that the robots have a flexible sort of memory that allows the agent to simulate situations that are similar, but not identical, to events it has encountered before. There are rules according to which these ‘projections’ can be generated by the robot, so they’re not arbitrary, but they contain a degree of randomness (or ‘spontaneity’) that allows them to “increasingly detach themselves from a strict causal embedding into the surrounding world”. Briegel realizes that, in biological systems, much of the required random variability is readily available, but because we don’t know how it is being used, we cannot say much about its relevance. In fact, with reference to Quantum Indeterminacy, he arrives at almost the same wording as I did in my Proc. Roy. Soc. article:

We may not need quantum mechanics to understand the principles of projective simulation, but we have it. And this is our safeguard that ensures true indeterminism on a molecular level, which is amplified to random noise on a higher level. Quantum randomness is truly irreducible and provides the seed for genuine spontaneity.

It is gratifying to see how closely we converged without knowing of each other. Here is my way of putting it:

Because of this nonlinearity, it does not matter (and it is currently unknown) if the ‘tiny disturbances’ are objectively random as in quantum randomness or if they can be attributed to system, or thermal noise. What can be said is that principled, quantum randomness is always some part of the phenomenon, whether it is necessary or not, simply because quantum fluctuations do occur. Other than that it must be a non-zero contribution, there is currently insufficient data to quantify the contribution of such quantum randomness. In effect, such nonlinearity may be imagined as an amplification system in the brain that can either increase or decrease the variability in behaviour by exploiting small, random fluctuations as a source for generating large-scale variability.
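
A toy illustration of this amplification idea (my own sketch, not a model from either paper): iterate a chaotic map from two seeds differing by a trillionth, a stand-in for molecular-scale noise, and watch the trajectories diverge to order one:

```python
def iterates(x0: float, steps: int = 60) -> list:
    """Trajectory of the chaotic logistic map, a generic nonlinear amplifier."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))  # chaotic for r = 4
    return xs

noise = 1e-12                 # stand-in for thermal/quantum fluctuations
a = iterates(0.3)
b = iterates(0.3 + noise)
divergence = max(abs(x - y) for x, y in zip(a, b))
# divergence reaches order 1 although the seeds differed by only 1e-12
```

The map itself is deterministic; the point is only that a nonlinearity can blow microscopic fluctuations up to behavioural scale, which is all the quoted passage requires.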

If this topic is of any interest to you, you really ought to read Briegel’s paper!

Basically, the discussion about freedom today has progressed beyond the question of whether it exists (the dualistic notion, everyone agrees, does not) to the question of how it can be implemented in a material world that is powerful and creative enough to not need any supernatural forces. It is sad that this was only briefly touched upon in the Nature piece, when it should have been the very core of the article.


Smith, K. (2011). Neuroscience vs philosophy: Taking aim at free will Nature, 477 (7362), 23-25 DOI: 10.1038/477023a

Posted on August 29, 2013 at 17:06 Comments Off on Flashback: Programming Free Will – creative robots

Libraries are better than corporate publishers because…

In: blogarchives • Tags: libraries, open access, publishing

This post, re-blogged from the archives, is from February 17, 2012:

In the wake of the Elsevier boycott, some are asking if we should change our publication system and, if so, how. In search of an answer to this question, I’ve been thinking about the best place to keep the works of academics safe and publicly accessible in a sustainable, long-term solution. Where, historically, could the work of scholars traditionally be found? The libraries, of course! However, given how entrenched some researchers are in the current way of disseminating research, and given the opposition an international US$10 billion industry can muster, this solution to parasitic publishing practices is not obvious to everyone. So here I’m collecting the most obvious advantages of a library-based scholarly communication system for semantically linked literature and data.

  • Given a roughly 40% profit margin in academic publishing, give or take (source), the approximately US$10 billion currently being spent can be divided into 6b cost and 4b profits. If most of these funds are being spent by libraries for subscriptions, all else remaining equal, libraries stand to save about 4 billion each year if they could do the job of publishers. According to some estimates, however, costs could be lowered dramatically, and thus libraries stand to save significantly more than the 4b annually. Thus, there is ample incentive for libraries, and thus the tax-paying public, to cut out the middleman.
  • Libraries are already debating what their future in a digital world could be. Archiving the work of their faculty and making it accessible and re-usable seems like a very satisfying purpose for anyone, especially for libraries discussing how to redefine their existence.
  • All the content would remain under the control of the researchers and not in the hands of private entities with diverging interests from both that of the research community and the general public.
  • The funding streams for such a combined data and literature archive would be much more stable and predictable than current models, especially for long-term, sustainable database maintenance. Each university pays only actual costs in proportion to the contributions of their faculty.
  • Libraries already have much of the competence to store and link data together. There are many projects on this technology (see e.g. LODUM) and plenty of research funds are being spent to further develop these technologies. Research is already going on to develop the infrastructure and tools to handle primary research data and literature at university libraries. Thus, there is not a lot that libraries really need to learn in order to do what publishers do – in fact, most libraries are already doing just that.
  • Essentially, libraries are already publishers of, e.g., all the theses of their faculty, historical texts, etc., and some institutions even have their own university/library presses. Open access to all of these digital media and many more is being organized by, e.g., National Libraries or Networked Repositories for Digital Open Access Publications. Adding scholarly articles to all that thus doesn’t really constitute a huge shift for libraries in practical terms.
  • Researchers would have full control over the single search interface needed to most efficiently find the information we are looking for, instead of being stuck with several tools, each lacking in either coverage or features.
  • It would be possible not only to implement modern semantic web technology, but also very basic functionality such as hyperlinks: clicking on “the experiments were performed as previously described” would actually take you directly to the detailed methods for the experiment.
  • No more repeated reformatting when re-submitting your work.
  • There still would be plenty of opportunities for commercial businesses in this sector. All the services libraries could not or would not want to do themselves can still be outsourced in a service-based, competitive commercial environment – this would not impact the usability of the scholarly corpus as in the current status quo.
  • Everyone who prefers to look up numbers rather than reading the actual papers can still do that – only that the numbers provided by the new system would have an actual scientific basis. Every kind of metric we can think of is just a few lines of code away if we have full control over the format in which references to our work are handled. This allows not only assessment of articles and data, but also to create a reputation system that can be customized by the individual user to the task at hand, be it tracking topics, ranking scientists or comparing institutions.
  • Users can still choose to pay ex-scientists to select what they think is the best science – after publication. In fact, then their services would actually be in competition with one another and any user’s choice wouldn’t affect others who do not share that user’s opinion on the service or their research interests.
  • We easily have the funds to develop a smart tool that suggests only relevant new research papers and learns from what you and others read (and don’t read). This tool is highly customizable and suggests only the research you are interested in – and not that of other people who you haven’t chosen to do that task for you.
  • The approximately 1.5 million publications each year still need to be published. This means few jobs would be lost and plenty of recourse would be available after a rejection. In brief, the beneficial aspects of the heterogeneity of the status quo can be conserved.
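
As a sketch of the “few lines of code” claim in the metrics point above (the article identifiers and links below are invented placeholders, not real data):

```python
# Citation links as plain structured data: (citing_article, cited_article).
citations = [
    ("art1", "art2"),
    ("art1", "art3"),
    ("art2", "art3"),
]

def citation_counts(links):
    """Article-level citation counts, one of many metrics derivable at will."""
    counts = {}
    for _citing, cited in links:
        counts[cited] = counts.get(cited, 0) + 1
    return counts

# citation_counts(citations) -> {"art2": 1, "art3": 2}
```

Any other metric (per-author, per-topic, time-windowed) would be a similarly small variation, once the reference data are under the community’s control.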

Those are only the benefits that immediately come to mind as the most important ones. Thus, as I see it, transitioning from a corporate publisher-based to a library-based system is practically feasible in the mid-term, would eliminate many negative factors of the current system while conserving whatever positive values it might have had, and would provide many new benefits that are difficult or impossible to obtain under the current status quo. Thus, the incentives for libraries, the public and science in general are obvious. However, the incentives for individual scientists remain small as long as journal rank determines careers. Hence, a critical factor for such a transition is to abandon journal rank in favor of more accurate metrics. I have already presented a number of publications in which the detrimental effect of journal rank has been described, and I’m working with a colleague on a review paper covering all of these comprehensively.

Clearly, there will be a debate about how best to transition from the current system to a new one. In order to minimize access issues, I propose that a small set of competent and motivated libraries with large subscription budgets and substantial faculty support cooperate in taking the lead. This group of libraries would shift funds from subscriptions to developing infrastructure and other components for a library-based scholarly communication system. If, say, only ten libraries cut subscriptions on the order of their ten most expensive journals, there would be more than a million Euros available every single year with little or no disruption in access. Some of the freed funds could be used to assist affected faculty with open access publication or inter-library loan of the needed articles. In this way, within a few years, the entire currently available scholarly literature could be made accessible from a single interface using tools vastly superior to the current ones (which is technically rather easy, given the low quality of what we currently have to deal with). The combination of superior access to the literature and a reduced requirement to publish in high-ranking journals should provide sufficient incentives also for young researchers to support the transition.
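
The back-of-the-envelope numbers above can be checked in a few lines; the market size and margin are the post’s own rough estimates, and the ~10k EUR average price per cancelled top-tier journal is my assumption to reproduce the “more than a million Euros” figure:

```python
market = 10e9    # approx. annual academic publishing market (US$), per the post
margin = 0.40    # approx. profit margin, per the post
profit = market * margin        # about 4 billion potentially saved per year
cost = market - profit          # about 6 billion in actual costs

# Ten libraries each cancelling their ten most expensive journals,
# at a hypothetical average subscription of 10,000 EUR per title:
freed = 10 * 10 * 10_000        # 1,000,000 EUR freed per year
```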

P.S.: An often-mentioned hurdle concerns the back-issue archives of the corporate publishers. Faced with a dwindling customer base and little prospect of future involvement in scholarly communication, these companies should have few qualms about a single fee to make all their archives accessible to libraries for transfer into the common database. Already in the current climate, extortion by holding the archives hostage is not an option. The more we wean ourselves from these corporations, the less support they will have.

UPDATE: Heather Morrison has chimed in with one of her excellent calculations. It is high time more people are thinking along those lines, in order to develop a strategy of how to best transition towards a modern scholarly communication system.

Posted on August 28, 2013 at 16:34 Comments Off on Libraries are better than corporate publishers because…

Flashback: The brain creates something out of nothing

In: blogarchives • Tags: creativity, hallucination, information, spontaneity, variability

This post, re-blogged from the archives, is from October 25, 2010:

Brains are what mathematicians call “information sources“. At least this is one of the results of a set of elaborate experiments together with sophisticated analyses and computations reported in the current issue of Nature Neuroscience (subscription required). The article, entitled “Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content”, studies a set of neurons in the brain’s main olfactory center, the olfactory bulb. These neurons, mitral cells, receive input from olfactory sensory neurons with their receptors in the olfactory epithelium in the nose. The authors looked at particular subsets of these neurons in the mouse brain, namely the ones that receive their input within a single identical so-called ‘glomerulus’, a neuropil structure characterized by the fact that it contains only the terminals from olfactory receptor neurons that express the same olfactory receptor. This means that the input these mitral cells receive is highly correlated. In other words, these mitral cells receive virtually identical stimulation and the authors looked at the output these neurons generate in response to the input.

To do this, they used patch electrodes to stimulate and record from these neurons. To provide realistic input scenarios, they passed more or less random amounts of current into one neuron at a time and then evaluated its output. What they found was that each neuron seemed to attend to a different aspect of the input, causing this population of cells to lose the correlation present in their input. Put differently, every neuron reacts differently to the same input: even if you give all mitral cells in the same glomerulus the same input (like the same smell), each mitral cell will respond with a different train of activity, degrading any correlation that might have been present in the input.
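The decorrelation effect can be illustrated with a toy simulation (a deliberately simplified sketch; the per-cell thresholds stand in for the intrinsic biophysical diversity of real mitral cells and are not the authors’ actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared fluctuating input: every model "mitral cell" receives the exact
# same current trace, so input correlation across cells is 1 by construction.
n_steps = 2000
shared_input = rng.normal(1.0, 0.5, size=n_steps)

# Intrinsic diversity, modeled here simply as per-cell firing thresholds.
thresholds = np.linspace(0.5, 1.5, 5)

# Rectified responses: each cell is driven only when the shared input
# exceeds its own threshold.
responses = np.maximum(0.0, shared_input[None, :] - thresholds[:, None])

# Output correlations drop below 1: identical input, decorrelated output.
corr = np.corrcoef(responses)
mean_pairwise = corr[np.triu_indices(5, k=1)].mean()
print(f"mean pairwise output correlation: {mean_pairwise:.2f}")
```

Despite receiving literally the same input trace, the five model cells produce activity that is only partially correlated; diversity in a single intrinsic parameter is already enough to decorrelate the population output.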

This all makes a lot of sense: in this way, each neuron looks at the same stimulus from a slightly different perspective, enhancing the amount of information the animal can extract from a single stimulus. From the point of view of information coding, this is an advantage that comes with a disadvantage: some neurons will create ‘phantom’ information, information that isn’t there. The authors do not mention this at all, but they show it in their first figure:

urban_2010.png

In a), you can see the olfactory receptor neurons projecting into a single glomerulus and the mitral cells that receive their input. In b) and c), you see examples of mitral cells and their recordings in response to a single, depolarizing constant current. While one neuron, b), does what one would expect with constant input (constant output, i.e., regular firing), the other neuron, c), shows irregular, bursting activity that doesn’t reveal the constancy of the input signal at all. To put it drastically, one could say that this neuron is ‘hallucinating’. This example shows very clearly the drawback, in terms of information coding, of neurons decorrelating correlated input: some of the information they transmit may not be present in the sensory input at all. At this very early stage, the very first synapse after the sensory neurons, the brain is already altering, expanding and interpreting the sensory signal. In fact, it is already making up its own information, independent of the outer world. This is profound!

I’ve been thinking about the conundrum of ‘noise in the brain’ before, and even prior to this publication it was tempting to argue that the variability in neural activity is not just random, pernicious noise but has some functional significance, a significance which we don’t quite understand yet. The authors here argue that one part of that significance is to capture all the information in a sensory stimulus. I agree with that, but from their experiments I take away another function which they don’t mention: the capability to react slightly differently to the very same stimulus. This capability of doing things differently every time the same situation comes along is vital for survival. Here’s a great example of what can happen to you if you don’t, and instead always react with the same response to the same stimulus:

Tentacled snakes hunt by exploiting the predictability of fish: they elicit a so-called C-start response in their prey in order to predict where the fish will be going, only to then intercept the fish’s escape path. The result: the fish swims right into the mouth of the snake. This video was published by Ken Catania, who studies these animals.

The results by Padmanabhan and Urban provide further evidence that the highly variable activity of neurons is not ‘noise’ in a complex system, but actively generated by the brain not only to increase information capacity, but also to behave unpredictably, creatively and spontaneously in an unpredictable, dangerous and competitive world.

It also means that adding information to a sensory stimulus may be a disadvantage in terms of information coding, but it presumably was not eliminated by evolution because it prevented animals from becoming too predictable: a classic cost/benefit trade-off. All this happens at the very first synapse after sensory transduction; how much information is added at the more central stages of computation in the brain? Methinks the question is not “why do some people hallucinate?”; the question should be “why don’t we all?”…


Padmanabhan, K., & Urban, N. (2010). Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content. Nature Neuroscience, 13(10), 1276–1282. DOI: 10.1038/nn.2630

Posted on August 27, 2013 at 17:01 Comments Off on Flashback: The brain creates something out of nothing
Aug26

A fistful of dollars: why corporate publishers have no place in scholarly communication

In: blogarchives • Tags: Elsevier, open access, publishing, RWA, SOPA

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from January 10, 2012:

With roughly four billion US$ in profit every year, the corporate scholarly publishing industry is a lucrative business. One of the largest of these publishers is Anglo-Dutch Elsevier, part of Reed Elsevier. According to their website, their mission is to

publish trusted, leading-edge Scientific, Technical and Medical (STM) information – pushing the frontiers and fueling a continuous cycle of exploration, discovery and application.

However, Elsevier recently admitted to publishing a set of six fake journals, aimed at promoting medical products and drugs by the company Merck but dressed up as peer-reviewed, scholarly literature. Clearly, trust is not Elsevier’s top priority.
What is Elsevier’s top priority, though, is making money. Like all scholarly publishers, Elsevier is thriving, despite the global financial and economic crises of recent years:

elsevier_profit.png
How can a private company be so insulated from the general economy? The reason is that publishers are largely funded by the taxpayer, and so far education and R&D budgets have been relatively spared. How do commercial publishers do their job? Here is a brief sketch of how a scholarly paper comes about:

  • Researchers generate data
  • Researchers write manuscript
  • Publisher’s editor sends manuscript to other researchers who peer-review the manuscript at no cost to the publisher
  • Researchers modify the manuscript
  • Researchers pay page charges
  • Publisher copy-edits manuscript and puts it online.
  • Library or researcher pays subscription fees to access article

Thus, according to their website, 7,000 paid journal editors work for Elsevier’s approximately 2,000 journals, while 970,000 unpaid board members, reviewers and authors, largely funded by the taxpayer, donate their time, brains and other valuable resources to the corporation. With hardly any labor costs to speak of and great value provided for free by tax-funded researchers, it is not surprising that corporate publishers sport great profit margins:

Publisher                         Revenue   Profit   Margin
Elsevier                          £2b       £724m    36%
Springer Science+Business Media   €866m     €294m    33.9%
John Wiley & Sons                 $253m     $106m    42%
Informa plc                       £145m     £47m     32.4%
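The margins follow directly from the revenue and profit figures, as a quick sanity check shows (amounts in millions of the respective currency, as reported above):

```python
# Profit margin = profit / revenue, using the publishers' reported figures
# (all amounts in millions of the respective currency).
publishers = {
    "Elsevier": (2000, 724),                        # £
    "Springer Science+Business Media": (866, 294),  # €
    "John Wiley & Sons": (253, 106),                # $
    "Informa plc": (145, 47),                       # £
}

margins = {name: profit / revenue for name, (revenue, profit) in publishers.items()}
for name, margin in margins.items():
    print(f"{name}: {margin:.1%}")
```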

On top of very small up-front costs, publishers have increased subscription prices manifold beyond inflation.
Quite obviously, with low costs and an ever-increasing stream of tax funds burning holes in your pocket, you wonder how all that money could be invested to protect your shareholder value for the future. Therefore, commercial publishers:

  • Buy access to elected representatives
  • Use this access to lobby for protective legislation
  • Pay full-time employees for government lobbying
  • Support SOPA
  • Discredit Open Access by hiring professional ‘pit-bull’ campaigners
  • Lobby against Open Access at the US White House

Comparing commercial with non-profit publishers shows that competition doesn’t bring prices down in scholarly communication: non-profit publishers provide their publishing services at consistently half or less of what commercial publishers charge (see also here). Therefore, I am very skeptical of suggestions to transform the scholarly communication ecosystem into a service business. Given that commercial publishers have a proven track record of untrustworthiness, price-gouging and political interference, what would keep them from increasing the prices of the services they provide, just as they have increased subscription prices?
No, the evidence is very clear: we need to rid the scholarly communication system of commercial publishers if we want to reduce, or at least limit, the burden on taxpayers:

  • Just eliminating profits would cut publishing costs by approx. US$4b annually
  • The roughly 1.5 million papers per year will have to be published anyway, so no jobs would be lost, just transferred
  • Per-publication costs will come down further as scholarly communication moves completely online
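A back-of-envelope version of the first two points, using only the approximate figures quoted in this post:

```python
# Back-of-envelope: savings per paper if publisher profits alone were
# eliminated, using the approximate figures quoted in the post.
annual_profit_usd = 4e9   # approx. industry-wide annual profit
papers_per_year = 1.5e6   # approx. papers published annually

savings_per_paper = annual_profit_usd / papers_per_year
print(f"${savings_per_paper:,.0f} saved per paper")
```

That is several thousand dollars per paper from profits alone, before any of the further per-publication savings from online-only distribution.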

Any proponent of for-profit scholarly publishing first needs to explain why future commercial publishers’ behavior should dramatically change from current and past behavior, before any other arguments will even be considered.

Posted on August 26, 2013 at 16:30 Comments Off on A fistful of dollars: why corporate publishers have no place in scholarly communication
Aug23

Scientists as social parasites?

In: blogarchives • Tags: career, contracts, unemployment

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from June 20, 2011:

In a recent report on German public TV, a sad but tragically commonplace practice was described as the latest, most extreme case of worker exploitation: scientists continuing to do science while officially unemployed.

On the face of it, the report seems to be correct in claiming that people who are working despite being paid unemployment benefits are incriminating themselves – which is what people colloquially call social parasites. However, the idea of paying these benefits is to enable people to look for jobs, maybe even to go and learn a new, more promising trade. In short, to give the candidate the leeway to do everything possible to secure a new job as soon as possible. Now what is the best way to secure a job in science? To do more science, get more papers, get more teaching experience, etc.: whatever pads your CV is what will enhance your chances to get the next job in science.

Moreover, scientists are so highly specialized that unemployment agencies are unable to find adequate employment for them. This means that German unemployment offices that have experience with unemployed scientists are actually complicit: they will specifically allow you to keep working as a scientist while they pay you, because they know that this is the best thing to do in this situation. In fact, they realize that it would be counterproductive to force you to stop doing science. If you are a scientist in Germany and unemployed (a pretty common species), go to your local unemployment agency and ask whether they’ll let you continue working, and they’ll say yes, I promise. That’s how normal this situation has become, with 65% of all jobs in German science being short-term contracts.

This doesn’t mean that it’s a good thing that the money for science in Germany comes partly from the social budget and not exclusively from the science and education budget. Far from it. It is despicable that in some departments, the majority of PhD students will write their dissertation on unemployment benefits. If it weren’t so sad, it would be laughable that so many of the ‘excellent elite’ PhD students who were able to secure a scholarship for their PhD will not even get unemployment benefits, but have to finish their degrees on welfare. There’s nothing positive about scores of postdocs finishing their last experiments and writing their papers while simultaneously writing the grants for their next projects, all while being paid from the wrong government branch. But it is neither criminal, nor some exploitation perpetrated by unscrupulous professors in a few isolated departments at German universities. It’s just the deplorable standard situation in a totally messed-up scientific career system, and it’s not even the worst part, not by a long shot.

And here is another reason why this practice will be difficult to change: the German taxpayer saves a lot of money. Think of the one scientist in the report who worked on short-term contracts for 17 years. For four of those years he was on benefits, which in Germany amount to 60% of your last salary. This means that during this time the German taxpayer, who would otherwise have been paying him a full salary, paid only 60% for the same service, and part of that was even covered by the scientist himself through his unemployment insurance contributions. That’s some major savings right there.
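To make the savings concrete, here is the arithmetic as a sketch (the salary figure is a hypothetical placeholder; only the 60% benefit rate and the four-year span come from the report):

```python
# Hypothetical example: only the 60% benefit rate and the 4-year span are
# from the report; the salary figure is an assumed placeholder.
annual_salary_eur = 50_000
benefit_rate = 0.60
years_on_benefits = 4

full_cost = annual_salary_eur * years_on_benefits  # cost of paying a full salary
benefit_cost = full_cost * benefit_rate            # cost of paying benefits instead
print(f"taxpayer savings: {full_cost - benefit_cost:,.0f} EUR")
```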

What would be the rational reaction of German politicians to this report? Certainly not to try to prevent this gigantic money-saver! More likely, one would expect them to demand cutting scientists’ salaries by 40%, since 60% appears to be sufficient to keep them going.

Posted on August 23, 2013 at 16:25 1 Comment
Aug21

Flashback: All brains possess free will because there is no design in biology

In: blogarchives • Tags: evolution, free will, spontaneity

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from June 9, 2010:

I have no idea when it started, but probably long before Darwin, the notion of ‘design’ began creeping into descriptions of biological organisms and traits: birds are designed to fly, or the eye is designed to see. I, too, have been guilty of using the word ‘design’ for biological objects every now and then. However, after reading a fair bit and thinking about it for some time, I’ve come to the conclusion that the word ‘design’ is so misleading, and not even wrong, that it should never be used in biology at all, ever.

I’m only partially saying this because of a prominent creationist movement in the US awfully dubbed ‘intelligent design’ (which couldn’t be further from biological reality). My main reason is that the use of ‘design’ is pernicious for biological research. ‘Design’ or engineering approaches have been used over and over in biology, and many if not most of them have gone the way of Newtonian mechanics: extremely useful and successful initially (and to some extent still today), but scientifically falsified eventually. Take a rather recent, prominent example: Lee Hood’s automobile paradigm, according to which systems biology is akin to finding car parts without a manual in the fog and then trying to assemble the car. Current research shows that genetic networks are, in contrast to car components, highly plastic and forgiving of errors (a phenomenon termed robustness, often due to degeneracy in evolved systems). Even though the car analogy might seem daunting, it is still grossly oversimplified and biologically misleading. Indeed, expecting genetic networks to behave as stably as car components will lead to the wrong experiments and the wrong conclusions. Nevertheless, systems biology was and still is a very successful branch of biology. I’m not a systems biologist, but I would be surprised if many systems biologists still bought into the car analogy today, after everything they have learned since.

For the same reasons, there is no blind watchmaker, because there is no watch. The analogy may still be useful in a rule-of-thumb kind of way, but biologically it is completely false.

Another, much older engineering approach is that of brains as input-output systems in neuroscience (the ‘sensorimotor hypothesis’), which purports that brains passively wait for stimuli to occur in order to respond to them, much like radios, computers or other equipment that we have designed. In some fields of neuroscience (and psychology) this approach has been so pervasive that any behavior is referred to as a response, on the assumption that there must always be an underlying stimulus triggering the behavior. Recent ecological and ethological research on predators that specialize in exploiting stereotypic behaviors in their prey is not the only evidence that being merely responsive is not an evolutionarily stable strategy: responding reliably to the same stimuli in the same predictable way will neither get you to the unexpected food patch nor prevent a predator or competitor from predicting your next move. “Nature red in tooth and claw” will make sure that any predictable species will not last for long. There is a very good ultimate cause for the Harvard Law of Animal Behavior (“Under carefully controlled experimental circumstances, an animal will behave as it damned well pleases”): species which didn’t obey the law have not survived. It thus appears that for the last 100-odd years, neuroscience has been studying the exceptions to the rule that brains are always active and constantly producing output, on which sensory stimuli merely play a modest, modulatory role. I think it is fair to assume that one reason for this research direction (apart from the relative experimental ease) is that there is no mechanism or object we have designed that works in such a way. Everything we have made responds to commands, and so people thought this is how brains operate, too.

Given the success of this approach in biology and of Newtonian mechanics in physics, it is no surprise that some thinkers have come to see brains as deterministic Newtonian clockworks. Complicated, maybe, but deterministic and predictable nonetheless. Any observed behavioral variability was shrugged off as random noise, when in fact it was the one brain function that kept animals in the running for the next generation. It is high time neuroscientists realized that Newtonian mechanics is as falsified in biology as it is in physics: the world is not deterministic, and neither are brains. The fact that brains are not engineered but evolved allows for freedom of choice without quasi-magical quantum computing in the brain. All the brain requires to be unpredictable is some source of variability from which it can generate spontaneity, and there is plenty of such variability in neurons and their components, with or without quantum effects. The selection pressure of predictable animals being outcompeted, eaten or left without a mate established early on that every brain is equipped with a function that allows for adaptive behavioral choice. In some animals, there was a need for more of such capabilities; those animals seem to have more freedom of choice. Other animals could get away with being more predictable, giving the impression of being less ‘free’ and more robot-like. I’m starting to get the impression that the model systems used in neuroscience are predominantly of the latter sort.

Be that as it may, all animals possess this trait to a greater or lesser degree and have been using it for survival and procreation since the very first brain evolved. I think it is time biology shed the last remnants of classical thinking and started to study ‘free will’ as the biological trait it is: the ability to behave differently in the same situation, the ability to choose from identical options, the mental coin toss. The centuries of philosophical thinking on this topic have provided us with a wonderful framework within which these empirical findings can be embedded. Because brains are output-input systems (rather than the other way around), our lab has started to study the spontaneous choices of the fruit fly Drosophila and how the brain generates them. Currently, a graduate student in our lab is studying where in the brain these processes take place, in order to later be able to understand how the circuits mediating these choices function. It is a testament to the idiocy of creationists that our research was featured on a creationist blog as supporting creationism. The opposite is the case: all brains possess free will precisely because there is no design in biology. Evolution is the only reason why we have free will, and it is neither dualistic, nor spiritual, nor mystic: it is biological.

Posted on August 21, 2013 at 16:58 Comments Off on Flashback: All brains possess free will because there is no design in biology

bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License.
