bjoern.brembs.blog

The blog of neurobiologist Björn Brembs

Aug28

Libraries are better than corporate publishers because…

In: blogarchives • Tags: libraries, open access, publishing

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from February 17, 2012:

In the wake of the Elsevier boycott, some are asking whether we should change our publication system and, if so, how. In search of an answer to this question, I’ve been thinking about the best place to keep the works of academics safe and publicly accessible in a sustainable, long-term solution. Where was the work of scholars traditionally to be found? In the libraries, of course! However, given how entrenched some researchers are in the current way of disseminating research, and given the opposition an international US$10 billion industry can muster, this solution to parasitic publishing practices is not obvious to everyone. So here I’m collecting the most obvious advantages of a library-based scholarly communication system for semantically linked literature and data.

  • Given a roughly 40% profit margin in academic publishing, give or take (source), the approximately US$10 billion currently being spent can be divided into $6b in costs and $4b in profits. If most of these funds are being spent by libraries on subscriptions, then, all else remaining equal, libraries stand to save about $4 billion each year if they could do the job of publishers. According to some estimates, however, costs could be lowered dramatically, so libraries stand to save significantly more than $4b annually. Thus, there is ample incentive for libraries, and thus the tax-paying public, to cut out the middleman.
  • Libraries are already debating what their future in a digital world could be. Archiving and making the work of their faculty accessible and re-usable seems like a very satisfying purpose for anyone, especially for libraries discussing how to redefine their existence.
  • All the content would remain under the control of the researchers and not in the hands of private entities with diverging interests from both that of the research community and the general public.
  • The funding streams for such a combined data and literature archive would be much more stable and predictable than current models, especially for long-term, sustainable database maintenance. Each university pays only actual costs in proportion to the contributions of their faculty.
  • Libraries already have much of the competence to store and link data together. There are many projects on this technology (see e.g. LODUM) and plenty of research funds are being spent to further develop these technologies. Research is already going on to develop the infrastructure and tools to handle primary research data and literature at university libraries. Thus, there is not a lot that libraries really need to learn in order to do what publishers do – in fact, most libraries are already doing just that.
  • Essentially, libraries are already publishers of, e.g., all the theses of their faculty, historical texts, etc., and some institutions even have their own university/library presses. Open access to all of these digital media and many more is already being organized by, e.g., National Libraries or Networked Repositories for Digital Open Access Publications. Adding scholarly articles to all that thus doesn’t really constitute a huge shift for libraries in practical terms.
  • Researchers would have full control over the single search interface needed to most efficiently find the information we are looking for, instead of being stuck with several tools, each of which is lacking in either coverage or features.
  • It would be possible not only to implement modern semantic web technology, but also very basic functionality such as hyperlinks: clicking on “the experiments were performed as previously described” would actually take you directly to the detailed methods for the experiment.
  • No more repeated reformatting when re-submitting your work.
  • There still would be plenty of opportunities for commercial businesses in this sector. All the services libraries could not or would not want to do themselves can still be outsourced in a service-based, competitive commercial environment – this would not impact the usability of the scholarly corpus as in the current status quo.
  • Everyone who prefers to look up numbers rather than reading the actual papers can still do that – except that the numbers provided by the new system would have an actual scientific basis. Every kind of metric we can think of is just a few lines of code away if we have full control over the format in which references to our work are handled. This allows not only the assessment of articles and data, but also the creation of a reputation system that can be customized by the individual user to the task at hand, be it tracking topics, ranking scientists or comparing institutions.
  • Users can still choose to pay ex-scientists to select what they think is the best science – after publication. In fact, then their services would actually be in competition with one another and any user’s choice wouldn’t affect others who do not share that user’s opinion on the service or their research interests.
  • We easily have the funds to develop a smart tool that suggests only relevant new research papers and learns from what you and others read (and don’t read). This tool is highly customizable and suggests only the research you are interested in – and not that of other people who you haven’t chosen to do that task for you.
  • The approximately 1.5 million publications still need to be published. This means few jobs would be lost and plenty of recourse would be available after a rejection. In brief, the beneficial aspects of the heterogeneity of the status quo can be conserved.
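The point above about metrics being just a few lines of code away can be sketched. Assuming a hypothetical, library-run open citation database from which per-article citation counts can be pulled (the numbers below are made up for illustration), even something like an h-index is a handful of lines:

```python
# Hypothetical per-article citation counts for one author, as they might
# be pulled from an open, library-run citation database (made-up numbers).
citations = [45, 30, 22, 15, 9, 7, 3, 1]

def h_index(counts):
    """Largest h such that at least h articles have >= h citations each."""
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index(citations))  # 6
```

With full control over the underlying data, swapping this for any other metric – per article, per dataset, per lab – is equally trivial, which is exactly the customizability argued for above.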

Those are only the benefits that immediately come to mind as the most important ones. As I see it, transitioning from a corporate-publisher-based to a library-based system is practically feasible in the mid-term; it would eliminate many negative aspects of the current system while conserving whatever positive value it might have had, and it would provide many new benefits that are difficult or impossible to obtain under the current status quo. Thus, the incentives for libraries, the public and science in general are obvious. However, the incentives for individual scientists remain small as long as journal rank determines careers. Hence, a critical factor for such a transition is to abandon journal rank in favor of more accurate metrics. I have already presented a number of publications in which the detrimental effects of journal rank are described, and I’m working with a colleague on a review paper covering all of these comprehensively.

Clearly, there will be a debate about how best to transition from the current system to a new one. In order to minimize access issues, I propose that a small set of competent and motivated libraries with large subscription budgets and substantial faculty support cooperate in taking the lead. This group of libraries would shift funds from subscriptions to developing the infrastructure and other components of a library-based scholarly communication system. If, say, only ten libraries cut subscriptions on the order of their ten most expensive journals, more than a million euros would become available every single year, with little or no disruption in access. Some of the freed funds could be used to assist affected faculty with open-access publication or inter-library loans of the needed articles. In this way, within a few years, the entire currently available scholarly literature could be made accessible from a single interface using tools vastly superior to the current ones (which is technically rather easy, given the low quality of what we currently have to deal with). The combination of superior access to the literature and a reduced requirement to publish in high-ranking journals should provide sufficient incentive for young researchers, too, to support the transition.
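The back-of-the-envelope behind the "more than a million euros" figure, assuming (hypothetically) an average price of €10,000 for the kind of journal that counts among a library's most expensive subscriptions:

```python
# Hypothetical figures for illustration: ten libraries each cancel
# their ten most expensive journal subscriptions.
libraries = 10
journals_cancelled_each = 10
avg_subscription_eur = 10_000  # assumed average price of a top-tier subscription

freed_per_year = libraries * journals_cancelled_each * avg_subscription_eur
print(freed_per_year)  # 1000000 -- a million euros, every single year
```

The €10,000 average is an assumption, but big-deal subscription prices for the most expensive titles easily run into five figures, so the order of magnitude holds.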

P.S.: An often-mentioned hurdle concerns the back-issue archives of the corporate publishers. Faced with a dwindling customer base and little prospect of future involvement in scholarly communication, these companies should have little quarrel with a single fee to make all their archives accessible to libraries for transfer into the common database. Already in the current climate, extortion by holding the archives hostage is not an option. The more we wean ourselves off these corporations, the less support they will have.

UPDATE: Heather Morrison has chimed in with one of her excellent calculations. It is high time more people thought along those lines, in order to develop a strategy for how best to transition towards a modern scholarly communication system.

Posted on August 28, 2013 at 16:34 Comments Off on Libraries are better than corporate publishers because…
Aug27

Flashback: The brain creates something out of nothing

In: blogarchives • Tags: creativity, hallucination, information, spontaneity, variability

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from October 25, 2010:

Brains are what mathematicians call “information sources“. At least, this is one of the results of a set of elaborate experiments, together with sophisticated analyses and computations, reported in the current issue of Nature Neuroscience (subscription required). The article, entitled “Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content”, studies a set of neurons in the brain’s main olfactory center, the olfactory bulb. These neurons, mitral cells, receive input from olfactory sensory neurons whose receptors sit in the olfactory epithelium in the nose. The authors looked at particular subsets of these neurons in the mouse brain, namely those that receive their input within a single, identical so-called ‘glomerulus’, a neuropil structure characterized by the fact that it contains only the terminals of olfactory receptor neurons that express the same olfactory receptor. This means that the input these mitral cells receive is highly correlated. In other words, these mitral cells receive virtually identical stimulation, and the authors looked at the output these neurons generate in response to that input.

To do this, they used patch electrodes to stimulate and record from these neurons. To provide realistic input scenarios, they passed more or less random amounts of current into one neuron at a time and then evaluated its output. What they found was that each neuron seemed to look at a different aspect of the input, leading this population of cells to lose the correlation present in its input. Put differently, every neuron reacts differently to the same input: even if you give all mitral cells in the same glomerulus the same input (like the same smell), each mitral cell will respond with a different train of activity, degrading any correlation that might have been present in the input.
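The effect can be sketched in a toy simulation – this is not the authors' biophysical model, just two leaky integrate-and-fire neurons (a standard textbook abstraction, with parameters chosen purely for illustration) receiving the identical fluctuating current. With identical intrinsic parameters their outputs are perfectly correlated; with diverse parameters the output correlation drops even though the input is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_spikes(current, tau, threshold, dt=1.0):
    """Leaky integrate-and-fire neuron: binary spike train for an input current."""
    v, spikes = 0.0, np.zeros_like(current)
    for i, I in enumerate(current):
        v += dt * (-v / tau + I)  # leaky integration of the input
        if v >= threshold:
            spikes[i] = 1.0
            v = 0.0  # reset after a spike
    return spikes

# One shared, fluctuating input current -- the 'virtually identical
# stimulation' that mitral cells of the same glomerulus receive.
I = 0.5 + 0.2 * rng.standard_normal(5000)

identical = [lif_spikes(I, tau=20.0, threshold=1.0) for _ in range(2)]
diverse = [lif_spikes(I, tau=t, threshold=th) for t, th in [(10.0, 0.8), (40.0, 1.3)]]

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print(corr(*identical))  # 1.0: same parameters, same input, same output
print(corr(*diverse))    # well below 1: intrinsic diversity decorrelates the output
```

The design choice mirrors the paper's point: the decorrelation here comes entirely from intrinsic diversity (different tau and threshold), not from any noise injected into the neurons themselves.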

This all makes a lot of sense, because this way each neuron looks at the same stimulus from a slightly different perspective, enhancing the amount of information the animal can extract from a single stimulus. From the point of view of information coding, this is an advantage that comes with a disadvantage: some neurons will create ‘phantom’ information – information that isn’t there. The authors do not mention this with a single word, but they show it in their first figure:

[Figure 1 from Padmanabhan & Urban (2010), panels a–c]

In a) you can see the olfactory receptor neurons projecting into a single glomerulus and the mitral cells that receive their input. In b) and c) you see example recordings from mitral cells responding to a single, depolarizing constant current. While one neuron, b), does what one would expect with constant input – constant output, i.e. regular firing – the other neuron, c), shows irregular, bursting activity that doesn’t reveal the constancy of the input signal at all. To put it drastically, one could say that this neuron is ‘hallucinating’. This example shows very clearly the drawback, in terms of information coding, of neurons decorrelating correlated input: some of the information they transmit may not be present in the sensory input at all. At this very early stage, the very first synapse after the sensory neurons, the brain is already altering, expanding and interpreting the sensory signal. In fact, it’s already making up its own information, independent of the outer world. This is profound!

I’ve been thinking about the conundrum of ‘noise in the brain‘ before, and even before this publication it was very suggestive to argue that the variability in neural activity is not just random, pernicious noise but has some functional significance – a significance which we don’t quite understand yet. The authors here argue that one part of that significance is to capture all the information in a sensory stimulus. I’d agree with that, but from their experiments I take away another function which they don’t mention: the capability to react slightly differently to the very same stimulus. This capability of doing things differently every time the same situation comes along is vital for survival. Here’s a great example of what can happen to you if you don’t, and instead always react with the same response to the same stimulus:

Tentacled snakes hunt by exploiting the predictability of fish: they elicit a so-called C-start response in their prey in order to predict where the fish will be going, only to then intercept the fish’s escape path. The result: the fish swims right into the mouth of the snake. This video was published by Ken Catania, who studies these animals.

The results by Padmanabhan and Urban provide further evidence that the highly variable activity of neurons is not ‘noise’ in a complex system, but actively generated by the brain not only to increase information capacity, but also to behave unpredictably, creatively and spontaneously in an unpredictable, dangerous and competitive world.

It also means that adding information to a sensory stimulus may be a disadvantage in terms of information coding, but it wasn’t eliminated by evolution because it prevents animals from becoming too predictable – a classic cost/benefit trade-off. All this happens at the very first synapse after sensory transduction – how much information is added at the more central stages of computation in the brain? Methinks the question is not “why do some people hallucinate?”; the question should be “why don’t we all?”…


Padmanabhan, K., & Urban, N. (2010). Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content. Nature Neuroscience, 13(10), 1276–1282. DOI: 10.1038/nn.2630

Posted on August 27, 2013 at 17:01 Comments Off on Flashback: The brain creates something out of nothing
Aug26

A fistful of dollars: why corporate publishers have no place in scholarly communication

In: blogarchives • Tags: Elsevier, open access, publishing, RWA, SOPA

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from January 10, 2012:

With roughly four billion US$ in profit every year, the corporate scholarly publishing industry is a lucrative business. One of the largest of these publishers is Anglo-Dutch Elsevier, part of Reed Elsevier. According to their website, their mission is to

publish trusted, leading-edge Scientific, Technical and Medical (STM) information – pushing the frontiers and fueling a continuous cycle of exploration, discovery and application.

However, Elsevier recently admitted to publishing a set of six fake journals, aimed at promoting medical products and drugs by the company Merck while carrying the appearance of peer-reviewed, scholarly literature. Clearly, trust is not Elsevier’s top priority.
What is Elsevier’s top priority is making money. Like all scholarly publishers, Elsevier is thriving, despite the global financial and economic crises of recent years:

[Figure: Elsevier profits in recent years]
How can a private company be so insulated from the general economy? The reason is that it is largely funded by the taxpayer, and so far education and R&D budgets have been relatively spared. How do commercial publishers do their job? Here is a brief sketch of how a scholarly paper comes about:

  • Researchers generate data
  • Researchers write manuscript
  • Publisher’s editor sends manuscript to other researchers who peer-review the manuscript at no cost to the publisher
  • Researchers modify the manuscript
  • Researchers pay page charges
  • Publisher copy-edits manuscript and puts it online.
  • Library or researcher pays subscription fees to access article

Thus, according to their website, 7,000 paid journal editors are working for the approximately 2,000 journals of Elsevier, while 970,000 unpaid board members, reviewers and authors, largely funded by the taxpayer, donate their time, brains and other valuable resources to the corporation. With hardly any labor costs to speak of and great value provided from outside for free by tax-funded researchers, it is not surprising that corporate publishers sport great profit margins:

Publisher | Revenue | Profit | Margin
Elsevier | £2b | £724m | 36%
Springer Science+Business Media | €866m | €294m | 33.9%
John Wiley & Sons | $253m | $106m | 42%
Informa plc | £145m | £47m | 32.4%
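The margins in the table are simply profit divided by revenue; recomputing them from the published figures (revenue and profit in millions of each publisher's reporting currency) confirms the percentages:

```python
# Revenue and profit in millions of the respective currency,
# taken from the table above.
publishers = {
    "Elsevier": (2000, 724),
    "Springer Science+Business Media": (866, 294),
    "John Wiley & Sons": (253, 106),
    "Informa plc": (145, 47),
}

for name, (revenue, profit) in publishers.items():
    print(f"{name}: {profit / revenue:.1%}")
# Elsevier 36.2%, Springer 33.9%, Wiley 41.9%, Informa 32.4%
```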

On top of the very small up-front costs, publishers increase subscription prices manifold beyond inflation.
Quite obviously, with low costs and an ever-increasing stream of tax funds burning holes in your pocket, you wonder how all that money could be invested to protect your shareholder value for the future. Therefore, commercial publishers:

  • Buy access to elected representatives
  • Use this access to lobby for protective legislation
  • Pay full-time employees for government lobbying
  • Support SOPA
  • Discredit Open Access by hiring professional ‘pit-bull’ campaigners
  • Lobby against Open Access at the US White House

Comparing commercial with non-profit publishers shows how competition doesn’t bring prices down in scholarly communication: non-profit publishers provide publishing services at consistently half or less of what commercial publishers charge (see also here). Therefore, I am very skeptical of suggestions to transform the scholarly communication ecosystem into a service business. Given that commercial publishers have a proven track record of untrustworthiness, price-gouging and political interference, what would keep them from increasing the prices of the services they provide, just as they have increased subscription prices?
No, the evidence is very clear: we need to rid the scholarly communication system of commercial publishers if we want to reduce, or at least limit, the burden on taxpayers:

  • Just eliminating profits will cut publishing costs by approx. US$4b annually
  • 1.5 million papers will have to be published anyway, so no jobs will be lost, just transferred
  • per-publication costs will come down further as scholarly communication moves completely online.

Any proponent of for-profit scholarly publishing first needs to explain why future commercial publishers’ behavior should dramatically change from current and past behavior, before any other arguments will even be considered.

Posted on August 26, 2013 at 16:30 Comments Off on A fistful of dollars: why corporate publishers have no place in scholarly communication
Aug23

Scientists as social parasites?

In: blogarchives • Tags: career, contracts, unemployment

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from June 20, 2011:

In a recent report on German public TV, a sad but tragically very commonplace practice is described as the latest, most extreme case of worker exploitation: scientists continuing to do science while being officially unemployed.

On the face of it, the report seems correct in claiming that people who work while being paid unemployment benefits are making themselves criminally liable – which is what people colloquially call being a social parasite. However, the idea of paying these benefits is to enable people to look for jobs, maybe even to learn a new, more promising trade – in short, to give the candidate the leeway to do everything possible to secure a new job as soon as possible. Now what is the best way to secure a job in science? To do more science, publish more papers, get more teaching experience, etc.: whatever pads your CV is what will enhance your chances of getting the next job in science.

Moreover, scientists are so highly specialized that unemployment agencies are not able to find adequate employment for them. This means that German unemployment offices with experience of unemployed scientists are actually complicit: they will specifically allow you to keep working as a scientist while they pay you, because they know that this is the best thing to do in this situation. In fact, they realize that it would be counterproductive to force you to stop doing science. If you are a scientist in Germany and unemployed (a pretty common species), go to your local unemployment agency and ask whether they’ll let you continue working – they’ll say yes, I promise. That’s how normal this situation has become, with 65% of all jobs in German science being short-term contracts.

This doesn’t mean that it’s a good thing that the money for science in Germany comes partly from the social budget and not exclusively from the science and education budget. Far from it. It is despicable that in some departments the majority of PhD students write their dissertations on unemployment benefits. If it weren’t so sad, it would be laughable that so many of the ‘excellent elite’ PhD students who were able to secure a scholarship for their PhD will not even get unemployment benefits, but have to finish their degrees on welfare. There’s nothing positive about scores of postdocs finishing their last experiments and writing their papers while simultaneously writing the grants for the next projects, all the while being paid from the wrong government branch. But it is neither criminal nor some exploitation perpetrated by unscrupulous professors in a few isolated departments at German universities. It’s just the deplorable standard situation in a totally messed-up scientific career system – and it’s not even the worst part, not by a long shot.

And here is another reason why this practice will be difficult to change: the German taxpayer saves a lot of money. Think of the one scientist in the report who worked on short-term contracts for 17 years. For four of those years he was on benefits, which in Germany amount to 60% of your last salary. This means that during this time the German taxpayer, who would otherwise have been paying him a full salary, paid only 60% for the same service – and part of that was even paid by the scientist himself, in the form of his unemployment insurance contributions. That’s some major savings right there.
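As a back-of-the-envelope illustration of those savings – the salary figure below is assumed purely for illustration; only the 60% benefit rate and the four years come from the report:

```python
salary_eur = 50_000     # assumed annual salary, for illustration only
benefit_rate = 0.60     # German unemployment benefit: 60% of the last salary
years_on_benefits = 4   # the case described in the TV report

# The taxpayer pays 60% instead of 100% for the same scientific work.
saved = years_on_benefits * salary_eur * (1 - benefit_rate)
print(f"{saved:,.0f} EUR")  # 80,000 EUR over four years
```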

What would the rational reaction of German politicians to this report be? Certainly not to try to prevent this gigantic money-saver! More likely, one would expect them to demand cutting the salaries of scientists by 40%, since 60% seems to be sufficient to keep them going.

Posted on August 23, 2013 at 16:25 1 Comment
Aug21

Flashback: All brains possess free will because there is no design in biology

In: blogarchives • Tags: evolution, free will, spontaneity

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear even when I’m not at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of August while I’m away. This post is from June 9, 2010:

I have no idea when it started, but probably long before Darwin, the notion of ‘design’ kept creeping into descriptions of biological organisms and traits: birds are designed to fly, or the eye is designed to see. I’ve been guilty of using the word ‘design’ for biological objects every now and then myself. However, after reading a fair bit and thinking about it for some time, I’ve come to the conclusion that the word ‘design’ is so misleading – not even wrong – that it should never be used in biology at all, ever.

I’m only partially saying this because of a prominent creationist movement in the US awfully dubbed ‘intelligent design’ (which couldn’t be further from biological reality). My main reason is that the use of ‘design’ is pernicious for biological research. ‘Design’ or engineering approaches have been used over and over in biology, and many if not most of them have gone the way of Newtonian mechanics: extremely useful and successful initially (and to some extent still today), but scientifically falsified eventually. Take a rather recent, prominent example: Lee Hood‘s automobile paradigm, according to which systems biology is akin to finding car parts without a manual in the fog and then trying to assemble the car. Current research shows that genetic networks are, in contrast to car components, highly plastic and forgiving of errors (a phenomenon termed robustness, often due to degeneracy in evolved systems). Even though the car analogy might seem daunting, it is still grossly oversimplified and biologically misleading. Indeed, expecting genetic networks to behave as stably as car components will lead to the wrong experiments and the wrong conclusions. Nevertheless, systems biology was and still is a very successful branch of biology. I’m not a systems biologist, but I would be surprised if many systems biologists still bought into the car analogy these days, after what they have learned since.

For the same reasons, there is no blind watchmaker, because there is no watch. The analogy may still be useful in a rule-of-thumb kind of way, but biologically it is completely false.

Another, much older engineering approach is that of brains as input-output systems in neuroscience (the ‘sensorimotor hypothesis’), which purports that brains passively wait for stimuli to occur in order to respond to them, much like radios, computers or other equipment that we have designed. In some fields of neuroscience (and psychology) this approach has been so pervasive that any behavior is referred to as a response, assuming that there must always be an underlying stimulus triggering the behavior. It is not only recent ecological and ethological research on predators specializing in exploiting stereotypic behaviors in prey species which shows that being responsive is not an evolutionarily stable strategy: responding reliably to the same stimuli in the same predictable way will neither get you to the unexpected food patch nor prevent a predator or competitor from predicting your next move. “Nature red in tooth and claw” will make sure that any predictable species will not last for long. There is a very good ultimate cause behind the Harvard Law of Animal Behavior – “Under carefully controlled experimental circumstances, an animal will behave as it damned well pleases” – namely that species which didn’t obey the law have not survived. It thus appears that for the last 100-odd years, neuroscience has been studying the exceptions to the rule that brains are always active and constantly produce output, on which sensory stimuli merely exert a modest, modulatory influence. I think it is fair to assume that one reason for this research direction (apart from the relative experimental ease) is that there is no mechanism or object we have designed that works in such a way. Everything we have made responds to commands, and so people thought this is how brains operate.

Given the success of this approach in biology and of Newtonian mechanics in physics, it is no surprise that some thinkers have come to see brains as deterministic Newtonian clockworks. Complicated, maybe, but deterministic and predictable nonetheless. Any observed behavioral variability was shrugged off as random noise, when in fact it was the one brain function that kept animals in the running for the next generation. It is high time that neuroscientists realize that Newtonian mechanics is as falsified in biology as it is in physics: the world is not deterministic and neither are brains. The fact that brains are not engineered but evolved allows for freedom of choice without quasi-magical quantum computing in the brain. All the brain requires to be unpredictable is some source of variability from which it can generate spontaneity, and there is plenty of such variability in neurons and their components, with or without quantum effects. The selection pressure of predictable animals being outcompeted, eaten or left without a mate established early on that every brain is equipped with a function which allows for adaptive behavioral choice. In some animals, there was need for more of such capabilities; those animals seem to have more freedom of choice. Other animals could get away with being more predictable, giving the impression of being less ‘free’ and more robot-like. I’m starting to get the impression that the model systems used in neuroscience are predominantly of the latter sort.

Be that as it may, all animals possess this trait to a greater or lesser degree and have been using it for survival and procreation since the very first brain evolved. I think it is time biology sheds the last remnants of classical thinking and starts to study ‘free will’ as the biological trait it is: the ability to behave differently in the same situation, the ability to choose from identical options, the mental coin toss. The centuries of philosophical thinking on this topic have provided us with a wonderful framework within which these empirical findings can be embedded. Because brains are output-input systems (rather than the other way around), our lab has started to study the spontaneous choices of the fruit fly Drosophila and how the brain generates them. Currently, a graduate student in our lab is studying where in the brain these processes take place, in order to later be able to understand how the circuits mediating these choices function. It is a testament to the idiocy of creationists that our research was featured on a creationist blog as supporting creationism. The opposite is the case: all brains possess free will precisely because there is no design in biology. Evolution is the only reason why we have free will, and it is neither dualistic, nor spiritual, nor mystic: it is biological.

Posted on August 21, 2013 at 16:58 Comments Off on Flashback: All brains possess free will because there is no design in biology
Aug20

Flashback: Noise in the brain?

In: blogarchives • Tags: free will, noise, spontaneity, variability

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from May 8, 2009:

I was recently alerted to a group of theoretical publications which deal with the issue of apparent ‘noise’ in neuronal populations. The Nature Reviews Neuroscience article “Neural correlations, population coding and computation” by Bruno B. Averbeck, Peter E. Latham &  Alexandre Pouget covers this area quite well.
Basically, the authors claim that the variability one can see when recording from the brain while the same stimulus is presented repeatedly is noise, must be detrimental to the transmission of information, and is hence a problem the brain must solve:

Part of the difficulty in understanding population coding is that neurons are noisy: the same pattern of activity never occurs twice, even when the same stimulus is presented. Because of this noise, population coding is necessarily probabilistic. If one is given a single noisy population response, it is impossible to know exactly what stimulus occurred. Instead, the brain must compute some estimate of the stimulus (its best guess, for example), or perhaps a probability distribution over stimuli.

It needs to be noted that the authors do not refer to sensory neurons, which code sensory information with great precision. Instead, they look at neurons deep in the mammalian brain, many synapses away from the primary sensory afferents. What I don’t understand from their article is why this should be considered ‘noise’. Obviously, if high fidelity between the site of sensory input and the site of recording were required, there would be a single axon going there, not a route via many synapses. Synapses are time-consuming and energetically expensive, and during development, unused synapses are pruned throughout the brain. Thus, the variability must reflect some computation which takes place in the synapses between the site of sensory input and the site of recording. Let me illustrate this with a picture:

neuralnoise.png

Of course, if one records from neuron NR and stimulates sensory neuron NS, as in A, there is a lot of processing going on in the synapses along N1-4. This happens even without any external input into this single conveyor belt of information. Of course, there never is such a conveyor belt; that idea is already misleading. After the very first synapse (from wherever you start), there are always inputs from other sites, feed-forward and feedback connections, etc. But even in this simplest model of information transmission, every synapse is a computational component and not just a link between neurons that adds variability to the sensory signal for no reason. These synapses would not be there if this processing were not some important brain function. This is illustrated in B: if simple and reliable information transmission from NS to NR were all that mattered, there would be a single axon from NS to NR, without any additional processing.
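The two scenarios can also be sketched numerically. This is a toy model (the 0.7/0.3 mixing weights and the uniform "context" input are arbitrary illustrations, not taken from any paper): a direct axon relays the stimulus unchanged, whereas a chain of synapses that each mix in contextual input produces a different response on every trial, even with an identical stimulus.

```python
import random

def direct_axon(stimulus):
    # Scenario B: a single axon from NS to NR -- a faithful relay, no computation.
    return stimulus

def synaptic_chain(stimulus, context, n_synapses=4):
    # Scenario A: each synapse N1..N4 combines the feed-forward signal with
    # contextual input (feedback, internal state) -- it computes, not relays.
    signal = stimulus
    for _ in range(n_synapses):
        signal = 0.7 * signal + 0.3 * context.random()
    return signal

context = random.Random(1)
relay_responses = {direct_axon(1.0) for _ in range(10)}
chain_responses = {round(synaptic_chain(1.0, context), 6) for _ in range(10)}
```

What looks like ‘noise’ at NR is, in this picture, simply the trace of the computation performed along the chain.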

I would really like to hear good arguments as to why the recorded variability should be ‘noise’ and a problem for information coding, rather than a reflection of the brain doing what it’s supposed to do: finding out what the best action is under the current circumstances.


Averbeck, B., Latham, P., & Pouget, A. (2006). Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5), 358-366. DOI: 10.1038/nrn1888

Posted on August 20, 2013 at 16:55 Comments Off on Flashback: Noise in the brain?
Aug19

Science, red in tooth and claw

In: blogarchives • Tags: open access, sabotage, stress, tenure

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from September 30, 2010:

I’ve been contemplating the current competitive state of science here before. The gist of it was that science may be suffering from too much competition, leading to an increasing incidence of misconduct, such as falsifying or omitting inconvenient data. In this week’s Nature, a news report appeared which tells me that the pressure cooker which is our scientific community has apparently been set a notch hotter recently.

The article is about the sentencing of a postdoc who had been sabotaging the work of a graduate student in his own lab for several months. The postdoc admitted – after being caught on a secret camera – to having poured ethanol into the culture medium of the graduate student’s cell cultures, as well as to contaminating the student’s Western blots.

In my previous posts I speculated that fraud and other, related misconduct may be on the rise as more and more trained scientists are faced with fewer and fewer positions for which they compete. I did not expect this form of misconduct and neither did anybody else: sabotage is not officially part of the canon of research misconduct and thus no federal agency was able to prosecute the perpetrator.

Like all of the cases which make it into the media, this is another anecdote, and the plural of anecdote is not data. Nobody knows whether the frequency of misconduct is rising or whether the publicized cases simply come with the growth of the business. Nevertheless, this form of direct sabotage of researchers by researchers seems to represent a novel quality of competition-induced research misconduct.

What will we see next?

Hot on the heels of the stunning sabotage case comes a seemingly unrelated report in The Scientist that US libraries are forced to cut scientific journal subscriptions by the thousands due to budget cuts:

New Mexico State University (NMSU) library announced the cancellation of over 700 journal and database subscriptions, the result of a perfect storm of rising journal prices and a slashed materials budget. […] A 2009 global survey of 835 libraries in 61 countries found that nearly one-third of academic libraries saw their budgets reduced by 10 percent or more that year. And journal subscriptions are taking the brunt of that loss: The University of California at San Francisco (UCSF) cancelled 118 print and 115 online subscriptions for 2010, as well as several databases (including Faculty of 1000 Medicine, publisher of The Scientist). Last spring, the University of Washington announced cuts of 1,600 print and electronic journals, databases, and microforms. The University of Virginia library sliced 1,169 journals, the University of Arizona downsized by 650 print and electronic titles, and Georgia State University cut 441 and is now considering the fate of another 1,092. The list goes on and on.

While on the surface these two reports seem unrelated, they in fact demonstrate the coming scientific system: too many highly trained scientists competing for a position in a system that is so cash-strapped that even access to the literature is threatened. Access to the literature is the most basic prerequisite for doing science. One of the quotes in the article demonstrates that: “I need the EBSCO databases like I need air or water!”

Just as access to drinking water is now a human right, access to the literature should be a scientific right.

German psychologist Bally in the 1940s and later ethologist Hasenstein in the 1970s coined the term “eased-up field” (entspanntes Feld) for a situation in which all basic needs are satisfied. They found that exploratory and playful behavior decreased in virtually all subjects tested, human or animal, whenever they were not in an eased-up field. Currently, science almost worldwide is heading towards a situation in which each scientist’s livelihood is threatened, as is one of the basic prerequisites for doing science: access to the literature. Neither of these trends alone bodes well for the quality of future science, but both trends together basically block the emergence of the creative genius required for major scientific breakthroughs.

Posted on August 19, 2013 at 21:21 1 Comment
Aug18

Flashback: Nothing new in science?

In: blogarchives • Tags: behavior, decision-making, Drosophila, neurogenetics, phototaxis

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from February 9, 2011:

 

In this week’s journal club, we talked about an old paper from 1918! “The reactions to light and to gravity in Drosophila and its mutants” by Robert McEwen, in the Journal of Experimental Zoology.

As the title says, the author studied how the fruit fly Drosophila responds to light and gravity. He tested this in walking flies, comparing flies with intact wings to flies with clipped wings, wing mutations, clipped antennae, glued wings or clipped middle legs. He discovered that flies without wings or with mutated wing shape show less movement towards light (i.e., less phototaxis). This finding was later confirmed by one of the founders of modern neurogenetics, Seymour Benzer (1967), and we also find it in our experiments. We have now set out to discover which neuronal mechanisms are involved in this drastic change in behavior.

In order to get the flies to show phototaxis, McEwen developed a machine that gently taps the tube in which the flies are placed, prompting them to walk. He described the necessity for flies to be active in order to show consistent orientation towards a light source: unless walking was initiated either spontaneously or by the tapping machine, the flies would not walk towards the light. If the flies were at rest, light was not an orienting stimulus for them. McEwen formulated this key insight at the very end of the paper:

Lastly, it may be well to emphasize the peculiar relation which exists in Drosophila between general activity and phototropism. This phenomenon has been clearly recognized by Carpenter and in general I agree with this author’s conclusions. The fact seems to be that this insect is not phototropic unless it is in a certain physiological state brought on by, or at least accompanied by, activity. When the fly reaches a certain degree of activity, induced by various means, it suddenly becomes phototropic. When it quiets down, however, it may still crawl about but ceases to be phototropic. Thus, when an insect has been exposed to constant illumination for some time, it no longer orients to light but wanders aimlessly up and down the tube. Eventually such an animal may even come to rest with its head away from the source of light.

The technique described anticipates what other colleagues later developed in other fly paradigms based on vision and walking, such as the “fly-stampede” paradigm. But the insight reaches much further than that. More recent research has shown that the state of the animal exerts fine-grained control over how the environment is processed. For instance, leeches respond with various behaviors to local mechanosensory stimulation (i.e., touch). However, when they feed, the biogenic amine serotonin is released and prevents the mechanosensory neurons from transmitting the stimuli – the animal becomes unresponsive while it feeds (Gaudry & Kristan, 2009). Another study showed that motion-sensitive neurons in the optic lobes of the fly brain increase their gain when the fly is flying, as opposed to when it is not flying (Maimon et al., 2010). Analogous results were obtained when walking and sitting flies were compared (Chiappe et al., 2010). In another very sophisticated study, Haag et al. (2010) showed how an identified motor neuron responds more strongly to visual input when the animal is flying than when it is at rest. Finally, Tang and Juusola (2010) report evidence that the direction in which a fly attempts to turn changes the way the optic lobes process visual information on the side towards which the fly attempts to turn, compared to the contralateral side.
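The common thread of these studies, a behavioral state scaling the gain of sensory responses, fits in one line of code. The numbers and the factor of two below are purely illustrative, not taken from any of the papers:

```python
def motion_response(stimulus, flying):
    # Hypothetical motion-sensitive neuron: the behavioral state (flying vs.
    # resting) multiplies the gain applied to the visual input.
    gain = 2.0 if flying else 1.0
    return gain * stimulus

resp_resting = motion_response(stimulus=3.0, flying=False)
resp_flying = motion_response(stimulus=3.0, flying=True)
```

The same stimulus thus elicits a stronger response simply because the animal is doing something else at the time.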

All these groups have, largely independently of each other, discovered the biological mechanisms for something that McEwen (and Carpenter, cited there) had already understood: animals don’t just respond to stimuli in the same, stereotyped way every time. All animals have many different ways to treat and evaluate the incoming sensory stream, depending on what they are doing at the moment. The decisive factor for understanding animal behavior is not the environment, nor the sensory organs: it is the animal itself. Apparently, this profound insight was known long ago and we are just rediscovering it now, in various places, all over the world.

Something was new in all the recent studies, though: they all provide a first mechanistic insight into how brains balance internal and external processing. All these studies show that there seems to be a smooth gradient between decision-making and attention-like processing, even in invertebrates: Gaudry and Kristan call it decision-making when their leeches ‘decide’ to ignore stimuli while they feed, even though the incoming sensory stimuli are blocked already at the very first synapse. Chiappe et al., on the other hand, relate their phenomenon to attention, and Haag et al. also mention attention in their paper, with their effects being observed many synapses downstream of the sensory neurons – the word ‘decision’ does not occur in either of the two papers. It appears as if future neurophysiological research is bound to show that the distinction between attention-like mechanisms and decision-making, which seems so intuitive and clear-cut, may dissolve once we start to unravel how brains actually do it. Now when will we come across the ancient text that already pre-empts that insight?

References:


McEwen, R. (1918). The reactions to light and to gravity in Drosophila and its mutants. Journal of Experimental Zoology, 25(1), 49-106. DOI: 10.1002/jez.1400250103
Zhu, Y., & Frye, M. (2009). Neurogenetics and the “fly-stampede”: dissecting neural circuits involved in visual behaviors. Fly, 3(3), 209-211. DOI: 10.4161/fly.3.3.9139
Gaudry, Q., & Kristan, W. (2009). Behavioral choice by presynaptic inhibition of tactile sensory terminals. Nature Neuroscience, 12(11), 1450-1457. DOI: 10.1038/nn.2400
Maimon, G., Straw, A., & Dickinson, M. (2010). Active flight increases the gain of visual motion processing in Drosophila. Nature Neuroscience, 13(3), 393-399. DOI: 10.1038/nn.2492
Chiappe, M., Seelig, J., Reiser, M., & Jayaraman, V. (2010). Walking Modulates Speed Sensitivity in Drosophila Motion Vision. Current Biology, 20(16), 1470-1475. DOI: 10.1016/j.cub.2010.06.072
Haag, J., Wertz, A., & Borst, A. (2010). Central gating of fly optomotor response. Proceedings of the National Academy of Sciences, 107(46), 20104-20109. DOI: 10.1073/pnas.1009381107
Tang, S., & Juusola, M. (2010). Intrinsic Activity in the Fly Brain Gates Visual Information during Behavioral Choices. PLoS ONE, 5(12). DOI: 10.1371/journal.pone.0014455

Posted on August 18, 2013 at 10:28 1 Comment
Aug16

Scientific discoveries are like orgasms: you can’t have any bad ones

In: blogarchives • Tags: dunning-kruger, incompetence, publishing

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from June 24, 2010:

What is it with non-scientists trying to go and save science from the scientists? First, there’s an English major at The Scholarly Kitchen trying to tell us we should stick with a 400-year-old publishing system despite more modern systems being widely and readily available. Then the same blog sports a post wondering why the 400-year-old system they just touted doesn’t seem to operate as they imagine. So first these guys write a post making sure every scientist who reads it understands that they have no clue, and then they wonder why their clueless and hence baseless predictions don’t materialize? WTF? I have an answer for The Scholarly Kitchen: because what you imagine about how science works has nothing to do with reality!

Then there is a completely weird article in The Chronicle of Higher Education on how we need to get rid of all ‘low-quality’ science. This piece, probably not unexpected by now, was written by English, management, mechanical engineering and geography professors. The lone medical researcher in the group does have a fair number of PubMed articles, but none of them are in one of the supposedly high-quality journals mentioned in the article, so he basically just called himself ‘low-quality’ and thus should be struck from the public record. Moreover, the articles where he was single author or one of two authors are all on clinical practice and testing, raising the tentative suspicion that medical practice, rather than science, is really his strong area of expertise. Which means that, again, none of these guys is actually a scientist, i.e., someone working in physics, chemistry or biology and regularly publishing scientific, experimental papers, which make up the bulk of the scientific literature. If they were, they would know that there is no such thing as a ‘low-quality’ scientific discovery. Scientific discoveries are like orgasms: you can’t have any bad ones. Now, I agree that there are badly conducted experiments, missing control procedures and outright fraud. Obviously, none of these are eliminated by reducing scientific output. The authors make sure that this is not what they mean, as they refer to ‘low-quality’ science as journals or papers that aren’t cited. Obviously, high-profile fraud cases are cited a lot. One reason for low citation counts is that very few scientists understand the topic and/or are interested in it. Clearly, this can change in a heartbeat, and what was boring one day can be all the rage tomorrow. Only someone not steeped in scientific research would be unaware of that. Not surprisingly, this article has received a thorough smackdown in the comment section over there and in the blogosphere.

And then finally, to cap it all off, this completely inane post, riddled with factual errors, ludicrous assumptions and outright slander. The author characterizes himself as the person who trademarked the term “Science 2.0”. Moreover, in the comments, he gives it away: “I’m not a researcher” (as if that wasn’t already blatantly obvious from the post). I wonder why he’s even touching his keyboard such that it generates these nonsensical sentences? This post contains about as much valuable and accurate information as if a monkey had sat on the keyboard. By his own admission, he has about as much competence in this topic as a monkey’s rear end. This guy could probably just go and give it a shot trying out for the LA Lakers – at least he couldn’t be any less qualified than for his chosen topic.

I’m a biologist. Do I go to engineers and tell them how to build bridges? Do I try to play in the NBA? Do I tell BP to just put a lid on it? Sheesh, why are so many people trying to outcompete each other to exemplify the Dunning-Kruger effect these days? These guys are even more pathetic than Rupert Pupkin in The King of Comedy. What’s with this current slate of ignorant imbeciles grandstanding as if they had any relevant competence whatsoever, addressing an international group of hundreds of thousands of professionals with the actual expertise and experience? Where do these guys get the idea that they have anything meaningful and worthwhile to contribute? What’s gotten into them? A different collusion of delusion?


Posted on August 16, 2013 at 09:15 Comments Off on Scientific discoveries are like orgasms: you can’t have any bad ones
Aug15

Flashback: Creationists, this is the evidence you have to beat!

In: blogarchives • Tags: archeology, creationism, evolution, genetics, language

During my flyfishing vacation last year, pretty much nothing was happening on this blog. Now that I’ve migrated the blog to WordPress, I can actually schedule posts to appear when in fact I’m not even at the computer. I’m using this functionality to re-blog a few posts from the archives during the month of august while I’m away. This post is from April 19, 2011:

 

The last decades of research on human evolution have provided an astounding body of converging evidence for an African origin of the human lineage just under 200k years ago, with a subsequent migration across the globe starting around 60k years ago, until all the main regions of this planet were inhabited by humans by around 15k years ago. Compare this scenario to the creationist story, where humans were shaped by a magic man out of clay about 6k years ago, which means it happened just after the Sumerians had invented glue.

What is the converging evidence telling the “Out of Africa” story?

It all started with fossils and artifacts. Archeology, with its own dating techniques and collection methods, suggested a route that looks something like this:


(Image source)

Later, genetic evidence came along. Geneticists, with their own dating techniques and experimental methods suggested migration routes that looked something like this:


(Image source)

After the genetic evidence, came evidence from a bacterium associated with humans: Helicobacter pylori. It lives in our guts and can cause stomach ulcers. It’s been associated with our digestive tract for many thousands of years. Looking at the different strains of these bacteria, microbiologists, using their own dating techniques and experimental protocols, deduced that this bacterium in our guts must have traveled roughly along these routes:

hpylori_evol_small.jpg
(Image source)

Most recently, linguists came along and studied the phonemes that make up 504 different human languages around the globe. These linguists adapted the analysis tools of the geneticists to their own dating techniques and sampling methods, and came up with a map suggesting the following main routes along which the human languages seem to have developed:


(Image source)

Clearly, getting the dates right using phonemes will prove a lot harder than with the previously used dating techniques. Nevertheless, these are four independent lines of evidence, collected over many decades by scientists with vastly different backgrounds and training. Yet the results agree to an astonishing extent.

However, this doesn’t mean it’s ‘true’ or ‘scientifically proven’. It only means that this is the best humans can currently do, and anything new that comes along must not only explain the current congruence of disparate data, but also explain more data than the current ‘Out of Africa’ theory can. A few self-contradictory passages in an ancient text do not even begin to come close to being a contender.

Given this sort of evidence, it becomes rather obvious that creationists are either uninformed or unpersuadable. For the former, information like that in this post should be more than sufficient to falsify the creationist dogma. For the latter, ridicule and derision are the best response.

This post was inspired by Lapidarium Notes.


Pertinent peer-reviewed literature:

  • Green, R., Krause, J., Ptak, S., Briggs, A., Ronan, M., Simons, J., Du, L., Egholm, M., Rothberg, J., Paunovic, M., & Pääbo, S. (2006). Analysis of one million base pairs of Neanderthal DNA. Nature, 444(7117), 330-336. DOI: 10.1038/nature05336
  • Linz, B., Balloux, F., Moodley, Y., Manica, A., Liu, H., Roumagnac, P., Falush, D., Stamer, C., Prugnolle, F., van der Merwe, S., Yamaoka, Y., Graham, D., Perez-Trallero, E., Wadstrom, T., Suerbaum, S., & Achtman, M. (2007). An African origin for the intimate association between humans and Helicobacter pylori. Nature, 445(7130), 915-918. DOI: 10.1038/nature05562
  • Atkinson, Q. (2011). Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa. Science, 332(6027), 346-349. DOI: 10.1126/science.1199295

Posted on August 15, 2013 at 18:16 Comments Off on Flashback: Creationists, this is the evidence you have to beat!
bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License.
