Download
  • Downloads: 1867
  • File Size: 18.02 MB
  • File Count: 1
  • Create Date: October 31, 2018
  • Last Updated: October 31, 2018

Neurobiological mechanisms of spontaneous behavior and operant feedback in Drosophila

Actions are followed by consequences, and each of these consequences has a subjective value. The value assigned to these consequences shapes our future actions, in what is often called “learning by doing”. But how is value conferred in the brain? Biogenic amines have been found to be involved in this process in a variety of animal preparations as well as in humans. To start addressing this question in the fruit fly Drosophila, we have performed experiments in which the flies control the on/off state of different subsets of dopaminergic neurons via optogenetics. With these experiments, the animals report to us whether such neuronal activity is experienced as appetitive or aversive. As a first major result, we discovered that the appetitive or aversive role of these neurons varies across operant (feedback) situations and apparently bears little relation to their role observed in classical (feedforward) situations. In other words, a dopaminergic population of neurons sufficient to serve as an appetitive unconditioned stimulus in classical conditioning may be sufficient to serve as a punishment when brought under operant control. These results suggest fundamentally different neuronal mechanisms underlying operant and classical learning processes, even at the level of the biologically significant stimulus. By using three different experimental setups and analyzing the differences both within and between them, we found that the reinforcing value of some of the tested neurons is context-dependent. We therefore classified the tested neuronal subsets into two groups: context-dependent reinforcers and general reinforcers. To corroborate the results obtained with optogenetically activated neurons, we also performed experiments in which flies were allowed to optogenetically inhibit the same neuronal populations. Reinforcement is essential for operant learning, allowing individuals to find more optimized action strategies at the expense of behavioral variability. We are interested in observing how variability in the behavioral repertoire develops over time while an individual learns. Finally, to determine if and to what degree such operant feedback alters the temporal dynamics of otherwise spontaneous actions, we used nonlinear forecasting methods to analyze the temporal structure of spontaneous choices before and after operant learning. The complementary approaches described here are part of a research program designed to converge on a circuit-level understanding of behavioral variability and its modification by reafferent stimuli.
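The abstract does not spell out the behavioral contingency used to put the dopaminergic neurons under operant control, so the following Python sketch is purely conceptual: read_behavior and set_light are hypothetical callbacks standing in for the real tracking hardware and LED driver, and the "right turn switches the light on" rule is an arbitrary illustration of how a closed-loop (operant) optogenetic protocol lets the animal report whether the activation is appetitive or aversive.

import random

def closed_loop_trial(read_behavior, set_light, duration_s=60, dt=0.05):
    """Conceptual sketch of an operant (closed-loop) optogenetic protocol:
    the fly's own behavior gates the light that activates the targeted
    neurons, so the animal's preference for the light-on state indicates
    whether the activation acts as a reward or a punishment."""
    time_on = 0.0
    steps = int(duration_s / dt)
    for _ in range(steps):
        choice = read_behavior()      # e.g. +1 = right turn, -1 = left turn (hypothetical)
        light_on = choice > 0         # arbitrary contingency: right turns switch the light on
        set_light(light_on)
        time_on += dt if light_on else 0.0
    # Preference index: > 0 means the fly spent more time producing the
    # light-on behavior, i.e. the neuronal activation was sought out.
    return (2 * time_on - duration_s) / duration_s

# Toy usage with a random "fly" (placeholder, not real data):
pi = closed_loop_trial(lambda: random.choice([-1, 1]), lambda on: None)
print(round(pi, 2))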
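The abstract also does not name the specific nonlinear forecasting method applied to the choice sequences. As one common example of such a method, the sketch below implements simplex projection (delay embedding plus nearest-neighbor forecasting) on a placeholder binary choice series; the embedding dimension, prediction horizon, and library/prediction split are arbitrary choices, not values taken from the study.

import numpy as np

def simplex_forecast_skill(series, E=3, tp=1):
    """Simplex projection: forecast each point of the second half of a time
    series from its E+1 nearest delay-embedded neighbors in the first half,
    and return the correlation between forecasts and observations. Forecast
    skill that decays with the horizon tp suggests deterministic temporal
    structure rather than uncorrelated noise."""
    x = np.asarray(series, dtype=float)
    n_vec = len(x) - (E - 1) - tp
    emb = np.array([x[i:i + E] for i in range(n_vec)])      # delay vectors
    target = x[(E - 1) + tp:(E - 1) + tp + n_vec]           # values tp steps ahead
    half = n_vec // 2
    lib, lib_y = emb[:half], target[:half]                  # library set
    pred, pred_y = emb[half:], target[half:]                # prediction set
    forecasts = []
    for v in pred:
        d = np.linalg.norm(lib - v, axis=1)                 # distances to library vectors
        idx = np.argsort(d)[:E + 1]                         # E+1 nearest neighbors
        w = np.exp(-d[idx] / max(d[idx].min(), 1e-12))      # exponential weighting
        forecasts.append(np.sum(w * lib_y[idx]) / w.sum())
    return np.corrcoef(forecasts, pred_y)[0, 1]             # forecast skill (rho)

# Example on a placeholder choice sequence (0/1 coded turns, not real fly data):
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, 500)
print(simplex_forecast_skill(choices, E=3, tp=1))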
