Universities worldwide currently face a pivotal choice: should they contribute to building a global infrastructure for exchange, science, and discourse, free from the control of oligarchs, to promote democracy, human rights, and digital participation? Or should they continue advertising on private networks, hoping for clicks and marginally increased student enrollment? The Fediverse serves as a litmus test for universities globally: will they align themselves with the likes of Trump, Musk, Zuckerberg, Bezos, et al., or actually enact their mission statements by counteracting privatized disinformation and the ensuing erosion of democracies?
Platforms such as FriendFeed, StudiVZ, and Twitter have historically functioned as digital spaces where academics, researchers, and students gathered to discuss science, exchange ideas, and foster progress. These platforms, however, shared a common fate: they were sold to private owners, subjected to market forces, and eventually fell victim to what Cory Doctorow called “enshittification”—a cycle of increasing commercialization and exploitation, ultimately leading to user abandonment. Currently, Elon Musk’s rebranded platform X (formerly Twitter) is undergoing precisely this process of enshittification and mass user exodus.
Unlike platforms such as Facebook or WhatsApp, which prioritize connections among friends/acquaintances and operate as “social graphs,” platforms like FriendFeed, StudiVZ, and Twitter were designed to establish “interest graphs,” enabling users to connect based on shared interests. This distinction explains why scholars and those engaged in intellectual pursuits have traditionally gravitated toward the latter platforms. As X collapses under its new management, these users are now migrating to platforms like Meta’s Threads or Twitter founder Jack Dorsey’s Bluesky. Yet, it is only a matter of time before the same cycle of enshittification and exodus repeats on these platforms.
A unique opportunity now presents itself, one that was unavailable during the decline of FriendFeed or MySpace. Universities, in particular, stand to benefit from this alternative: the Fediverse. The Fediverse offers an avenue to translate their mission statements into tangible action, in particular for institutions that are committed to “creating systematic spaces for learning and experience to promote sustainable development,” that are aware that “their credibility is measured by how exemplary sustainable solutions are implemented within their sphere of responsibility,” that uphold “openness, transparency, and participation” as fundamental principles, that ensure “equality of opportunity as well as freedom from discrimination at all levels,” that consider the “plurality of worldviews and ways of life,” and that maintain a “critical distance from political and societal power” (quotes from the mission statements of some Berlin universities). These same principles inspired universities and public institutions over 30 years ago to develop the decentralized Internet and email, technologies that remain resistant to takeover by private entities to this day. Since 2018, the Fediverse has operated on similar principles of decentralization, utilizing public protocols to enable communication between independent servers. Among its many services, Mastodon is especially relevant for universities.
Mastodon functions much like other microblogging platforms—users can post short updates, which are visible to their followers. However, Mastodon differs in its decentralized architecture: users choose their “Mastodon instance,” akin to selecting an email provider. Instances federate with one another, but unlike email, problematic instances can be excluded from the network. For example, Trump’s “Truth Social” is a Mastodon instance that does not federate with others, preventing Mastodon users from seeing posts on Truth Social unless they belong to that specific instance. Universities could harness this decentralization to establish their own instances, shaping the discourse in alignment with their values, rather than the profit-driven motives of private platforms. By doing so, they could create spaces committed to democratic principles and intellectual engagement, free from the manipulations of platform owners.
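To make the federation model a bit more concrete, here is a deliberately simplified sketch (in Python, with made-up instance names) of how independent instances exchange posts unless one side excludes the other. It is a conceptual illustration only, not the ActivityPub protocol and not Mastodon's actual implementation.

```python
# Toy model of Fediverse federation and defederation ("domain blocks").
# Conceptual sketch only; not the ActivityPub protocol and not Mastodon's
# actual code. All instance names are hypothetical.

class Instance:
    def __init__(self, domain, blocked=None, open_federation=True):
        self.domain = domain
        self.blocked = set(blocked or [])   # domains this instance excludes
        self.open_federation = open_federation

    def federates_with(self, other):
        """Posts flow only if both sides federate and neither blocks the other."""
        if not (self.open_federation and other.open_federation):
            return False
        return (other.domain not in self.blocked
                and self.domain not in other.blocked)

uni = Instance("mastodon.example-university.edu")
colleagues = Instance("fediscience.example.org")
walled_off = Instance("truth-social.example", open_federation=False)

uni.blocked.add("disinfo-instance.example")   # exclude a problematic instance

print(uni.federates_with(colleagues))                            # True
print(uni.federates_with(Instance("disinfo-instance.example")))  # False
print(uni.federates_with(walled_off))                            # False
```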
This makes it immediately clear why it is futile to engage with the likes of Musk and his fanbois on his own platform. It is as pointless as trying to run a “beware the beginnings” campaign in a Nazi pub where everyone simply covers their ears and sings the “Horst-Wessel-Lied“. Similarly, attempting to defend democratic values on Musk’s platform is futile. Not only are these values constantly trampled on from all sides, including by the platform’s owner, but posts defending such values have no chance of being heard. They are systematically suppressed, while content promoting extremist and anti-democratic ideologies—hate, disinformation, racism, sexism, antisemitism, and conspiracy theories—is actively amplified and continuously disseminated. In such a hostile and inhospitable environment, meaningful democratic debate is no longer possible. The same vulnerabilities inherent in X are likely to eventually afflict other private platforms, such as Bluesky, as they succumb to market forces and the insatiable greed of their owners. A more effective approach would be for universities to establish their own digital spaces, reflecting their core values and fostering future generations’ commitment to democratic ideals.
While the current migration of users from X to Bluesky may resemble desert wanderers seeking water in the Namib, there are reasons for their choices. For one, Bluesky has invested millions (including funding from Trump-aligned investors like Blockchain Capital) to mimic Twitter’s interface. Mastodon, by contrast, lacks such resources and, moreover, has deliberately avoided implementing addictive algorithms and toxic design choices. Second, once a small initial imbalance had emerged, the migrating users were followed by their, ahem, followers, so far mostly to Bluesky. And third, the absence of widespread institutional support for Mastodon since its inception in 2018 has contributed to its slower adoption. As a result, there is no widespread infrastructure for the Fediverse today, akin to what institutions established 30 years ago for web and email servers—technologies that have withstood neo-feudal control and continue to serve as the backbone of online life. From this perspective, universities could arguably be seen as bearing some responsibility for the rise of disinformation and populism, as they failed to proactively support the development of a democratic, truth-oriented, digital public sphere in a timely manner.
Universities must now decide how they will position themselves in the realm of social media. To date, most institutions have failed to reconcile their mission statements with their social media strategies. Rarely have they implemented “exemplary sustainable solutions,” promoted “equality and freedom from discrimination,” or maintained “critical distance from political and social power.” Instead, many seem to confuse social networks with some sort of cheap television: one can post advertisements for free and see the “ratings” as follower counts. This mentality explains the initial hesitation to abandon X (“so many followers are still there!”) and now the gradual shift away from it (“we don’t want to appear complicit!”). Such behavior reflects the broader impact of neoliberal ideologies that have reshaped universities into competitive, market-driven entities, eroding their commitment to collaboration and collective progress. The past 30 years of neoliberal ideologization through “New Public Management,” “Corporate Universities,” and “University as a Business” have evidently achieved striking success. Each institution prioritizes its own interests in the competition for students and funding, systematically neglecting synergies through cooperation. “Show over substance” reduces the published mission statements to idle gossip. In times of tight budgets and heightened competition (when are budgets ever not tight?), it is hardly surprising that university administrations would rather allocate two full-time equivalents to polishing the data required by ranking agencies than to genuinely improving university operations—such as setting up a Mastodon instance. The oft-cited (and likely misattributed) dictum from Albert Einstein that “not everything that can be counted counts, and not everything that counts can be counted” seems to have been entirely forgotten.
In the coming months, universities’ actions will reveal whether they succumb to neoliberal dogma or uphold their mission statements. Who will leave X when? Where is which university migrating to? Which institutions will establish Mastodon instances? One could use such numbers for a new kind of university ranking…
For 14 years, the main research funding agency in Germany, the German Research Foundation (DFG), has stated in its guidelines that submitted grant proposals will be assessed primarily on the basis of their content, rather than by counting the applicants’ previous publications. However, not all of the DFG’s panels seem to be on board.
In the so-called “normal application procedure” of the DFG, research grant proposals are evaluated by a study section panel after formal peer review. This panel then recommends the proposed projects to the main funding committee either unchanged, with a reduced budget, or not at all. In times like these, when the number of eligible applications always exceeds the budget, it is not uncommon to find budget cuts even for approved applications. So when one of my own grant proposals (an extension of a previously funded grant) was evaluated recently, I wasn’t surprised to find that one of the two doctoral positions I had requested had been cut, rendering the proposed project unfeasible. This wasn’t the first time that such cuts had forced us to use the approved funds for a different project.
In this case, however, one aspect was different from all previous similar cases, which irritated me quite a bit. The “Neuroscience” panel, which is responsible for my proposals, provided a total of two sentences to justify the cutback:
“However, we consider the progress of the first funding period to be rather modest. Even taking the pandemic conditions into account, the output of […] the first funding phase […] with one publication in Journal X, one preprint […] and four abstracts can only be considered moderate.”
What is so irritating about these two sentences is the emphasis on “content assessment” in the DFG guidelines for research assessment. The content of the two publications, which the panel considers “moderate” in number, represents no less than the answer to a research question that first brought me into the neurosciences as a student – and that I have fought hard to answer for thirty years now. Such a “content assessment” may be irrelevant to the “Neuroscience” panel despite the DFG guidelines, but for our research it was the big breakthrough after three decades of painstaking work.
Apart from the question of why a panel of elected professors is needed to count the publications of an applicant, the DFG has been keenly aware of the problems that arise when using publication metrics (such as counting publications or publication venues) since at least 2010. In the following, it is important to distinguish between the DFG as a collective organization (it is classified as a registered non-profit in Germany) with its head office and employees on the one hand, and its reviewers, study section panels and other committees on the other. The positions of individual committees or members of the DFG do not always necessarily have to correspond to the position of the DFG as an organization. When “the DFG” is mentioned in this post, I’m referring to the collective organization. In all other cases, members or committees are explicitly named.
Pernicious incentives
Fourteen years ago, the DFG changed the rules of its research assessment to limit the use of metrics. At the time, it reduced to ten the maximum number of publications that may be submitted in support of a grant proposal, and this was its justification:
In the course of performance assessment […] it has become increasingly common to create quantitative indicators based on publication lists and to allow these to replace a content-based assessment of scientific work. This puts a great deal of pressure on scientists to publish as many papers as possible. In addition, it repeatedly leads to cases of scientific misconduct in which incorrect information is provided in the publication list regarding the status of publications. […] The DFG regrets these developments […]. However, it sees itself as obliged […] to emphasize that scientific content should be the deciding factor in evaluations in DFG procedures. The limitation of publication information to a smaller number of publications is associated with the expectation that these will be appropriately evaluated in terms of content during the review and funding decision-making process. [1]
For 14 years now, applicants have therefore only been allowed to list a maximum of ten publications in their CVs when applying to the DFG. The aim here is that scientific and non-numerical criteria should play the decisive role in research assessment. This goal was apparently so important and central to the DFG that these ideas were even incorporated into the DFG’s 2019 code of conduct, “Guidelines for Safeguarding Good Scientific Practice” (the “Kodex”):
Performance evaluation is based primarily on qualitative criteria, with quantitative indicators only being included in the overall evaluation in a differentiated and reflective manner. [2]
It was therefore only logical that the DFG signed the “San Francisco Declaration on Research Assessment” (DORA) in 2021 [3] and, by doing so, committed itself to…
… not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions. [4]
This also emphasizes that when the DFG talks about “quantitative indicators”, it is not referring only to the number of publications, but also to the reputation of the respective publication venues (“impact factors”). Although this had already been implied by the primacy of “scientific content” since 2010, it was publicly clarified once again by signing DORA.
The DFG’s 2022 position paper “Scientific Publishing as the Basis and Design of Scientific Assessment” [5] doubles down on these concepts:
The central task of the funders – and as such, of course, also of the German Research Foundation – is therefore to ensure that the evaluation of scientific performance is first and foremost based on the scientific content. The reputation of the publication venues and bibliometric indicators are therefore to be removed from the canon of official evaluation criteria, where they exist, and their practical use is to be minimized.
And to make it absolutely clear once again that this was exactly what the above quote from the 2019 Kodex intended to mean:
A focus on bibliometrically oriented assessment of scientific performance at the level of individuals sets incentives for behavior contrary to the standards of good scientific practice as defined by the DFG Kodex.
After all these developments, it was not surprising that the DFG also became a founding member of CoARA in 2022, which entails the following “Core Commitment”:
Abandon inappropriate uses in research assessment of journal- and publication- based metrics [6]
This multitude of documents and text passages serves to show that the guidelines and efforts of the DFG as an organization with regard to research assessment have been very clear and consistent over the last 14 years. One could summarize them as: ”It is not in line with our concept of good scientific practice to count the number or reputation of publications or to place them at the center of research assessment.” It needs to be emphasized that this is a solidly evidence-based policy: both the reputation of journals and the number of publications correlate with unreliable science [7]. Thus, this development within the DFG over the last 14+ years did not arise out of some overregulatory zealots putting bureaucracy before science, but out of using the best available evidence in pursuit of the best possible scientific practice.
Study sections not included
The DFG is not alone in these developments. It is working within a pan-European phalanx of research organizations, whose perhaps greatest success to date has been to convince the EU Council of Science Ministers that there now is sufficient evidence to push ahead with a far-reaching reform of the scientific publishing system [8]. We live in times when one cannot praise such consistently evidence-based policy enough. What the DFG has achieved here is groundbreaking and a testament to their scientific excellence. The DFG is thus demonstrating that it is a progressive organization, spearheading good scientific practice, embedded in a solid evidence base and international cooperation. It is therefore completely understandable that applicants now would assume that their performance assessment by the DFG is no longer based on quantitative indicators, but on scientific content.
But it may seem as if the DFG did not quite anticipate the reactions of their panels? Or maybe the opposition to evidence-based research assessment described above was just an exception in a single, extreme study section panel? If an article in the trade magazine “LaborJournal” is anything to go by, such reactionary views may still be widespread among DFG panels. Last year, the DFG panel “Zoology” issued a statement in the LaborJournal that they do not feel bound by the DFG’s evidence-based guidelines [9]:
It is not easy to select the most suitable journal for publishing research results in a changing publishing system. We would like to share some thoughts on this, so that the expectations of decision-making bodies can also be included in the applicants’ publication strategy.
Even though it is not explicitly worded as such, all applicants nevertheless immediately understand that the considerations that follow these sentences imply instructions on where to publish in order to meet the expectations of decision-making bodies such as the “Zoology” panel. To avoid any misunderstandings, the “Zoology” panel disambiguates:
Therefore, you should mainly publish your work in journals that enjoy a good reputation in the scientific community!
Everybody knows what the words “good reputation” stand for. After all, several studies have found that precisely this “good reputation” correlates exceedingly well with the impact factor [10] that the DFG has committed itself to not using with DORA/CoARA – and which in turn correlates with unreliable science [7]. And as if to ensure that really everybody gets the message, the “Zoology” panel re-iterates again which journals they are recommending for applicants to maximize their chances of getting funded:
Therefore, publications in selective journals with a high reputation will continue to be an indicator of past performance.
At least the author of these lines is tempted to continue the sentence with: “… no matter what the DFG thinks, decides or signs”.
The panel managed to make it crystal clear to all applicants what they were referring to, without having to mention any red-flag words such as “impact factor” or “h-index” (plausible deniability). The term “dog whistle” was coined for such an approach: coding controversial statements such that the target audience understands exactly what is meant – without provoking opposition from anyone else.
Perhaps not surprisingly, the “Zoology” panel does not shy away from applying their recommendations in their funding decisions. For example, the panel rejected the first grant proposal from an early career researcher (ECR), although it agreed with the two reviewers that the project itself deserved funding. Among the reasons the “Zoology” panel listed for nevertheless rejecting the grant proposal, it cited the ECR’s merely “average publication performance” as the deciding factor, without dealing with the content of the publications and disregarding that the relevant time period not only included the Covid pandemic but also the ECR’s parental leave. It is hard to imagine a more pernicious demonstration that where and how much one publishes still remains an essential funding criterion for this panel, no matter the DFG guidelines.
These examples show that neither the “Neuroscience” nor the “Zoology” DFG panels find anything wrong with setting precisely the incentives the DFG finds so objectionable: “A primarily bibliometrically oriented assessment of scientific performance at the level of individuals sets incentives for behavior contrary to the standards of good scientific practice.” On the contrary, it appears as if none of the panels have developed any “mens rea” when it comes to quite openly – and in one case even publicly – violating long-standing DFG policies.
In science, the principle of Occam’s razor applies: “Entities must not be multiplied beyond necessity”, entailing that, e.g., of two otherwise equivalent hypotheses, the simpler one should always be preferred. Probably somewhat less well known is Hanlon’s razor, which similarly requires a decision between hypotheses: “Never attribute to malice that which is adequately explained by incompetence”. Could it be that the panels were simply unaware of the DFG’s guidelines and commitments? Is it possible that the last 14 years of developing these guidelines have passed the panels by without a trace? When I approached the DFG in this regard, the DFG employees I interacted with seemed slightly exasperated when they emphasized that of course all panels were thoroughly briefed before they start their work, that these briefings were a long-standing practice at the DFG and that they of course also included the rules for research assessment.
This seems to remove any last doubts: the DFG panels are all familiar with the guidelines and know that it does not correspond to the DFG’s concept of good scientific practice to count the number or reputation of publications. What then could possibly motivate some DFG panels to publicly take up a position directly opposite of the DFG’s established policies for research assessment? Ultimately, only the individuals on the panels can provide an accurate answer, of course. Until then, we can only speculate, but it would not be the first time that renowned researchers do not take it lightly when they are told what good scientific practice is or that their methods and views are outdated (see, for example, “methodological terrorists”, “research parasites” or “nothing in their heads” [11]).
Slow cultural change?
Following Occam and Hanlon, a straightforward interpretation of the above DFG panel behavior would be that some more reactionary panels simply do not accept that counting publications and reputation is now out of bounds under DFG rules. If that interpretation were correct, the first round in such a power struggle would have gone to the panels. It seems the DFG is not willing (at least so far) to enforce their guidelines. One of the reasons I was given is that 14 years was too short a time frame and that, in particular, the DORA signature was only three years old. The expressed fear was that enforcement at such an early stage of the evaluation reform process could lead to a strong backlash from the panels, which the DFG wants to avoid at all costs.
In principle, the DFG would have ways and means of imposing appropriate sanctions for scientific misconduct by researchers or reviewers. The list of possible consequences is defined in the document “Rules of Procedure for Dealing with Scientific Misconduct” [12]. However, this document does not (yet?) list flouting DFG policies among the list of punishable actions. Perhaps the contents of this list ought to be re-considered? Maybe the fact that the DFG’s “Research Integrity Team” did not even consider a ‘preliminary examination’ in this case tells us something about the priority the DFG internally assigns to the reform of research assessment?
As one more reason justifying the lack of enforcement of their guidelines, the DFG’s “Scientific Integrity Team” stated that the panels who had made these decisions had just been dismissed after their terms had run out and that new panels had just been elected. This would provide the DFG with another opportunity to emphasize this topic during their briefings. Indeed, it was confirmed by members of these newly elected panels that the DFG specifically highlighted the assessment guidelines in the webinar training sessions for the new panels. The DFG is thus not complacent at all, but is opting for a slower, voluntary cultural change instead of effectively enforcing its guidelines.
While one can obviously sympathize with the DFG approach for any number of good reasons, for all applicants it means maximum uncertainty: Do the DFG guidelines apply, or don’t they? Should you continue to keep the sample sizes small and thus publish faster – or should you aim for the necessary 80% statistical power after all? Do you make an ugly data point disappear so that the Nature editor also likes the result – or do you publish honest data in the “Journal of Long Titles”? Do you continue to pursue salami slicing – or invest more effort into making your science reproducible? Do you adjust the p-value downwards and only upload tabular data – or do you implement Open Science with full transparency? And anyway: which of the possible projects you could apply for is the one out of which you could squeeze the most publications?
For ECR applicants without permanent positions, these are all existential questions, and the answers are now completely open again. These ECRs are generally more affected by all negative evaluations than tenured professors – and are therefore particularly vulnerable to this form of uncertainty. For those among them who wanted to rely on the 14 years of DFG practice, including their commitments under DORA/CoARA, the above-mentioned bibliometric assessments and the article by the “Zoology” panel in the Laborjournal must seem like an open mockery of good scientific practice. Maybe the applicants whose grant proposals will be judged by the panels mentioned here ought to take a close look at the lists of “Questionable Research Practices” [13], because although they increase the likelihood of unreliable science, they promise more and higher-ranking publications – as required by these panels.
What message is the DFG sending to the future generation of researchers in Germany when it leaves such obvious non-compliance unanswered – and instead just hopes that the new panels might show a little more understanding for evidence-based policies or at least be a little bit more amenable to DFG webinars? What is the consequence if the goal of a research project is no longer to answer a scientific question, but rather to obtain a maximal number of publications and their highest possible ranking?
There is also the question what the organizations behind DORA and CoARA have to say about all this. When I asked them, both organizations indicated that the panels seemed indeed to have run afoul of DORA/CoARA agreements, but that they did not have the resources to investigate individual cases. Their resources were just enough to promote the reform of research assessment and to take care of members. The review and enforcement of the voluntary commitments had to be taken over by someone else.
Rewarding unreliable science
Everyone knows that voluntary commitments are useless if their violations remain without consequence. The behavior of the DFG panels is just one example of such toothless commitments. How will disillusioned researchers, who may have had high hopes for DORA/CoARA, react if the institutions’ voluntary commitments ultimately turn out to be mere symbolic politics without consequences? The efforts to modernize research assessment, as described above, are based on the evidence that the race to submit more and more publications to the highest-ranking journals rewards unreliable science and punishes reliable science [7]. The elimination of the number of publications and the reputation of the journals from research assessment, the logic goes, would also eliminate significant drivers of unreliable science.
Ultimately, the aim of research assessment reform is to instill in authors and applicants the certainty that reliable science will now be rewarded – regardless of where and how much they publish. However, if the DFG does not soon win the panel lottery, the fear remains that eventually nobody will feel bound by any such policies or obligations anymore. The resulting uncertainty among authors and applicants would completely undermine the international efforts to reform research assessment, at least in Germany. In this case, the only risk-averse strategy remaining for authors and applicants would be, then as now, to publish as much as possible at the highest possible rank – with all the well-documented consequences.
It has been almost 10 years now since we came to the realization that a particular type of our operant experiments can be classified as motor learning. In such “operant self-learning” experiments, the animal learns about the consequences of its own behavior and adjusts future behavior accordingly. In this experiment, a tethered fly, flying stationarily in an otherwise featureless environment, is trained to avoid/prefer one of two turning directions. The fly is tethered to a torque meter, measuring the torque around the vertical axis of the fly, corresponding roughly to left or right turning attempts. A punishing heat beam provides feedback to the fly as to which attempted turning direction (torque) it is trained to prefer/avoid:
Fig. 1: An infrared heat beam provides feedback for the tethered fly in Drosophila yaw torque learning, a motor learning task. A fly, tethered to a torque meter, will spontaneously attempt various flight maneuvers, even if its environment is spatially homogeneous and temporally static. The torque meter measures the torque around the vertical axis of the fly, corresponding approximately to left (A) and right (B) turns, respectively. An infrared laser (red line in A), focused from behind on the fly, provides negative feedback for one of the two turning directions (in this example left turns). Half of the flies are punished for left turning attempts and half for right turning attempts.
At this year’s Society for Neuroscience meeting in Chicago, Illinois, we will present two posters revolving around this experiment. Both posters will be presented in the same session, on Sunday morning, October 6, 8am-noon, poster boards O11 and O28.
One of the posters, presented by Dr. Radostina Lyutova, will present progress on our long-running project in which we have discovered many different ways in which we can improve the flies’ learning ability: we have mutant flies, transgenic flies with up- or downregulated genes or silenced neurons – they all have in common that the self-learning process is enhanced, i.e., the learning requires less training than in the unmanipulated control flies. Click on the image below to download the poster:
I will present the second poster, which contains evidence that the neurons that are modified by the operant training are motor neurons in the ventral nerve cord of the animals. To be specific, it looks as if the plasticity is happening in the motor neurons innervating the steering muscles that set the wing position during turning maneuvers:
If you want to learn more about our research, come visit us at the posters on Sunday morning in Chicago!
Posted on October 1, 2024 at 17:28 | Motor learning at #SfN24
In a discussion about what decisions are, John Krakauer emphatically pronounced that “decisions happen for reasons”, answering ‘no’ to my question of whether it isn’t a decision which foot to start walking with from a stand-still.
A recent article from the laboratory of Carolina Rezaval in Birmingham studied a decision-making process in male Drosophila fruit flies where the reasons for each decision seemed apparent. When a male fly is presented with a looming stimulus (which mimics a threat from a nearing predator) while in the early stages of courtship, it stops courting and either freezes or tries to escape. In later stages of courtship, however, the same threat stimulus is no longer able to interrupt courtship and elicit escape. While it is less clear if the male fruit fly is able to articulate them, the reasons for him to behave in that way seem obvious: in the early stages of courtship, the male escapes as it is still not clear if the courtship attempts will be successful. In a later stage, once sufficient information has accumulated about the receptivity of the female and/or copulation is close, it’s worth trading some safety for the prospect of getting one’s genes into the next generation. In biology, such reasons are commonly called “ultimate causations” of the behavior, while the neurobiology by which they are implemented constitutes the “proximate causations”. The recent article concerned the proximate causations of these decisions in Drosophila.
In the early stages of courtship, the authors found, a class of visual neurons in the fly’s optic lobe called LC16 detect and respond to the threat stimulus. Via several other neurons, these LC16 neurons connect to P1 neurons in the fly brain, which collect multimodal information (e.g., female sensory cues) and can mediate courtship behavior. If LC16 neurons are activated by a threat stimulus, downstream neurons release serotonin, which inhibits the P1 neurons, halting courtship. This inhibition between different behavior circuits is a common theme observed in many other preparations, preventing two behaviors occurring at the same time. In this case, LC16 neurons mediate escape behavior and inhibit other behaviors, such as courtship, which could interfere with the escape.
The fact that LC16 no longer inhibits courtship at later stages implies that there is more to this story than the simple inhibition known from so many other experiments with different animals. In the course of courtship, as the duo gets closer to copulation, a population of dopamine neurons in the male slowly ramp up their activity. These dopamine neurons make direct synaptic connections with LC16 neurons and inhibit them. In other words, the lack of escape is not due to a failure of LC16 to inhibit courtship, but due to a failure of LC16 to respond to threat stimuli.
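To illustrate the logic of this circuit, here is a deliberately minimal toy rate model. This is a sketch of my own, not the model or code from the paper; the sigmoid and all numbers are arbitrary illustrative assumptions. The point is only that the threat input drives LC16 unless courtship-related dopamine has silenced it, while LC16 activity in turn suppresses the courtship-promoting P1 neurons via serotonin.

```python
# Minimal toy rate model of the mutual inhibition described above.
# NOT the circuit model from the paper; the nonlinearity and all numbers
# are illustrative assumptions only.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))

def respond(threat_input, courtship_progress):
    """Return (escape drive, courtship drive) for inputs in [0, 1]."""
    dopamine = courtship_progress                  # ramps up as courtship proceeds
    lc16 = sigmoid(threat_input * (1 - dopamine))  # dopamine inhibits LC16 directly
    serotonin = lc16                               # LC16 drives serotonergic inhibition
    p1 = sigmoid(courtship_progress * (1 - serotonin))  # serotonin inhibits P1
    return lc16, p1

# Early courtship: the threat wins, courtship is suppressed.
print(respond(threat_input=1.0, courtship_progress=0.2))
# Late courtship: dopamine has silenced LC16, courtship continues despite the threat.
print(respond(threat_input=1.0, courtship_progress=0.9))
```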
Taken together, much like what was known from other preparations, there is a mutual inhibition between circuits mediating different behaviors: escape behavior circuits inhibit courtship circuits and courtship circuits inhibit escape circuits. Until now, such mutual inhibition was mainly studied in central-pattern-generator driven behaviors such as swimming/feeding in molluscs (cited by the authors). In these cases, the inhibition was identified very close to the motor side of the circuits. In the case of male Drosophila, the relation of the neuronal processes appears much closer to sensory input: Both LC16 and P1, while not being sensory neurons, respond to sensory stimuli: LC16 to visual threat and P1 to female sensory cues. Their inhibition can be interpreted as a lack of sensation. The authors themselves do not seem to have made up their minds about this. While they start their article by emphasizing the animals’ “decisions that require balancing opportunities and risks”, title and abstract of their work invoke more attention-like processes by using words such as “love-blind”. Which is it, decision or attention?
Is the decision to court or to escape really a decision, where the “risks and opportunities” are weighed and then one of the two options is chosen, or is this just an example of sensory competition, where eventually one “blinds” the other, such that the losing stimulus is simply not perceived? If only one of the two options is perceived, it can no longer be a decision, can it? As neither LC16 nor P1 are directly involved in sensing the stimuli, it is fair to assume that both the looming stimulus and the female sensory cues are processed appropriately even if either of the two classes of neurons is inhibited. However, if the perceived looming stimulus is no longer assessed as a threat, or the female sensory cues are no longer perceived as sexually attractive, aren’t the animals then just simply responding to the single remaining stimulus, without needing any reasons? As an aside: if so, what are the ‘losing’ stimuli perceived as?
The flies cannot tell us what they perceive and how, but this is not the first time that an animal’s behavior that outwardly looks, sounds and smells like a decision loses these properties upon neuronal inspection. Leeches commonly respond with local bending of their body wall to mechanosensory stimulation such as light touches. Increase the intensity of the touch a little, and the animal will start crawling away from it; with further increases, the animal will start swimming to escape. However, when the animal is feeding, not even the strongest mechanosensory stimuli can get the animal to swim or crawl away. The reasons for this decision are clear: if the leech has attached itself to the animal it is sucking blood from, then that animal is likely moving around in the pond or stream where the leech lives. Given the common environments for this leech species, with plenty of vegetation in the water, the moving animal will likely touch a number of obstacles, and chances are that these obstacles will also touch the leech. If the leech were to start crawling or even swimming away in that phase, it would not be able to obtain a sufficient blood meal for survival and procreation. So the leech will literally hold on for dear life until it has had a sufficient meal.
Decades of research have provided a pretty good understanding of the circuitry for the processing of mechanosensory stimuli in the leech. Way back in 2009, Gaudry and Kristan reported on the neuronal mechanisms underlying the decision of feeding leeches to ignore mechanosensory stimuli. Analogously to the male flies where a sufficiently progressed courtship inhibits the transmission of threat stimuli, feeding in the leech also leads to the release of a biogenic amine, in this case serotonin. Serotonin then inhibits transmitter release from the mechanosensory neurons. Conceptually very similar to the male flies, the sensory stimuli are still perceived by the relevant sensory neurons, but the transmission of the signal is blocked via central processes. In male flies, dopaminergic neurons active during courtship inhibit threat transmission during courtship, in the leech, serotonergic neurons active during feeding inhibit mechanosensory transmission during feeding. In the case of the leech, the authors take an unambiguous position: Title, abstract and text all call the inhibition of responses to mechanosensory stimulation during feeding a ‘choice’ or ‘decision’.
In both flies and leeches, their behaviors were tied closely to sensory stimuli: looming stimuli, female cues, blood, touch. It is precisely because of these stimulus situations, that we can articulate the reasons for the decisions the animals make. The animals’ decision appear very “reasonable” for the human observer. However, the two examples above are just two of a growing list of examples where this very intuitive sense of the behavior being a “decision” starts to disappear with our understanding of how the neurons are doing it. If one of the two stimuli the animal has to decide about gets shut out so that only one remains, where is the decision?
A classic example of a decision in humans is “red or white wine?” If human nervous systems mediated such a decision analogously to flies and leeches, the decision would be that between, say, an empty glass and a glass filled with wine – very different from how we envisage and experience such decisions. But then again, the decisions studied in the examples above are life-and-death decisions for the animals. Everybody knows that decisions about loved ones are clouded by our love – the word “love-blindness” exists for a reason. Everybody has experienced products in a grocery store becoming much more attractive when we are hungry. It is conceivable that our minds similarly bend our sensory experiences when in a life-threatening situation. If the animal examples are anything to go by, would decisions with reasons be more like reactions to stimuli and less like actual decisions?
One can think of two arguments why studying the neurobiology of decisions without reasons may actually be a more fruitful endeavor for understanding decision-making, one conceptual and one practical.
Conceptually, as long as we haven’t understood the whole process, we can never be sure how much choice any human or animal ever has in a given situation. We simply cannot know how coercive, ahem, compelling the reasons for the decision actually are. Practically, it is also not the most efficient way to just study one decision after another until one happens to stumble upon one where the ‘reasons’ aren’t really just competing stimuli eliciting responses where, ultimately, one wins in a very deterministic way. And even if one finds such a lucky situation, as seems to be the case with photopreference in flies, even then the practical problem remains that, without complete knowledge of sensory processing, it is exceedingly difficult to know if one’s manipulations are actually affecting the decision-making process itself or just some aspect of sensory processing.
A decision requires that there actually are at least two real options. In the animal examples above, it looks, to the observer of the behavior, like there were two such options, but once the neuronal mechanisms mediating the ‘choice’ have become clear, one of the options was gone. This is not the case when studying decisions that have no reasons at all. Under such circumstances, the decision is always 50-50. The outcome is, if the experiment is properly designed, unpredictable; it happens exclusively in the animal and cannot be dictated by external stimuli. There are many such examples: isolated buccal ganglia of the sea slug Aplysia produce different motor programs in a petri dish, without any sensory organs. Analogously, isolated leech nervous systems start and stop producing swimming motor programs in the dish. Tethered fruit flies produce different flight maneuvers even if flying stationarily inside the center of a ping-pong ball. In these cases, there are no obvious, outwardly discernible ‘reasons’ for the animals/nervous systems to pick one behavior over another. The reasons for the decision come solely from within the animal itself. Conceptually, these situations appear much closer to what we call “decisions” and, practically, they avoid the sensory confounds altogether.
These two kinds of decisions have traditionally been referred to as “picking” and “choosing”, following a seminal 1977 paper by Ullmann-Margalit and Morgenbesser.
So what really is a decision and how do we study them in neuroscience? Picking or Choosing?
In human psychology, there is a distinction between “judgments” and “decisions” (i.e., the judgment and decision-making, JDM, framework):
‘Judgments’ refer to how individuals acquire and process information to arrive at an understanding of the situation or state of the world. ‘Decisions’ refer to the processes by which individuals use judgments to arrive at a course of action. In other words, individuals can vary in two main ways: how they ‘see’ the world, and what they decide to do about it.
It appears that the leech and fly work cited above merely concerns the “judgment” aspect of JDM, leaving the decision-making untouched. In this JDM framework, does it not seem wise for neuroscience experiments to circumvent the practical problems of judgments by studying decisions that happen in the absence or at least with equivalence of sensory stimuli?
Posted on September 6, 2024 at 13:47 | What is a decision?
I was very excited when our latest research paper came out; after all, I was confident that our 30-year-long search for the sites of plasticity underlying the form of motor learning we study was coming to an end. In this work, we were fairly confident that underlying the type of learning we study was a novel form of plasticity in a very specific set of motor neurons in the ventral nerve cord of the flies we use for our research. The reason for our confidence was the convergence of several lines of genetic evidence:
1. inhibiting all PKCs (protein kinase C) in motor neurons abolished motor learning
2. knocking out just aPKC (atypical PKC) in motor neurons also abolished motor learning
3. knocking out aPKC in neurons that also express the gene FoxP abolished motor learning (and FoxP is also expressed in motor neurons and is itself necessary for motor learning)
4. knocking out FoxP outside of the ventral nerve cord had no effect
In addition to these genetic lines of evidence, we looked at the expression patterns of the two genes we know are required for motor learning: aPKC and FoxP. #3 in the list of genetic evidence above suggests that aPKC is required in FoxP neurons, so we looked for neurons in the fly nervous system that co-express both aPKC and FoxP. The first finding was that there don’t seem to be any such co-expressing neurons in the brain, just in the ventral nerve cord. We (and by ‘we’ I mean our collaborator in Mainz, motor neuron expert Carsten Duch) identified some of those co-expressing neurons in the ventral nerve cord as the steering motor neurons that control the muscles that position the wings during turning maneuvers. This last bit made us all excited: in our motor learning task, we train flies to make specific turning maneuvers, i.e., left or right turns, respectively. In our experiments, before we train the animals, they make just as many left turns as right turns. Then we train them to perform just one kind of turn, such that, after training, the flies preferentially turn in, say, the right direction, if that was the direction they had been trained to prefer. Finding that the motor neurons involved in these kinds of behaviors also express both aPKC and FoxP supported the genetic evidence that these neurons may be where the plasticity underlying the learning resides. However, the genetic tools to specifically manipulate plasticity in just these neurons and no others are not available at the moment (they are being created as I type this), so while all the evidence was pointing towards them, we couldn’t be quite sure yet. There may be other neurons involved that we just didn’t have on our radar yet.
There is another behavior that also involves these steering motor neurons, and that is the “optomotor response” (OMR). Almost all animals show OMRs to moving stimuli: if you sit in a car or train and look outside at the passing landscape, your eyes will show the typical rapid movements (nystagmus) when trying to stabilize the moving image on your retina. Other animals move not just their eyes but also their heads to stabilize the image; still others move their eyes, heads and sometimes also the rest of the body in response to, say, a rotating stimulus around them (or when they are rotated). Flies flying stationarily at a tether, presented with vertical stripes moving around them either towards the left (i.e., counterclockwise, if seen from above) or right (clockwise, from above), respectively, will try to follow the stripes and generate turning maneuvers in the same direction: the optomotor response. It is known that in order to execute these elicited turning maneuvers, there are neurons in the brains of the animals that send the information about the rotating stimuli directly to the steering motor neurons, bypassing all other neurons on the way. This means that they also bypass those neurons that are used to generate the spontaneous turning maneuvers we train the flies to perform in our motor learning paradigm.
Essentially, the only part of the nervous system where the OMR overlaps with the behaviors we train are these steering motor neurons. This means that if these steering motor neurons are modified by our motor learning, as all the evidence described above suggests, we should see some modulation of the OMR after motor learning. Our paper shows the data where exactly such a modulation can be observed. So with these data we were fairly convinced that the plasticity we had been searching for all these decades was indeed in the steering motor neurons. But we weren’t quite certain yet. What was missing? Because we had so far only observed the modulation of the OMR in wild-type animals, we didn’t know if the manipulations of FoxP and aPKC that abolish motor learning would also abolish the modulation of the OMR. Now we finally also have that piece of the puzzle!
We again used the CRISPR/Cas9 technique to knock out aPKC in FoxP-positive neurons. These flies were first tested for their OMR before training, then subjected to motor learning of either left or right turning maneuvers, respectively, and finally tested for their OMR again. The genetic control flies, which carried the same genetic constructs but in a dysfunctional way so that aPKC expression would not be affected, were subjected to the same OMR-ML-OMR regime. The initial OMR between both groups was identical. The motor learning phase showed the same deficit as we had observed with these flies in our paper:
The two control groups (blue and green) show a preference for the unpunished turning direction that is significant both in frequentist (Wilcoxon) and Bayesian (Bayes Factor, bf) statistics against zero (no preference), while the aPKC knock-outs (yellow) show no such preference. We replicated our finding that knocking-out aPKC in FoxP-neurons abolishes motor learning.
To compare the controls with the knock-outs, we pooled the controls and separately analyzed the OMR of those flies that were punished for left-turning maneuvers and those that were punished for right-turning maneuvers, respectively:
The control flies (above) show the modulation of the OMR that we had already seen in wild-type flies: animals punished for left turns showed a reduced OMR for left-turning stimuli, and vice versa for flies punished for right turns. Quantifying this asymmetry shows us that flies punished on left turns show an asymmetry shifted to the right (i.e., positive values), compared to flies punished on right turns, who are relatively more negative (i.e., shifted to the left):
In contrast, this quantification in the knock-outs looks a lot more symmetrical:
If one now takes this measure of asymmetry and changes the signs such that positive doesn’t mean “right turns” any more, but “unpunished turning direction”, then we can compare the OMR modulation directly to the trained turning preference in a single figure:
The pooled genetic controls (A) show both a significant OMR modulation (blue, left) and a significant turning preference (yellow, right), while the aPKC knock-outs (B) show neither.
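For readers who want to see the sign-flipping step spelled out, below is a minimal sketch of how such an alignment could be computed, with fabricated example numbers and hypothetical variable names. It is not the analysis code from the paper; only the logic reflects what is described above: flip the asymmetry score of right-punished flies so that positive always means "shifted towards the unpunished direction", then test the pooled scores against zero.

```python
# Sketch of aligning the OMR asymmetry score with the unpunished turning
# direction. Example numbers are fabricated; variable names are hypothetical.

import numpy as np
from scipy.stats import wilcoxon

# asymmetry > 0 means the OMR is biased towards right-turning stimuli
asymmetry = np.array([0.12, -0.05, 0.30, 0.08, -0.02, 0.21])
# True if the fly was punished on left turns (unpunished direction = right)
punished_left = np.array([True, False, True, True, False, True])

# After flipping, positive values mean "shifted towards the unpunished direction"
aligned = np.where(punished_left, asymmetry, -asymmetry)

# One-sample Wilcoxon test against zero (no modulation), analogous to the
# frequentist statistics mentioned above
print(aligned.mean(), wilcoxon(aligned))
```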
So with this data, we are now really quite sure that plasticity in steering motor neurons is one way in which this type of motor learning manifests itself in the nervous system.
Posted on July 26, 2024 at 12:27 | Whodathunk? Motor learning in motor neurons, huh?
We are looking for a PhD student interested in the functional, molecular and structural profile of neuronal circuits underlying learning, memory and behavior. In a 30-year research effort (lay summary, paper), we have recently identified a new gene (atypical PKC, aPKC) necessary for a form of motor learning in the fruit fly Drosophila and in which neurons it is required. The prospective PhD student will use molecular tools to identify potential interaction partners of aPKC and then use behavioral experiments as well as confocal microscopy techniques in combination with transgenics to validate and functionally characterize the role of the candidate genes.
The candidate:
The ideal candidate has a Master’s degree in a relevant field, experience in Drosophila husbandry and standard molecular cloning techniques, as well as some coding proficiency. A solid command of the English language is also important.
The position:
As is commonplace for Germany, this will be a three-year project, funded by a DFG 65% position, i.e., about 1,900€/month after tax and with full benefits, membership in our graduate schools and all the usual bells and whistles that come with such a position in Germany. There are no lectures to attend or rotations to adhere to – just 100% pure, unadulterated research fun. We will provide training in behavioral experiments, confocal microscopy and open science.
Our research:
Trial and error is a successful problem-solving strategy not only in humans but throughout evolution. How do nervous systems generate novel, creative trials and how are errors incorporated into already existing experiences in order to improve future trials? We use a variety of transgenic tools, mathematical analyses, connectomics and behavioral physiology to understand the neurobiology of spontaneous behavior, learning and adaptive behavioral choice.
Our lab:
We are an open science lab that prioritizes inclusion and diversity to achieve excellence in research and to foster an intellectual climate that is welcoming and nurturing. We are based at the University of Regensburg, an equal opportunity employer with over 20,000 students and more than 1,500 faculty, in Regensburg, Bavaria, Germany. Regensburg is an incredibly nice city with a high quality of life. Affordable, safe, cultural, civil, great local food, and close to other great cities like Prague or Munich.
Please send your application with your CV and a short, one page letter of motivation to my institutional address (bjoern.brembs@ur.de). Applications will be considered until the position is filled, but applications before June 1, 2024 will receive preferential treatment.
Posted on March 22, 2024 at 14:18 | We are looking for a PhD student
A few years ago, I came across a cartoon that seemed to capture a particular aspect of scholarly journal publishing quite well:
The academic journal publishing system sure feels all too often a bit like a sinking boat. There are many leaks, e.g.:
– a reproducibility leak
– an affordability leak
– a functionality leak
– a data leak
– a code leak
– an interoperability leak
– a discoverability leak
– a peer-review leak
– a long-term preservation leak
– a link rot leak
– an evaluation/assessment leak
– a data visualization leak
– etc.
A more recent leak that has sprung up is a papermill leak. What is a ‘papermill’? Papermills are organizations that churn out journal articles made to look superficially like research articles but that basically contain only words without content. How big of a problem are papermills for science?
In many fields it is becoming difficult to build up a cumulative approach to a subject, because we lack a solid foundation of trustworthy findings. And it’s getting worse and worse.
The article states that, at something on the order of 10,000 articles a year, the output of papermills poses a serious problem for science. These numbers most certainly are alarming! The article also cites Malcolm Macleod:
If, as a scientist, I want to check all the papers about a particular drug that might target cancers […], it is very hard for me to avoid those that are fabricated. […] We are facing a crisis.
OK, challenge accepted, let’s have a look at cancer research, where the reproducibility rate of non-papermill publications is just under 12%, so we’ll round it to that figure. PubMed lists about one million papers (excluding reviews) on cancer in the last 5 years:
If the sample result of 12% were representative, this would mean that the last 5 years in cancer research produced about 880,000 unreliable publications, or about 176,000 per year. And that’s just cancer. Let’s also pick psychology, where replication rates were published as 39% in 2015. 2015 is a long time ago and psychology as a field really went to great lengths to address the practices giving rise to these low rates. Therefore, let’s assume things got better in the last decade in psychology, so after 4-5 years, maybe 50% replication was achievable. Searching for psychology articles yields about 650,000 non-review articles in the last 5 years:
This amounts to about 65,000 unreliable psychology articles per year.
So according to these very (very!) rough estimates, just the two fields of cancer research and psychology together add more than a million unreliable articles to the literature every five years or so. Clearly, those are crude back-of-the-envelope estimates, but they should be sufficient to just get an idea about the orders of magnitude we are talking about.
If the numbers hold that about 2 million articles get published every year, these two fields alone would account for more than 10% of all articles being unreliable. Other major reproducibility projects in the social sciences and economics yield reproducibility rates in these fields of about 60%. Compare these numbers to a worst case scenario of all papermills together producing some 10k unreliable articles a year. If the scholarly literature really were a sinking boat, fighting papermills would be like trying to empty the boat with a thimble, or to plug the smallest hole with a cork.
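For anyone who wants to check the arithmetic, here is a minimal sketch in Python that reproduces these back-of-the-envelope numbers; all inputs are simply the rough figures quoted above, not new data:

```python
# Back-of-the-envelope estimate of unreliable articles per year,
# using only the rough figures quoted in the text above.
fields = {
    # field: (articles in the last 5 years, assumed replication rate)
    "cancer research": (1_000_000, 0.12),
    "psychology": (650_000, 0.50),
}

total_per_year = 0.0
for field, (articles_5y, replication_rate) in fields.items():
    unreliable_per_year = articles_5y * (1 - replication_rate) / 5
    total_per_year += unreliable_per_year
    print(f"{field}: ~{unreliable_per_year:,.0f} unreliable articles per year")

print(f"both fields combined: ~{total_per_year:,.0f} per year")
print(f"share of ~2 million articles per year: {total_per_year / 2_000_000:.0%}")
print(f"worst-case papermill output (10,000/year) relative to that: "
      f"{10_000 / total_per_year:.1%}")
```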
It was my freshman year, 1991. I was enthusiastic to finally be learning about biology, after being forced to waste a year in the German army’s compulsory service at the time. Little did I know that it was the same year a research paper was published that would guide the direction of my career to this day, more than 30 years later. Many of the links in this post will go to old web pages I created while learning about this research.
The paper in question contained two experiments that seemed similar at first but later proved dramatically different. The first one was conceptually most simple: a single Drosophila fruit fly, tethered to a torque meter that measures the force the fly exerts as it attempts to rotate around its vertical body axis (i.e., trying to turn left or right), controls a punishing heat source. For instance, attempting to turn to the left switches the heat on, and attempting to turn to the right switches the heat off. There is a video describing the way this experiment was set up at the time.
In the paper, my later mentors Reinhard Wolf and Martin Heisenberg described how the flies learn to switch the heat on and off themselves and how they, even after the heat is permanently switched off, maintain a tendency to prefer the turning directions that were not associated with the heat. The “yaw torque learning” experiment was born. Quite obviously, yaw torque learning is an operant conditioning paradigm, as the fly is in full control of the heat.
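For readers who prefer code to prose, here is a minimal sketch of the closed-loop contingency described above. The function names (read_torque, set_heat), the punished side and the timing are illustrative stand-ins, not the original setup software:

```python
import time

def yaw_torque_training(read_torque, set_heat, punished_side="left",
                        duration_s=480, dt=0.05):
    """Closed loop: heat is on whenever the fly attempts to turn in the punished direction.

    read_torque() and set_heat() are hypothetical stand-ins for the torque
    meter and the heat source; all values and timing are illustrative only.
    """
    t_end = time.time() + duration_s
    while time.time() < t_end:
        torque = read_torque()   # negative: left-turning attempt, positive: right
        turning_left = torque < 0
        punish = turning_left if punished_side == "left" else not turning_left
        set_heat(punish)         # the heat follows the fly's own behavior
        time.sleep(dt)
```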
In another experiment in the same paper, the flies control the angular position of a set of black patterns around them with their turning attempts, pretty much like in a flight simulator (also described in the video above): whenever the fly attempts to turn, say, left, the computer rotates the patterns to the right, giving the fly the visual impression of actually having turned to the left. There are two pairs of alternating patterns arranged around the fly and one set is associated with the same punishing heat beam as in yaw torque learning, such that the fly can learn to avoid flight directions towards these patterns in this “visual pattern learning” experiment.
Like yaw torque learning, visual pattern learning appears to be an operant experiment, as the fly controls all stimuli. However, this conclusion may be premature, as the flies may just learn that one of the patterns is associated with heat, just as the Pavlovian dog learns that the bell is associated with food. Wolf and Heisenberg addressed this question by recording the sequence of pattern rotations and heat applications from one set of flies and playing it back to a set of naive flies. If the Pavlovian association of patterns with heat in the “replay” (or ‘yoked’) control experiment alone were sufficient to induce the conditioned pattern preference, the operant behavior of the flies would just be a by-product of an essentially Pavlovian experiment. However, there was no preference for a pattern in the “replay” flies, so visual pattern learning in the Drosophila Flight Simulator is still an operant experiment at its core – despite the conspicuous ‘contamination’ with a Pavlovian pattern-heat contingency.
In the course of my early studies, I was entirely oblivious of this research, until in 1994 I took a course in Drosophila Neurogenetics where I learned about these two experiments. I remembered Pavlov’s classical conditioning experiments from high school, as well as operant conditioning in Skinner boxes. Both Pavlov and Skinner having been dead for some time, I thought the biological differences between operant and classical conditioning must be well known by 1994, so I asked Reinhard Wolf during the course if he could explain to us the biological differences between operant and classical conditioning: what genes were involved, and in what neurons? To my surprise, he answered that nobody knew. He said there was some genetic data on classical conditioning and some neurons in a brain area called the “mushroom bodies”, but for operant learning, nobody knew any biological process: no genes, no neurons, nothing. I was hooked! One “nobody knows” reply to a naive undergraduate question was all it took to get me set up for life. I felt that this was a fascinating research question! What are the neurobiological mechanisms of operant learning and are they any different from those of classical learning?
The following year I started working towards answering this question in my Diploma thesis (Master’s thesis equivalent). It seemed to me that to be able to tackle that question, I first needed to understand what “the operant” actually was, so I could study its mechanisms. To get closer to such an understanding, I collected a large dataset of 100 flies in each one of four experimental groups: One group was trained in visual pattern learning as described above. Another group was trained in a Pavlovian way: the patterns were slowly rotated around the fly, so that each pattern took three seconds to pass in front of it, with the heat on during the three seconds one pattern was in front of the fly and off during the three seconds the other pattern was passing. The two remaining groups received the same treatment as the first two groups, but without any heat. Both the classical and the operant experiment were set up such that the pattern preference after training, tested without heat, was of about the same magnitude. To achieve this, the classical experiment had to be set up such that the flies received multiples of the amount of heat that the flies in the operant experiment would receive (i.e., the 3s heat-on/3s heat-off procedure). I wondered why that had to be this way. Why did operant training require much less heat than classical? I hypothesized that the operant flies may learn specific behaviors that get them out of the heat quickly or that enable them to avoid the ‘hot’ patterns more efficiently. To test this hypothesis, I fine-combed the behavior of the flies with a myriad of different analyses I coded in Turbo Pascal – to no avail. I could not find any differences in the behavior of the flies that would explain why the operant flies needed so much less heat than the classical ones. Despite there being two differences between the classical and the operant setups, i.e., more heat and no operant control in the classical experiments, there didn’t seem to be any major difference in the animals’ behavior. Obviously, I may just have missed the right behavioral strategy, but lacking any further ideas of where or how to look for it, I cautiously concluded that the operant flies may somehow learn their conditioned pattern preference more efficiently when they are allowed to explore the patterns with their behavior, as opposed to slower learning when the patterns were just passively perceived – some kind of “learning by doing”, maybe? Heisenberg’s pithy comment from those days is still stuck in my mind: “This is a genuine result, but at the same time, the world is full of non-Elephants.”
Despite the negative results, I enjoyed this type of research enough and found the research question so exciting that I wanted to continue in this direction. I decided to do my PhD in the same lab with Martin Heisenberg as my advisor and I was lucky he had a position available for me, so I started right after I had handed in my Diploma thesis in 1996.
In my ensuing PhD thesis, I tried to further come to grips with the fact that operant visual learning seemed to be so much more efficient than classical visual learning. My first approach was to eliminate one of the two differences in my previous Diploma work, the amount of heat the classically trained animals received. I wanted the only difference between the experiments to be “the operant”, i.e., the operant control over the stimuli. I started by turning towards the “replay” experiment, where the flies passively perceive the same pattern/heat sequence that was sufficient for the active, operant flies to learn their conditioned pattern preference. But in this experiment, the passive, “replay” flies (i.e., the classically trained ones) did not show a preference, so I couldn’t really compare them with the operant flies that did show a preference. Why did the “replay” flies not learn? After all, the patterns were associated with the heat, and this association was sufficient for the operant flies to learn. It turned out that by doubling the “replay” training, the “replay” flies started to show some preference, but a much weaker one than the operant flies. In this experiment, the only difference between the two groups of flies is the operant control; everything else is exactly identical. Together with the data from my Diploma thesis, this prompted the hypothesis that the animals may really just be learning that one pattern is “bad” (i.e., associated with heat) and the other “good”, irrespective of whether the animals learned this operantly or classically. The only difference between the operant and the classical experiment seemed to be that operant training was much more effective than classical training; in all other aspects, there didn’t seem to be a difference between operant and classical learning. Could it be that at the heart of operant visual learning lies just a genuinely Pavlovian association between pattern and heat?
One of the hallmarks of a truly Pavlovian preference is that classically conditioned animals are able to express their preference for, in our case, the ‘cold’ pattern with any behavior, e.g., they should approach the ‘cold’ pattern and avoid the ‘hot’ one whether they are, say, walking or flying. After much fiddling around with the setup (with the help of the mechanical and electronics workshops!), it turned out that to test this hypothesis, for technical reasons, I needed to combine the yaw torque experiment with the pattern learning experiment and replace the patterns with colors. The outcome was an experiment in which the flies controlled both colors and heat via their left/right choices, e.g., left turning yielded blue color and heat off, while right turning yielded green color and heat on. “Switch-mode learning” was the informal name for this procedure. Very long story short: it turned out that there really is a genuinely Pavlovian association formed in such switch-mode learning. Flies that learn that, e.g., green is good and blue is bad can avoid the bad color and prefer the good color with a different behavior than the one they used to operantly control the colors. This means that there may be a fundamental difference between operant yaw torque learning and operant visual learning: In the operant visual experiment, the operant behavior is important, but it does not play any role in what is being learned, only in how it is learned. It doesn’t seem to enter into any association at all. Instead, it just seems to facilitate the formation of an essentially Pavlovian stimulus-stimulus association. In contrast, in the yaw torque learning experiment, there isn’t anything else that the animals could possibly learn but their own behavior. In a way, yaw torque learning is a ‘pure’ form of operant learning, while operant visual (i.e., pattern/color) learning is ‘contaminated’ with a very dominant Pavlovian component. Both are operant experiments, but what the flies learn in them looked likely to be dramatically different. Would that difference also affect the biology underlying these learning processes?
In the light of these results, I concluded my thesis work with a study on some classic (pardon the pun) Pavlovian phenomena. I had mixed feelings towards my achievements so far. On the one hand it felt like I got a bit closer to understanding the commonalities and differences between operant and classical learning, but I certainly hadn’t been able to find any genes or neurons involved in operant learning. Some reviewers of the work emphasized this shortcoming.
After graduating in the year 2000, I moved for my postdoc to Houston, Texas to study another form of ‘pure’ operant conditioning in an animal where we could get access more easily to the neurons that are involved in the learning process, the marine slug Aplysia. There I learned how the biochemical properties of a neuron, important for deciding which behavior will be generated, change during operant learning and how this leads to more of the rewarded behavior. This work was like a booster to my initial curiosity about the biological mechanisms of operant learning. More than ever before I felt that it now was high time for new approaches to discover which genes work in which neurons in Drosophila operant learning.
As mentioned above, for Drosophila classical conditioning, several learning mutants had been isolated decades earlier. After moving from Texas to Berlin in 2004, we tested them in our “switch-mode” operant color learning experiment. Consistent with the idea developed in my PhD work that such operant visual learning is essentially just an operantly facilitated Pavlovian learning task, some of these mutants were also defective in operant visual learning – likely because the Pavlovian ‘contamination’ is so dominant, as the previous experiments had suggested. How would these mutants fare if we took the colors away, such that the ‘contaminated’ “switch-mode” experiment became ‘purely operant’ yaw torque learning? To my surprise, the mutant flies did really well! The first of these experiments weren’t actually done by me, even though they had been on my to-do list for years by then, but by an undergraduate student – I only replicated them a few months later. These results made it unambiguously clear that there really was more than a conceptual difference between operant yaw torque learning and operant stimulus learning: there was a very solid biological difference. If there was a stimulus to be learned, i.e., a “Pavlovian contamination”, then Pavlovian learning genes were involved, but once that contamination was removed, Pavlovian learning mutants did just fine in the resulting ‘pure’ operant learning task. While this work was done with the mutant rutabaga, which affects a cAMP-synthesizing enzyme, the results from a different gene were even more surprising: flies in which the function of the protein kinases of the “C” type (protein kinase C, or PKC) was inhibited behaved in exactly the opposite way: they did fine in visual learning but failed in the ‘purely operant’ yaw torque learning task. This work took four years, and in 2008 we published that we had found a gene family specific for operant learning, PKC.
So by that time, some 14 years later, I had a first answer to the initial question that got me started in the first place: there is a biological difference between operant and classical learning, and you only see it if you remove all “classical contamination” from the operant experiment. Now that we had a gene family, what we needed next was one or more (as many as possible, really) individual genes and the neurons they are expressed in. It turned out to be quite difficult to find out which of the six PKC genes in Drosophila is the one important for yaw torque learning. Julien Colomb, a postdoc who had joined me in Berlin, used both mutants and RNAi approaches, but was only able to rule some of the PKCs out; he did not find the crucial one. Things looked a bit better on the front where we tried to identify the neurons: whichever PKC it was, it was apparently important in motor neurons. That may not sound so odd; after all, we are conditioning a behavior, and motor neurons control the muscles for the behaviors. But these motor neurons were located in the ventral nerve cord (the “spine” of the insects), and we had thought operant conditioning was something that needed to involve the brain. So while the results were rather clear, I nevertheless didn’t value them sufficiently, as I was convinced the brain was more important than the ventral nerve cord, no matter what these experiments tried to tell me. There probably was a good explanation for these results, I thought then, once we found out what was really happening in the brain. The results were what they were, and we published them in 2016.
While these experiments were ongoing, another candidate gene had appeared on the horizon, the transcription factor FoxP. The background for this candidate goes back to 1957 and the book “Verbal Behavior” by BF Skinner. In the book, Skinner claimed that the way we learn language looked an awful lot like operant learning: trying out phonemes until auditory feedback tells our vocal system that we have indeed said the words we wanted to say. This seemed like rats trying out behaviors in a Skinner box until they have found out how to press the lever for food. Or, I thought, like flies trying out behaviors until they have found out how to control the heat with their yaw torque. While these all really did seem to look very analogous on the face of it, this view was shredded as early as 1959, in a book review by Noam Chomsky, then a young scholar. But this critique not only helped catapult Chomsky to fame, it was also one of the starting points of what was later called the “Cognitive Revolution”. One of Chomsky’s most hard-hitting arguments was that the analogy was simply a false analogy and that Skinner had not provided any real evidence at all. A few years before his death, Skinner acknowledged the criticism and agreed he did not have any evidence other than the observed superficial parallels. In 2001, a gene was identified that would bring this classic academic feud back into public focus. A mutation in the human transcription factor FOXP2 was identified as the cause of a very specific speech and articulation impairment, verbal dyspraxia. A few years later, one of my colleagues in Berlin, Constance Scharff, and her team knocked down the same gene in their zebra finches and got an analogous phenotype: the birds had trouble learning their song. By then I was electrified! If Skinner had been right and Chomsky wrong, then there was a chance that FoxP really may be an operant learning gene. In 2007, I asked Troy Zars (who died way too soon some years later) at a meeting if he knew whether flies had a FoxP gene at all. It turned out he already had a set of mutants in his lab and was willing to collaborate with us on this project. Within a few weeks of the mutants arriving, the data started to emerge that we had finally found a single gene involved in yaw torque learning – and it couldn’t have been a more Skinnerian gene! In operant visual learning, the mutants did fine, just like the flies with the inhibited PKCs. After some additional experiments to make sure the results were solid, the work was published in 2014. It really started to look as if yaw torque learning shared more than just a conceptual similarity with vocal learning in vertebrates. It now appears the biological process underlying these forms of learning evolved in the last common ancestor of vertebrates and insects, some 500 million years ago.
Our next experiments started to capitalize on this discovery. In 2012, I had become a professor in Regensburg, and a few years later, one of my grants on this research question got funded. So we hired two PhD students to help me. One of them, Ottavia Palazzo, created a suite of genetic tools for manipulating the FoxP gene and the neurons where it was expressed. Among other things, it turned out that FoxP was also expressed in motor neurons in the ventral nerve cord, the neurons where PKC was required for yaw torque learning. The work started in late 2017, and in the following year a sabotage case, in which a (likely mentally ill) postdoc in the lab kept damaging (and eventually destroyed) some of our equipment, brought most of these experiments on operant learning to a screeching halt. Soon after we had unmasked and fired the saboteur (which took the better part of another year), the Covid-19 pandemic started, making everything even tougher on everyone. We managed to publish the work we had done with FoxP in December 2020. By that time, the gene scissors CRISPR/Cas9 had become really useful for genetic manipulations. Ottavia Palazzo had already used them to create the FoxP tools, and the second graduate student on the grant, Andreas Ehweiner, now “CRISPRed” some PKC genes we hadn’t been able to properly test yet, and struck gold. It turned out that the atypical PKC (aPKC) was the PKC gene we had been looking for all these years! If you knocked aPKC out in FoxP neurons, yaw torque learning was severely impaired. So now we had two individual genes involved in operant learning.
This was a strong indication that FoxP and aPKC may act in the same neurons for yaw torque learning to work. A quick analysis of the expression patterns of the two genes suggested overlap in a specific set of motor neurons in the ventral nerve cord, namely the ones that control the angles of the wings to, e.g., create left- or right-turning yaw torque, the “steering motor neurons”. At the same time, we could not find any overlap between aPKC and FoxP in the brain at all, suggesting that the neurons we were looking for definitely did not reside inside the brain. Some other of Andreas’ experiments also seemed to confirm that. Now all of the evidence we had pointed away from the brain and to the steering motor neurons in the ventral nerve cord. So was it really them? To quantify which of these neurons actually expressed both aPKC and FoxP, our collaborator and motor system expert Carsten Duch in Mainz painstakingly dissected all the tiny muscles that are attached to the wing hinges and then analyzed which of the individual motor neuron terminals on the muscles showed the markers for both genes. He discovered that it really was just a very circumscribed subset of the steering motor neurons that expressed both genes, and not all of them. Specifically, the neurons involved in generating the slow torque fluctuations we were conditioning in our yaw torque learning experiment were the ones expressing both genes, while those involved in, e.g., fast body saccades, thrust or roll did not express both genes. All of this pointed to these specific neurons, but so far, it was only circumstantial evidence. What we needed to do now was to find some way to check whether the aPKC/FoxP-dependent plasticity, as we suspected, was really taking place in these steering neurons and not, by some bad luck, in some neurons we hadn’t had on our radar. It was 2023, and genetic lines that would allow us to target just the specific steering motor neurons for slow torque behavior and none of the others were still in the process of being generated, so there wasn’t a perfect way to do a genetic experiment just yet.
So we tried to come up with some other way to get more data on whether all the evidence that was pointing towards these neurons really was sufficient, or if there was some alternative explanation for our data that we hadn’t thought of. It hadn’t escaped our notice that the function of the motor neurons we were eyeing had been described in the context of optomotor responses. The optomotor response is an orienting behavior evoked by whole-field visual motion. Its algorithmic properties entail that the direction of the whole-field coherent motion dictates the direction of the behavioral output (e.g., leftward visual stimuli lead to turning left, and rightward visual stimuli lead to turning right). The currently available evidence suggests that this visual information, in flies, gets transmitted directly from the brain to the steering motor neurons via a set of descending neurons in the brain. With the brain being ruled out as a site of aPKC/FoxP-dependent plasticity, the steering neurons were the only conceivable overlap between optomotor responses and yaw torque learning. This meant that if yaw torque learning altered the steering neurons in some way, we may be able to detect this plasticity by some change in the optomotor response of the trained flies. Or, phrased differently, if we were to detect effects of yaw torque learning in the optomotor response after operant training, this would be very strong evidence that the plasticity we were looking for indeed was taking place in the steering motor neurons.
We tested this hypothesis by comparing optomotor responses before and after yaw torque learning. We found that the optomotor responses to visual stimuli that elicit responses in the turning direction previously not associated with the heat did not change. However, responses to stimuli that elicit optomotor responses in the punished direction were selectively reduced. The figure below shows the torque traces after training, separated into flies that were punished on left-turning (yellow) and flies that were punished on right-turning (green). Reference (untrained) amplitudes are just over 500 units of torque, as one can estimate from the respective unpunished directions in each group. The torque amplitude in the punished direction is always reduced, i.e., right (clockwise) rotations elicit reduced torque in flies punished for right-turning attempts (green), compared to the same responses in flies punished for left-turning attempts (yellow), and vice versa for counter-clockwise rotations.
These results showed that the plasticity does indeed seem to happen in the motor neurons that innervate the steering muscles generating large torque fluctuations – so pretty much at the very last stage. The changes in optomotor responses after torque learning only explain about 30% of the variance in the torque data. This means that there are flies that show, e.g., a strong preference for the unpunished turning direction but only a weak reduction in optomotor response on that side, as well as flies that, say, show a strong reduction in optomotor response but only a weak preference for the unpunished turning direction. Very likely there are additional mechanisms at play, but, at least for now, these mechanisms do not seem to depend on aPKC/FoxP. As of now, this seems difficult to reconcile with the observation that knocking out either aPKC or FoxP abolishes yaw torque learning to undetectable levels, but this is what future research is for.
Today, almost exactly 30 years after a seemingly simple question I asked as an undergraduate, we are finally in the process of localizing mechanisms of plasticity for “Skinnerian”, operant learning to specific neurons and specific genes. Now we can finally begin to really exploit the vast toolbox of Drosophila neurogenetics on a much larger scale than before to find the remaining parts of the puzzle: Which other genes are involved in this pathway? What are the biochemical and physiological changes, and in which parts of the neurons, that give rise to the behavioral changes? How does yaw torque learning interact with visual learning to make it happen faster – aka the “learning by doing” effect? Why do “classical” learning mutants learn yaw torque better? How is this “Skinnerian” learning regulated, such that it is faster under some conditions (like when there are no colors present, i.e., pure) and slower under others (i.e., when the experiment is “contaminated” with Pavlovian colors)?
On the one hand, it seems the future has now become more boring, because it is so clear what experiments have to come next. In the past, I never knew what experiments to do until the current experiment was done. Too much was unknown, too few conceptual principles understood. Now, one just needs to open the Drosophila toolbox and there are enough experiments jumping at you for the next decade. On the other hand, the future has never been more exciting: before, I never felt that the future promised any major advances – it was all too uncertain. Now that we have the first genes and the first neurons, it feels like the sky is the limit and that the next research questions will be answered dramatically more quickly than ever before.
Only now, looking back 30 years after having started from essentially zero, is it dawning on me that, building on the early behavioral experiments of Martin Heisenberg, we have managed to open up a tiny little research field with huge potential. The genes we found indicate that the biological process we study is at least 500 million years old and present in most animals, including humans. It appears to be involved not only in language learning, but also, more generally, in many other forms of motor learning. And there are few other preparations anywhere in which these processes can be studied both in such splendid isolation and in their interaction with other learning processes, as in, e.g., habit formation. The only problem is, I likely won’t have another 30 years to capitalize on the efforts of the last 30: mandatory retirement in Germany will hit me in just 15 years. Human lives are surely too short for the speed of this kind of research.
Posted on January 11, 2024, at 15:55: "The speed (or lack thereof) of science"
You may have seen a neutered version of this post over at the LSE blog. This post below, however, puts the tiger in the tank, as it was enhanced by CatGPT:
Maybe scholarly societies have taken the instruction “follow the money!” a tad too literally? There are now societies that make 83% of their nearly US$ 700 million in revenue from publishing (American Chemical Society). Or 88% of US$ 130 million (American Psychological Association). Or 91% of US$ 5 million (Biochemical Society). In essence, societies like these (there are hundreds, especially in STEM fields) are publishers first and societies second (or fifth). One could be forgiven for imagining their business meetings involve chanting “Publish or Perish” while stacking green taller than a Himalayan cat tower. But wait, there’s more! Some of these organizations even side with corporate publishers against scholarship, e.g., when litigating against organizations or individuals striving to make research more accessible, or when begging wannabe-authoritarian rulers to protect their archaic, parasitic business models. Can it still be considered ethical to charge multiples of the publication costs of an article in order to finance executive salaries, subsidize member dues, sponsor prizes, host all-you-can-drink receptions at annual meetings or pay lawyers to ensure nobody can read the works of your scholars? Who needs scholarly integrity when you can have lucrative deals and lawyers on speed dial?
This cat-astrophic prioritization becomes even more absurd if one researches the role such societies have played in purr-suing their primary mission as ‘societies’: supporting scholars in making connections to like-minded individuals, exchanging ideas and promoting their respective fields of scholarly interest – in short ‘socializing’. For some of these ‘societies’ their mission apparently involves as much of such scholarly socializing as a hermit cat on a deserted island. There is a reason these organizations were called “societies” before they became publishers. The root of their names contains their essential function, as described in 1660 for one of the first such societies, the Royal Society: “Their first purpose was no more, then onely the satisfaction of breathing a freer air, and of conversing in quiet one with another, without being ingag’d in the passions, and madness of that dismal Age”. And isn’t it ironic that these very societies, born in an era of intellectual enlightenment, seem to have missed the memo about social media’s advent over 15 years ago? Were they chasing cash like a cat the laser dot or were they too busy debating the financial advantages of ink and parchment versus parchment and ink?
Is it possible that one reason these scholarly societies missed the social media boat is that their noses were buried too deep in financial spreadsheets to realize that a technology was in the making that not only was about to transform the way their mission could be supported, but even shared the root of their names? Shouldn’t these bastions of scholarship, if they truly cared, have embraced FriendFeed or Facebook in their kittenhood back then? But why stir the litterbox when there’s a chance it might disrupt the cash flow? Maybe many felt the threat such #icanhazpdf-technology might pose to their bottom line so acutely that they failed to envisage the opportunities it provided for their members? Was one reason why there was no huge movement from within the scholarly societies to get involved in the development of a technology so central to the raison d’être of societies that not enough of them actually cared sufficiently about scholarship any more? Each scholarly society is different, and many have, more or less belatedly, embraced social technologies in one way or another by now. However, it appears as if this engagement has only rarely exceeded the use of corporate platforms as broadcasting tools, rather than as a social technology that encourages, promotes and protects social interactions among scholars and with the general public.
Today, we have technology that allows scholarly societies to make good on past mistakes and show their true colors: the ‘fediverse’ provides tools and technologies that are ideally suited to finally bring scholarly societies out of their digital caves and into the 21st century. One of these is Mastodon, a decentralized social technology. While some scholarly institutions, including some societies, have started to implement their own Mastodon instances, the large majority still appear as bewildered as a cat presented with a Rubik’s cube, struggling with their favorite corporate broadcasting platform formerly known as Twitter now having devolved into a racist misinformation cesspit.
Scholarly societies that take their mission and role for scholarship seriously have developed a keen understanding of social technologies and are using them not just for broadcasting but for scholarly exchange and to facilitate social interactions such as debate, discussion and critique among all persons interested in their research, not just their dues-paying members. The different local and federated timelines in Mastodon allow seamless interactions both within the society and outside of it; federation choices enable societies to choose which content purr-fectly matches their instance, and they become the moderators of their own social media presence instead of having to rely on the whims of billionaires. Where are the societies that see the opportunity in giving, e.g., marginalized groups within scholarship a voice in a town square protected by scholarly rules? Rather than being relegated to obedient mice for AdTech-based surveillance platforms, societies now have the opportunity (again!) to become the designers of a new kind of digital scholarship while at the same time helping to protect the privacy of scholars. Due to the open source nature of the Fediverse and the widespread digital competence in the scholarly community, there is ample potential for societies to take a central role in developing a new scholarly commons and to integrate this social layer into the more formal literature as part of the “open, interoperable, not-for-profit infrastructures” the EU Council of science ministers has recently called for.
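To illustrate how low the technical barrier is, here is a minimal sketch of a society account posting an announcement to its own instance via Mastodon’s standard REST API (POST /api/v1/statuses); the instance URL and access token below are placeholders, not a real account:

```python
import requests

INSTANCE = "https://social.example-society.org"  # placeholder instance run by the society
TOKEN = "YOUR_ACCESS_TOKEN"                      # generated in the account's development settings

response = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "status": "Our next journal club is open to everyone - join the discussion!",
        "visibility": "public",
    },
    timeout=10,
)
response.raise_for_status()
print("Posted:", response.json()["url"])  # the API returns the new status, including its URL
```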
Of course, their handling of social technology is just a litmus test for how seriously a learned society takes its role in our modern world and what perspective it has adopted with regard to scholarship more generally. It appears as if scholarly societies that are still genuinely interested in pursuing their core mission are as elusive as finding Schrödinger’s cat both inside and outside its box simultaneously. Instead, the majority seem more concerned with securing and protecting sufficient publication income to maintain five- or six-figure salaries for their execs.
So, to the scholarly societies out there, here’s a challenge: step up, embrace Mastodon (and any of the other cool fediverse options like peertube, owncast, writefreely, hubzilla, etc.), and give those faux-societies a run for their money. Show us you’re all about scholarship, not just financial catnip!
The DFG is a very progressive and modern funding agency. More than two years ago, the main German science funding agency signed the “Declaration on Research Assessment” (DORA). The first point of this declaration reads: “Do not use journal-based metrics […] as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.” Last year, the DFG joined the “Coalition for Advancing Research Assessment” (CoARA) and sits on its Board. The CoARA principles also emphasize: “Abandon inappropriate uses in research assessment of journal- and publication-based metrics”. In their position paper from last year, the DFG states in two places in the executive summary:
the assessment of research based on bibliometrics can provide problematic incentives
and
A narrow focus in the system of attributing academic reputation – for example based on bibliometric indicators – not only has a detrimental effect on publication behaviour, it also fails to do justice to scholarship in all its diversity.
In a world in which impact factors and other bibliometric measures still reign supreme, these are laudable policies that set the DFG apart from other institutions. In fact, these steps are part and parcel of an organization with a long tradition of leveraging its power for good scholarly practices. Even before DORA and CoARA, the DFG had continually evolved its policies to minimize the effect of publication venue on the assessment of applicants.
Given this long and consistent track record, now complemented by two major official statements, one could be forgiven for thinking that applicants for funding at the DFG can now feel assured that they will no longer be judged by their publication venues. After all, journal prestige is correlated with experimental unreliability, so using it as an indicator clearly constitutes “inappropriate use of journal-based metrics”. With all this history, it came as a shock to many when, earlier this year, one of the DFG panels deciding which grant proposals get funded published an article in the German LaborJournal magazine that seemed to turn the DFG’s long, hard work in this area on its head. The panel starts by making the following statements (my translation of sentences 2-4):
Our panel evaluates grant proposals in all their dimensions and one such dimension is the qualification of the applicant by their publication record. In a changing [publication] landscape, it is not easy to choose the right journal for the publication of research results. In this regard, we would like to share some thoughts, such that your publication strategy may match the expectations of decision-making panels.
It seems obvious that in this order, these sentences send this message:
– We decide grants by looking at publication records.
– You had better follow our guidelines on where to publish if you want to meet our expectations (and those of other panels) and get funded.
Which is pretty much the opposite of what DORA and CoARA are all about and of what the DFG says in its own position paper. Or, phrased differently, if this article were compatible with DORA, CoARA and the DFG position paper, none of the three could be taken seriously any more. At the time of this writing, the DFG has not responded publicly, neither to personal alerts about the article nor to DORA and CoARA, which have also contacted them. At the very least, the DFG does not see the article as worrisome enough for a swift response – and this is troubling.
One less worrisome explanation for this slow reaction could be that this is just one panel in a large institution and the DFG has more important things to worry about. After all, in both their position paper and their press release, they support the proposal of the EU Council of science ministers to fund an open scholarly infrastructure instead of monopolistic publishers, i.e., the plan is to have this infrastructure replace academic journals. This is obviously a major undertaking, and one that also requires research assessment to change. So maybe, with limited resources, the DFG is prioritizing the larger goal over mere research assessment? This explanation does not seem very likely: last week, the news broke that the DFG plans to join the recent DEAL agreement with the data analytics corporation Elsevier. Similar to the article by the panel, this contract also embodies pretty much the opposite of the DFG’s own stated goals, in this case that of establishing “open access infrastructures located at research organisations that operate without publication fees payable by authors and are not operated for profit.” The many reasons why entering this contract is a mistake have already been listed elsewhere. Here, the important aspect is that this decision would already be the second time this year that DFG practice is in direct opposition to DFG policy.
What could possibly be the reason for this sudden and very recent inconsistency between officially stated policies and actual practice at the largest German funder after so many years of very consistent development? Apparently, it was the DFG’s Board, led by president Katja Becker (since 2020), that forced the decision to join the Elsevier DEAL over the objections of expert committees. What could possibly have motivated the Board to side with the corporate giant Elsevier against not only their own scholarly expertise, but also their own public policies?
Both DORA and CoARA, if widely implemented, would weaken the stranglehold corporate publishers exert over public knowledge and, as such, would help pave the way for a publicly owned, not-for-profit scholarly infrastructure for said knowledge. With this overarching goal in mind, the public statements by the DFG are internally consistent over many years and appear competent, logical, coordinated and scholar-led. In this regard, the DFG sets an example for funding agencies world-wide. In fact, the EU-supported CoARA was, in part, designed to help support the plan by the EU science ministers for a scholarly infrastructure avoiding corporate vendor lock-in. All of this constitutes a paragon of evidence-based policy-making and stands to negatively affect current corporate publishers. As such, it is safe to assume that these corporate publishers interpret these public policies of the DFG as another threat to their parasitic business model. From this perspective, both the admonition to publish in “good” journals (read: in journals predominantly published by corporate publishers) and the decision to join the Elsevier DEAL are decidedly publisher-friendly, propping up the status quo and delaying any modernization. Why would the DFG Board, then, act in such a publisher-friendly way, when the DFG’s public policies are anything but?
Nobody but the Board members themselves can know this, of course. However, in the last few days, individuals with personal knowledge have been privately pointing to connections between the DFG president, Katja Becker, and not only Stefan von Holtzbrinck (whose company owns not only SpringerNature but also DigitalScience), but also Elsevier CEO Kumsal Bayazit. Given the DFG’s public policies, one would suppose that both would voice strong opposition to them when given a chance to speak in private with the president of the DFG. Or, alternatively, has the DFG become such a big and unwieldy organization that the left hand simply no longer knows what the right hand is doing? Whatever the reason, this sudden inconsistency is troubling and the potential consequences pernicious. Many applicants, at least, would probably sleep better if this inconsistency were resolved.
Posted on November 29, 2023, at 13:32: "German funder DFG: Why the sudden inconsistency?"