bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Chance in animate nature, day 2

In: science • Tags: chance, Drosophila, free will

While the first day (day 2, day 3) was dominated by philosophy, mathematics and other abstract discussions of chance, this day of our symposium started with a distinct biological focus.

Martin Heisenberg, Chance in brain and behavior

The first speaker on this second day of the symposium on the role of chance in the living world was my thesis supervisor and mentor, Martin Heisenberg. Even if he did not have a massive body of his own work to contribute to this topic, being the youngest son of Werner Heisenberg of uncertainty-principle fame would already make his presence interesting from a science-history perspective. In his talk, he presented many examples from the fruit fly Drosophila demonstrating how the fly spontaneously chooses between different options, both in terms of behavior and in terms of visual attention. Central to his talk was the concept of outcome expectations in the organization of adaptive behavioral choice. Much of this work is published and can easily be found, so I won't go into detail here.

Then came my own talk, a somewhat adjusted version of my presentation on the organization of behavior, in which I provide evidence for how even invertebrate brains generate autonomy and liberate themselves from the environment:

Friedel Reischies, Limited Indeterminism – amplification of physically stochastic events

The third speaker this morning was Friedel Reischies, a psychiatrist from Berlin. After introducing some general aspects of brain function, he discussed various aspects of the control of behavioral variability. He also talked about the concept of self and how we attribute agency to our actions, citing D. Wegner. Referring to individual psychiatric cases, he talked about different aspects of freedom and how these cases differentially impinge on them. The central theme of his talk was the variability of nervous systems and behavior, and its control.

The discussion session after these first three talks revolved quite productively around intentionality, decision-making, free will and the concept of self.

Wolfgang Lenzen: Does freedom need chance?

The next speaker for this day was a philosopher, Wolfgang Lenzen. As behooves a philosopher, he started out with an attempt to define the terms chance, possibility, necessity and contingency, as well as some of their variants. Here, as yesterday, the principle of sufficient reason reared its head again. He then went back to Cicero and Augustine to exemplify the problem of free will with respect to determinism and causality. Later, the determinist Hume was cited as the first compatibilist, allowing for an exception to determinism in the context of the will. Lenzen then described Schopenhauer as a determinist. Given the dominance of classical Newtonian mechanics, the determinism of the philosophers of that time is not surprising. The now dominant insights from relativity and quantum mechanics had a clear effect on more recent philosophers. Lenzen then cited Schlick, who predictably argued with the false dichotomy of our behavior being either determined or entirely random. Other contemporary determinist scholars cited were Roth and Prinz. In his (as I see it, compatibilist) reply, Lenzen emphasized that free will does not depend on the question of whether the world is deterministic. He also defined free will as something only adult humans have, and as something that requires empathy and theory of mind. In his view, animals do not possess free will as they do not reflect on their actions; hence, animals cannot be held responsible. Like other scholars, he listed three criteria for an action to be 'free': the person willed the action, the will is the cause of the action, and the person could have acted otherwise.

Lenzen went on to disavow dualism: "there are no immaterial substances". This implies that the soul or the mind, as a complex mental/psychological human property, is intimately and necessarily coupled to a healthy, material brain. It also implies that "mental causation" does not mean that an immaterial mind interacts with a material brain. Mental causation can only be thought of as an idea or thought being a neuronal activity which, in principle or in actuality, can move muscles.

Towards the end, Lenzen picked up the different variants of possibilities from his introduction to apply them to the different variants of alternative actions of an individual. At the end he recounted the story of Frankfurt‘s evil neurosurgeon as a “weird” example he didn’t find very useful.

Patrick Becker: Naturalizing the mind?

The final speaker for the day was a theologian, and in my prejudice I expected pretty confused magical thinking. I had no idea, when he started, how right I would be. Like some previous speakers, Becker also cited a lot of scholars (obviously a common method in the humanities) such as Prinz, Metzinger, or Pauen. Pauen in particular served for the introduction of the terms autonomy and agency as necessary conditions for free will. In this context, the false dichotomy of either chance or necessity being the only possible determinants of behavior reared its ugly head again. Becker went on to discuss Metzinger's "Ego-Tunnel" and the concept of self as a construct of our brain, citing experiments such as the "rubber hand illusion". It wasn't clear to me what this example was actually meant to say. At the end of all this, Becker presented a table in which he juxtaposed a whole host of terms under 'naturalization' on one side and 'common thought' on the other. The whole table looked like an arbitrary collection of false dichotomies to me, and I again didn't understand what the point of it was. He then picked ethical behavior as an example of how naturalization would lead to an abandonment of ethics. Here, again, the talk was full of false dichotomies such as: our ethics are not rational because some basic, likely evolved moral sentiments exist. As if it were impossible to combine the two. I am not sure how that would answer the question of his title. After ethics, he claimed that we would have to part with love and creativity as well if we naturalized the mind. None of what he talked about appeared even remotely coherent to me, nor did I understand how he came up with so many arbitrary juxtapositions of seemingly randomly collected terms and concepts. Similar to creationists, he posited that our empirically derived world-view is just a belief system – he even used the German word 'Glaube', which can denote both faith and belief. As if all of this weren't bad enough, at the very end, as a sort of conclusion or finale to this incoherent rambling, he explicitly juxtaposed (again!) the natural sciences and religion as equivalent, yet complementary, descriptions of the world.

Posted on November 11, 2015 at 17:35 6 Comments

Chance in animate nature, day 1

In: science • Tags: causality, chance, interdisciplinary, symposium

Ulrich Herkenrath, a mathematician working on stochasticity, convened a tiny symposium of only about a dozen participants discussing the role of chance in living beings. Participants included mathematicians, philosophers and neurobiologists.

Herkenrath: “Man as a source of randomness”

Herkenrath kicked off the symposium with his own presentation on “Man as a source of randomness”. He explained some principal insights on stochasticity and determinism as well as some boundary conditions for empirical studies on stochastic events, emphasizing that deterministic chaos and stochasticity can be extremely difficult to empirically distinguish.

In a short excursion, he referred to Nikolaus Cusanus, who found that no two subsequent situations can ever be exactly identical, our knowledge thus being essentially conjecture. Apparently, Cusanus was already proposing the falsification of hypotheses as a means of approaching 'truth'. Not surprisingly, he immediately referred to Popper with regard to the modern scientific method. Equally expectedly, when he started talking about kinds and sources of chance, he talked about quantum mechanics.

Moving from inanimate to living nature, he proposed amplifications of quantum chance to the macroscopic level as sources of objective randomness in the mesocosm, always emphasizing the difficulties in distinguishing between such events and events that only seem random due to our limited knowledge. Contrasting the two hypotheses of a deterministic world and one in which objective randomness exists, he mentioned the illusory nature of our subjective impression of freedom of choice. He never got into the problem that quantum randomness, if merely amplified, leaves much to be desired in terms of decision-making. Essentially, he seemed to be arguing that a deterministic world would be a sad place in which he would not want to live, and so he rejected a deterministic world. I've never found this all too common argument very convincing.

Notably, Herkenrath mentioned that organisms are more than matter. I am not sure what to make of this. He defined autonomy as the ability to make decisions that are not determined by the environment. Herkenrath went on to describe classes of decisions, such as subconscious and conscious decisions; how brains make these different forms of decisions would be featured in other talks at the symposium. Herkenrath defined a third class of decisions as those that have come about by explicit (subconscious or conscious) randomization. A fourth class was proposed, in which a uniform distribution is consciously generated, e.g. by a human using a lottery.

Falkenburg: “Causality, Chance and Life”

The second speaker of the first day was Brigitte Falkenburg, author of “Mythos Determinismus” (book critique). She started out wondering how neural determinists understand evolution.

In Falkenburg's tour de force through the history of ideas about chance and necessity, we first learned that the concept of chance itself can be traced back to Leibniz, who described events that might have happened otherwise. Leibniz claimed in his metaphysics that objective chance does not exist, as the whole world is rational and determined. According to Leibniz, everything has a sufficient reason. In a very scholarly segue mentioning the dispute between Leibniz and Newton about who invented calculus, she moved on to the relationship between the laws of nature and chance. Kant extended Newton's mechanistic laws from the solar system to the entire universe (Kant-Laplace hypothesis). In his "Critique of Pure Reason", Kant later concluded that Leibniz's 'sufficient reasons' are better described as 'causes' and formulated the principle of causality as an 'a priori' of human thinking. This was the start of the demand for causal explanations in the empirical sciences: science never stops asking for causes. However, Kant's critique did not fully pervade subsequent thinking, leading instead to Laplace's determinism. Laplace was convinced that our insufficient knowledge is the only reason for apparent (subjective) randomness, and that a more knowledgeable intelligence would be able to tell the future (cf. Laplace's demon).

With this backdrop of the history of ideas about causality, Falkenburg went on to discuss modern concepts of causality that move away from equating it with determinism. Both Hume and Kant defined causality as a mode of thinking, i.e., psychologically, rather than as a property of the universe. According to them, a causal relationship between events is subjective rather than objective. Mill's and Russell's positivism later did away with causality as "a relic of a bygone era" (Russell). One argument is that a cause can be seen as just a natural law plus the initial state of a system. Deterministic laws are invariant to a reversal of time – as such, causes can also lie in the future.

Today's philosophical variants of the concept of causality reflect this comparatively weak view, which is very different from the way we scientists would intuitively understand it. In a short discussion of the concept of causality in physics, she quickly went through classical mechanics, thermodynamics, quantum mechanics and special relativity, emphasizing that we still do not have a theory unifying these different approaches (she called it 'patchwork physics').

Towards the end, Falkenburg discussed the connection between causality and time, emphasizing that the arrow of time cannot have a deterministic basis and that all deterministic laws are time-reversible. As such, extreme determinism comes with a high metaphysical price: time becomes an illusion. According to Falkenburg, causality is hence not the same as determinism: a causal process is not necessarily deterministic; it can be composed of determinate and indeterminate components. Thus, if you do not think that time is an illusion and that all possible outcomes coexist, causality does not imply determinism, and chance can be a cause, as in, e.g., evolution.

At the very end she mentioned Greenfield and the limits of the natural sciences in reducing consciousness to materialism. I’m starting to get the impression that rejecting determinism all too often goes hand in hand with woo peddling. Why is that?

Posted on November 10, 2015 at 18:07 4 Comments

Predatory Priorities

In: science politics • Tags: journals, open access, predatory publishing

Over the last few months, there has been a lot of talk about so-called "predatory publishers", i.e., those corporations which publish journals, some or all of which purport to peer-review submitted articles but in fact publish articles for a fee without actual peer-review. The origin of the discussion can be traced to a list of such publishers hosted by librarian Jeffrey Beall. Irrespective of the already questionable practice of putting entire corporations on a black list (one bad journal and you're out?), I have three main positions in this discussion:

1. Beall's list used to be a useful tool for tracking a problem that nobody really had on their radar. Unfortunately, Jeffrey Beall himself recently opted to disqualify himself from reasoned debate, making the content of the list look more like a political hit list than a serious scholarly analysis. It appears that this approach might still be rescued if it were pursued by an organization more reliable than Beall.

2. There are many problems with publishers that eventually need to be solved. With respect to the pertinent topic, at least two main problem areas spring to mind.

2a. There is a group of publishers which publish the least reliable science. These publishers claim to perform a superior form of peer review (e.g. by denigrating other forms of peer-review as "peer-review light"), but in fact most of the submitted articles are never seen by peers (only by the professional editors of these journals). For the minority of articles that are indeed peer-reviewed, the acceptance rate is about 40%. Sometimes this practice keeps other scientists unnecessarily busy, such as in replicability projects or #arseniclife. Sometimes this practice has deleterious effects on society, such as the recent LaCour or Stapel cases. Sometimes this practice leads indirectly to human death, such as in irreproducible cancer research. Sometimes this practice leads directly to human death, such as in the MMR/autism case.
These publishers charge the taxpayer on average US$5000 per article and try to use paywalls to prevent the taxpayer from checking the article for potential errors.

2b. There is a group of publishers which similarly claim to perform peer-review but in fact do not perform any peer-review at all. It seems as if they aren't even performing much editorial review. The acceptance rate in these journals is commonly a little more than twice as high as in the journals from 2a, i.e. ~100%. Other than the (likely very few) duped authors, to my knowledge there are no other harmed parties, but I may have missed them.
These publishers charge the taxpayer on average ~US$300 per article and do allow the taxpayer to check the articles for potential errors.

3. Clearly, both 2a and 2b need to be resolved; there can be no debate about that. Given the number and magnitude of issues with regard to infrastructure reform in general and publishing reform in particular, it is prudent to prioritize the problems. Given the larger harm the publishers in 2a inflict on society at large as well as on the scientific community, I would suggest prioritizing 2a over 2b. In fact, looking back over what little we have accomplished over the past 10 years of infrastructure reform, it doesn't appear we have many resources left to waste on 2b at this particular time. Moreover, if focusing on 2a were to lead to the demise of the journal container, as so many of us hope, 2b would be solved without any further effort.

Posted on October 23, 2015 at 16:21 37 Comments

So many symptoms, only one disease: a public good in private hands

In: science politics • Tags: collective action, journal rank, publishing

Science has infected itself (voluntarily!) with a life-threatening parasite. It has  given away its crown jewels, the scientific knowledge contained in the scholarly archives, to entities with orthogonal interests: corporate publishers whose fiduciary duty is not knowledge dissemination or scholarly communication, but profit maximization. After a 350-year incubation time, the parasite has taken over the communication centers and drained them of their energy, leading to a number of different symptoms. Symptoms for which scientists and activists have come up with sometimes quite bizarre treatments:

  • In the recent #WikiGate, it was questioned whether the open encyclopedia Wikipedia should link to ("advertise") paywalled scientific knowledge at academic publishers such as Elsevier. One argument goes that if Wikipedia articles lack paywalled content and explicitly mention this, pressure on publishers to open the scholarly archives would increase. To solve this issue, open access advocates are now asking Wikipedia editors, who recently received free access to Elsevier's archives, to assist academic publishers in keeping the paywalled content locked away from the public by not including it in Wikipedia.
  • The Hague Declaration on ContentMining asks for “legal clarity” with regards to science being done on scientific content: access and re-use of scholarly material via software-based research methods is restricted and heavily regulated by academic publishers, leveraging their extensive copyrights over the archives. The Liber open access initiative is now lobbying EU politicians for a “research exception” in international copyright laws to allow unrestricted ContentMining.
  • In recent decades, the number of researchers has been growing such that competition for publications in the few top-ranked journals has reached epic proportions. As a consequence, the amount of work (measured by figure panels or by numbers of authors per article) going into each individual paper has skyrocketed. This entails that the pace of dissemination for each project has been slowing down, not for any technical or scientific reason, but merely because of the career decisions of scientists. To counteract this trend, it has been suggested to follow the example of physicists and have scientists do the work twice: once to publish their results quickly in a readily accessible repository for scholarly communication, and once more, later, to eventually lock the research behind a paywall in a little-read scholarly top-journal for career advancement.
  • These coveted top-rank journals also publish the least reliable science. However, it's precisely the rare slots in these journals which eventually help a scientist secure a position as a PI (that's the whole idea behind all the extra work in the previous example). This entails that, for the last few decades, science has preferentially employed the scientists who produce the least reliable science. Perhaps not too surprisingly, we are now faced with a reproducibility crisis in science, with a concomitant exponential rise in retractions. Perhaps equally unsurprisingly, scientists reflexively sprang into action by starting research projects to first understand the size and scope of this symptom before treating it. So now there exist several reproducibility initiatives in various fields, in which scientists dedicate time, effort and research funds to find out whether immediate action is necessary, or whether corporate publishers can drain the public teat a little longer.
  • Already long before the magnitude of the disease and the number and spread of symptoms had become public knowledge, scientists had come up with two treatments for the symptom of lacking access to scientific knowledge: green and gold open access. Similar to the treatment of slowed-down scientific reporting, green open access entails increasing researchers' overhead by adding scholarly communication as a task on top of career advancement. As it is quite obvious which of the two tasks a scientist will have to choose when time is limited, green proponents are asking politicians and funders to mandate deposition in green repositories. The other option, the golden road to open access, has now been hijacked by publishers as a way to cut paywall costs from their budget while maintaining per-article revenue at similar levels, with the potential to double their already obscene profit margins of around 40%. This model of open access is thus one of the few ways set to make everything worse than it already is. Coincidentally and much to everybody's chagrin, these two parallel attempts have had the peculiar unintended consequence of splintering the reform movement and causing seemingly endless infighting. Consequently, the last decade has seen a pace of reform that makes plate tectonics look hurried.

I'll leave it at these five randomly chosen examples; there are probably many more. While I understand and share the good intentions of all involved, and applaud and support their effort, dedication, patience and passion, I can't help but feel utterly depressed and frustrated by how little we have accomplished. Not counting the endless stream of meetings, presentations and workshops where always the same questions and ideas are being rehashed ad nauseam, our solutions essentially encompass three components:

  1. asking politicians, funders and lately even Wikipedia editors to help us clean up the mess we ourselves have caused to begin with
  2. wasting time with unnecessary extra paperwork
  3. wasting time and money with unnecessary extra research

What is it that keeps us from being 'radical' in the best sense of the word? The Latin word 'radix' means 'root': we have to tackle the common root of all the problems, and that is the fact that knowledge is a public good that belongs to the public, not to for-profit corporations. Archiving this knowledge and making it accessible has become so cheap that publishers are not merely unnecessary; on top of the pernicious symptoms described above, they also inflate the costs from what would currently amount to approx. US$200m world-wide per year to a whopping US$10b in annual subscription fees.

I'm not the only one, nor even the first, to propose taking back the public good from the corporations, as well as the US$10b we spend annually to keep it locked away from the public. If we did that, we would only have to spend a tiny fraction (about 2%) of the annual costs we just saved to give the public good back to the public. The remaining US$9.8b would be a formidable annual budget to ensure we hire the scientists with the most reliable results.
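To put these back-of-the-envelope figures in one place, here is a minimal sketch using only the numbers quoted above (the exact values are, of course, rough estimates):

```python
# Back-of-the-envelope figures as quoted in the post (all estimates, in US$)
subscriptions = 10e9      # current world-wide annual subscription spending
infrastructure = 200e6    # estimated annual cost of a modern scholarly infrastructure

fraction = infrastructure / subscriptions   # 0.02, i.e. about 2% of current spending
remaining = subscriptions - infrastructure  # 9.8e9, i.e. US$9.8b left over each year

print(f"infrastructure share: {fraction:.0%}, remaining budget: US${remaining/1e9:.1f}b")
```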

This plan entails two initial actions: one is to cut subscriptions to regain access to the funds required to implement a modern scholarly infrastructure. The other is to use the existing mechanisms (e.g. LOCKSS) to ensure the back-archives remain accessible for us indefinitely. As many have realized, this is a collective action problem. If properly organized, this will bring the back-archives back into our control and provide us with sufficient leverage and funds to negotiate the terms at which they can be made publicly accessible. Subsequently, using the remaining subscription funds, the scholarly infrastructure will take care of all our scholarly communication needs: we have all the technology, it just needs to be implemented.  After a short transition period, at least in the sciences, publications in top-ranked journals (to which then only individuals subscribe, if any) will be about as irrelevant for promotion and funding as monographs are today.

This plan, if enacted, would save a lot of money, lives, time and effort and cure publicly funded science of a disease that threatens its very existence. I fear continued treatment of the symptoms will lead to the death of the patient. But which steps are required to make this treatment a reality? How can we orchestrate a significant nucleus of institutions to instantiate massive subscription cuts? How can we solve the collective action problem? These are the questions, to which I do not have any good answers.

Posted on September 17, 2015 at 16:41 57 Comments

Evidence-resistant science leaders?

In: science politics • Tags: data, evidence, policy, politicians

Last week, I spent two days at a symposium entitled "Governance, Performance & Leadership of Research and Public Organizations". The meeting gathered professionals from all walks of science and research: economists, psychologists, biologists, epidemiologists, engineers and jurists, as well as politicians, university presidents and other leaders of the most respected research organizations in Germany. It was organized by Isabell Welpe, an economist specializing, broadly speaking, in incentive systems. She managed to bring some major figures to this meeting, not only from Germany, but notably also John Ioannidis from the USA and Margit Osterloh from Switzerland. The German participants included former DFG president and now Leibniz president Matthias Kleiner (the DFG being the largest funder in Germany and the Leibniz Association consisting of 89 non-university federal research institutes), the president of the German Council for Science and the Humanities, Manfred Prenzel, the Secretary General of the Max-Planck Society, Ludwig Kronthaler, and the president of Munich's Technical University, Wolfgang Herrmann, to mention only some of them. Essentially, all major research organizations in Germany were represented by at least one of their leaders, supplemented with expertise from abroad.

All of these people shape the way science will be done in the future either at their universities and institutions, or in Germany or around the world. They are decision-makers with the power to control the work and job situation for tens of thousands of current and future scientists. Hence, they ought to be the most problem-solving oriented, evidence-based individuals we can find. I was shocked to learn that this was an embarrassingly naive assumption.

In my defense, I was not alone in my incredulity, but maybe that only goes to show how insulated scientists are from political realities. As usual, there were of course gradations between the individuals, but at the same time there seemed to be a discernible grouping into what could be termed the evidence-based camp (scientists and other professionals) and the ideology-based camp (the institutional leaders). With one exception, I won't attribute any of the instances I will recount to any particular individual, as we had better focus on solutions to the more general prohibitive attitude rather than on a debate about the individuals' qualifications.

On the scientific side, the meeting brought together a number of thought leaders detailing how different components of the scientific community perform. For instance, we learned that peer-review is quite capable of weeding out obviously weak research proposals, but that in establishing a ranking order among the non-flawed proposals it is rarely better than chance. We learned that gender and institution biases are rampant among reviewers and that many rankings are devoid of any empirical basis. Essentially, neither peer-review nor metrics perform at the level we expect of them. It became clear that we need to find solutions to the lock-in effect, the Matthew effect and the performance paradox, and to some extent it also became clear what some potential solutions may be. Reassuringly, different people from different fields using data from different disciplines arrived at quite similar conclusions. The emerging picture was clear: we have quite a good empirical grasp of which approaches are working and, in particular, which are not. Importantly, as a community we have plenty of reasonable and realistic ideas of how to remedy the non-working components. However, whenever a particular piece of evidence was presented, one of the science leaders got up and proclaimed "In my experience, this does not happen", "I cannot see this bias", or "I have overseen a good 600 grant reviews in my career and these reviews worked just fine". Looking back, an all too common pattern at this meeting, for me, was that of scientists presenting data and evidence, only to be countered by a prominent ex-scientist with an evidence-free "I disagree". It appeared quite obvious that we do not suffer from a lack of insight, but rather from a lack of implementation.

Perhaps the most egregious and hence illustrative example was the behavior of the longest serving university president in Germany, Wolfgang Herrmann, during the final panel discussion (see #gplr on Twitter for pictures and live comments). This will be the one exception to the rule of not mentioning individuals. Herrmann was the first to talk and literally his first sentence was to emphasize that the most important objective for a university must be to get rid of the mediocre, incompetent and ignorant staff. He obviously did not include himself in that group, but made clear that he knew how to tell who should be classified as such. When asked which advice he would give university presidents, he replied by saying that they ought to rule autocratically, ideally by using ‘participation’ as a means of appeasing the underlings (he mentioned students and faculty), as most faculty were unfit for democracy anyway. Throughout the panel, Herrmann continually commended the German Excellence Initiative, in particular for a ‘raised international visibility’ (whatever that means), or ‘breaking up old structures’ (no idea). When I confronted him with the cold hard data that the only aspects of universities that showed any advantage from the initiative were their administrations and then asked why that didn’t show that the initiative had, in fact, failed spectacularly, his reply was: “I don’t think I need to answer that question”. In essence, this reply in particular and the repeated evidence-resistant attitude in general dismissed the entire symposium as a futile exercise of the ‘reality-based community‘, while the big leaders were out there creating the reality for the underlings to evaluate, study and measure.

Such behaviors are not surprising when we hear them from politicians, but from (ex-)scientists? At the first incident or two, I still thought I had misheard or misunderstood – after all, there was little discernible reaction from the audience. Later I found out that I was not the only one who was shocked. After the conference, some attendees discussed several questions: Can years of leading a scientific institution really make you so completely impervious to evidence? Do such positions of power necessarily wipe out all scientific thinking, or wasn't all that much of it there to begin with? Do we select for evidence-resistant science leaders, or is being/becoming evidence-resistant in some way a prerequisite for striving for such a position? What if these ex-scientists have always had this nonchalant attitude towards data? Should we scrutinize their old work more closely for questionable research practices?

While for me personally such behavior would clearly and unambiguously disqualify the individual from any leading position, relieving these individuals of their responsibilities is probably not the best solution. Judging from the meeting last week, there are simply too many of them. Instead, it emerged from an informal discussion after the end of the symposium that a more promising approach may be a different meeting format: one where the leaders aren't propped up for target practice, but included in a cooperative format, where admitting that some things are in need of improvement does not lead to any loss of face. Clearly, the evidence and the data need to inform policy. If decision-makers keep ignoring the outcomes of empirical research on the way we do science, we might as well drop all efforts to collect the evidence.

Apparently, this was the first such conference on a national level in Germany. If we can’t find a way for the data presented there to have a tangible consequence on science policy, it may well have been the last. Is this a phenomenon people observe in other countries as well, and if so, how are they trying to solve it?

Posted on July 20, 2015 at 21:50 17 Comments

Whither now, Open Access?

In: science politics • Tags: infrastructure, open access

The recently discussed scenario of universal gold open access, brought about by simply switching the subscription funds at libraries to paying author processing charges instead, seemed like a ghoulish nightmare – one of the few scenarios worse than the desolate state we call the status quo today. The latest news, however, seems to indicate that the corporate publishers are planning to shift the situation towards a reality that is even worse than that nightmare. Not only are publishers, as predicted, increasing their profits by plundering the public coffers to an even larger extent (which would be bad enough by itself), they are now also attempting to take over the institutional repositories that have grown over the last decade. If successful, this would undo much of the emancipation from the publisher oligopoly that we have achieved. This move can only be intended to ensure that our crown jewels stay with the publishers, rather than where they belong, in our institutions. Apparently, some libraries are all too eager to get rid of their primary raison d'être: to archive and make accessible the works of their faculty.

Publisher behavior over the last decade has been nothing short of a huge disappointment at best and an outright insult at worst. I cannot fathom a single reason why we should let corporate publishers continue to parasitize our labor. If even the supposedly good guys can be seen as not acting in our best interest, what are we left with? How can we ever entrust our most valuable assets to organizations that have proven time and again that they will abuse our trust for profit? Why is there still a single scientist left with the opinion that "the current publishing model works well", let alone a plurality?

These recent developments re-emphasize that none of our current approaches to solving the access problem (gold, green or hybrid) is sustainable by itself. It is in our own best interest (and hence that of the taxpayers who fund us) to put publishers out of business for good. If we want our institutions, and hence ourselves, to regain and stay in control of our own works – be that the code we develop, the data we collect or the text summaries we write – then we need a new approach, and that is to cut subscriptions on a massive scale in order to free the funds to implement a modern scholarly infrastructure. This infrastructure will not only solve the access problem that most people care so much about, but simultaneously ameliorate the counter-productive incentives currently in place and help address the replication crisis.

I do not think it is reasonable to try to solve the access problem at the expense of all the other, numerous and potentially more pernicious shortcomings of our current infrastructure, even though there is a lot of momentum on the open access front these days. Why not take this momentum and use it to rationally transform the way we do science, using all modern technology at our disposal, with the added benefit of also solving the access problem along the way? The result of blindly, frantically doctoring one single symptom while ignoring the disease that is still festering is all too likely the death of the patient.

tl;dr: Cut all subscriptions now!

Posted on June 23, 2015 at 12:52 17 Comments

What happens to publishers that don’t maximize their profit?

In: science politics • Tags: open access, publishing

Lately, there has been some public dreaming going on about how one could just switch to open access publishing by converting subscription funds to author processing charges (APCs) and we’d have universal open access and the whole world would rejoice. Given that current average APCs have been found to be somewhat lower than current subscription costs (approx. US$3k vs. US$5k) per article, such a switch, at first, would have not one but two benefits: reduced overall publishing costs to the taxpayer/institution and full access to all scholarly literature for everyone. Who could possibly complain about that? Clearly, such a switch would be a win-win situation at least in the short term.

However, what would happen in the mid- to long-term? As nobody can foresee the future with any degree of accuracy, one way of projecting future developments is to look at past developments. The intent of the switch is to use library funds to cover APC charges for all published articles. This is a situation we have already had before. This is what happens when you allow publishers to negotiate prices with our librarians – hyperinflation:

Given this publisher track record, I think it is quite reasonable to remain skeptical that, in the hypothetical future scenario of librarians negotiating APCs with publishers, the publisher-librarian partnership will not once again end up lopsided in the publishers' favor.

I’m not an economist, so I’d be delighted if there were one among the three people who read this blog (hi mom!), who might be able to answer the questions I have.

The major players in academic publishing are almost exclusively major international corporations: Elsevier, Springer, Wiley, Taylor and Francis, etc. As I understand it, it is their fiduciary duty to maximize the value for their shareholders, i.e., profit? So while the currently paid APCs per article (about US$3k) seem comparatively cheap (i.e., compared to the current US$5k for each subscription article), publishers would not be offering them if that entailed a drop in their profit margins, which currently are on the order of 40%. As speculated before, a large component of current publisher revenue (of about US$10bn annually) appears to be spent on making sure nobody actually reads the articles we write (i.e., paywalls). This probably explains why the legacy subscription publishers today, despite receiving all their raw material for free and getting their quality control (peer-review) done for free as well, still only post profit margins under 50%. Given that many non-profit open access organizations post actual publishing costs of under US$100, it is hard to imagine what other than paywall infrastructure could cost that much, as the main difference between these journals is the paywalls and not much else. By the way, precisely because the actual publishing process is so cheap, the majority of all open access journals do not even bother to charge any APCs at all. There is something beyond profits that makes subscription access so expensive, and any OA scenario would make these costs disappear.

So let's take the quoted US$3k as a ballpark average for future APCs on a world-wide scale. That would mean institutional costs would drop from the current US$10bn to US$6bn annually, world-wide. Let's also assume a generous US$300 of actual publishing costs per article, which is considerably more than current costs at arXiv (US$9) or SciELO (US$70-200), or than current median APCs (US$0). If this switch happened unopposed, publishers would have increased their profit margin from ~40% to around 90% and saved the taxpayer a pretty penny. So publishers, scientists and the public should be happy, shouldn't they?
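A minimal sketch of this back-of-the-envelope arithmetic, using only the figures quoted in this post (average APC, per-article publishing cost and current subscription revenue are all rough estimates, not measured values):

```python
# Rough figures as quoted in the post (all in US$, all estimates)
subscription_revenue_per_article = 5_000
current_margin = 0.40                      # legacy publishers' profit margin
apc = 3_000                                # assumed average future APC
publishing_cost_per_article = 300          # generous estimate of actual costs

articles_per_year = 10e9 / subscription_revenue_per_article   # ~2 million articles

new_margin = (apc - publishing_cost_per_article) / apc        # -> 0.9, i.e. ~90%
new_total_revenue = articles_per_year * apc                   # -> 6e9, down from 10e9

print(f"articles/year: {articles_per_year:,.0f}")
print(f"margin: {current_margin:.0%} -> {new_margin:.0%}")
print(f"revenue: US$10bn -> US${new_total_revenue/1e9:.0f}bn")
```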

Taking the perspective of a publisher, this scenario also entails that the publishers have forgone around US$4bn in potential revenue and the profits that come with it. After all, today's figures show that the market is worth US$10bn even when nobody but a few libraries have access to the scholarly literature. In the future scenario, everyone has access. Undoubtedly, this will be hailed as great progress by everyone. After all, this is being used as the major reason for performing this switch right now. Obviously, an increase in profit margins from 40% to 90% is seen as a small price to pay for open access, isn't it? Wouldn't it be the fiduciary duty of corporate publishers to regain the lost US$4bn? After all, why should they receive less money for a better service? Obviously, neither their customers (we scientists and our librarians) nor the public minded an increase in profit margins from 40% to 90%. Why should they oppose an increase from 90% to 95% or to 99.9%? After all, if a lesser service (subscription) was able to extract US$10bn, shouldn't a better service (open access) be able to extract US$12bn or 15bn from the public purse?

One might argue that this forecast is absurd: the journals compete with each other for authors! This argument forgets that we are not free to choose where we publish: only publications in high-ranking journals will secure your job in science. These journals are the most selective of all journals; in extreme cases, they publish only 8% of all submitted articles. This is an expensive practice, as even the rejected articles generate some costs. These journals are on record that they would have to charge around US$50,000 per article in APCs to maintain current profits. It is hence not surprising that, also among open access journals, APCs correlate with their standing in the rankings and hence their selectivity:

It is reasonable to assume that authors in the future scenario will do the same as they are doing now: compete not for the most non-selective journals (i.e., the cheapest), but for the most selective ones (i.e., the most expensive). Why should that change, only because now everybody is free to read the articles? The new publishing model would even exacerbate this pernicious tendency rather than mitigate it. After all, it is already (wrongly) perceived that the selective journals publish the best science. If APCs become predictors of selectivity because selectivity is expensive, nobody will want to publish in a journal with low or no APCs, as this will carry the stigma of not being able to get published in the expensive/selective journals.

This, to me as a non-economist, seems to mirror the dynamics of any other market: the Tata is no competition for the Rolls-Royce, not even the potential competition from Lamborghini is bringing down the prices of a Ferrari to those of a Tata, nor is Moët et Chandon bringing down the prices of Dom Pérignon. On the contrary, in a world where only Rolls-Royce and Dom Pérignon count, publications in journals at the Tata or even the Moët et Chandon level will simply be ignored. Moreover, if libraries keep paying the APCs, the ones who so desperately want the Rolls-Royce don't even have to pay the bill. Doesn't this mean that any publisher who does not shoot for at least US$5k in their average APCs (better more) fails to fulfill their fiduciary duty in not one but two ways: not only will they lose out on potential profit due to their low APCs, they will also lose market share and prestige? Thus, in this new scenario, if anything, the incentives for price hikes across the board are even higher than they are today. Isn't this scenario a perfect storm for runaway hyperinflation? Do unregulated markets without a luxury segment even exist?

One might then fall back on the argument that at least Fiat will compete with Peugeot on APCs, but that forgets that a physicist cannot publish their work in a biology journal. Then one might argue that mega-journals publish all research, but given the constant consolidation processes in unregulated markets (which are alive and well also in the publishing market, as was just reported), there soon won't be many of these around any more, such that they are, again, free to increase prices. No matter how I try to turn the arguments around, I only see incentives for price hikes that will render the new system just as unsustainable as the current one, only worse: failure to pay leads to a failure to make your discovery public, and no #icanhazpdf can mitigate that. Again, as before, this kind of scenario can only be worse than what we have now.

tl;dr: The incentives for price hikes in a universal gold open access economy will be even stronger than they are today.

Posted on June 19, 2015 at 14:19 38 Comments

Are more retractions due to more scrutiny?

In: science politics • Tags: fraud, impact factor, journal rank, methodology, retractions

In the last "Science Weekly" podcast from the Guardian, the topic was retractions. At about 20:29 into the episode, Hannah Devlin asked whether the reason 'top' journals retract more articles might be increased scrutiny there.

The underlying assumption is very reasonable, as many more eyes see each paper in such journals and the motivation to shoot down such high-profile papers might also be higher. However, the question has actually been addressed in the scientific literature, and the data don't seem to support this assumption. For one, this figure shows that there are a lot of retractions from lower-ranking journals, but that the journals which retract a lot are few and far between. In fact, there are many more retractions in low-ranking journals than in high-ranking ones, while among the high-ranking journals a much larger proportion retracts many papers. However, this analysis only shows that there are many more retractions in lower journals than in higher journals in absolute terms. Hence, these data are not conclusive, but they do suggest that scrutiny is not really all that much higher for the 'top' journals than anywhere else.

Another reason why scrutiny might be assumed to be higher in 'top' journals is that readership is higher, leading to more potential for error detection. However, the same reasoning can be applied to citations, not only to retractions. Moreover, citing a 'top' paper is not only easier than forcing a retraction, it also benefits your own research by elevating the perceived importance of your field. Thus, if readership had any such influence, one would expect journal rank to correlate better with citations than with retractions. The opposite is the case: the coefficient of determination for citations and journal rank currently lies around 0.2, while it lies at just under 0.8 for retractions and journal rank (Fig. 3 and Fig. 1D, respectively, here). So while there may be a small effect of scrutiny/motivation, the evidence seems to suggest that it is a relatively minor effect, if there is one at all.
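For readers who want to check such numbers against their own data, here is a minimal sketch of how a coefficient of determination (R²) between journal rank and a per-journal outcome can be computed; the numbers below are made-up placeholders, not the values behind the cited figures:

```python
import numpy as np

# Hypothetical per-journal data (placeholders, not the published values)
impact_factor = np.array([2.1, 4.5, 9.8, 13.2, 30.0, 41.5])    # journal rank proxy
retraction_index = np.array([0.2, 0.4, 1.1, 1.5, 3.0, 4.2])    # retractions per 10k papers

# The squared Pearson correlation gives R^2 for a simple linear relationship
r = np.corrcoef(impact_factor, retraction_index)[0, 1]
print(f"R^2 = {r**2:.2f}")   # the cited data give ~0.8 for retractions, ~0.2 for citations
```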

Conversely, there is quite solid evidence that the methodology in ‘top’ journals is not any better than in other journals, when analyzing non-retracted articles. In fact, there are studies showing that the methodology is actually worse in ‘top’ journals, while we have not found a single study suggesting the methodology gets better with journal rank. Our article reviews these studies. Importantly, these studies all concern non-retracted papers, i.e., the other 99.95% of the literature.

In conclusion, the evidence suggests scrutiny is likely a negligible factor in the correlation of journal rank and retractions, while increased incidence of fraud and lower methodological standards can be shown.

I know that Ivan Oransky, who was a guest on the show, is aware of these data, so it may have been a bit unfortunate that Philip Campbell (editor-in-chief at Nature Magazine) got to answer this question before Ivan had a chance to lay these data out. In fact, Nature is also aware of these data and has twice refused to publish them. The first time was when we submitted our manuscript, with the statement that Nature had already published articles stating that Nature publishes the worst science. The second time was when Cori Lok interviewed Jon Tennant and he told her about the data, but Cori failed to include this part of the interview. There is thus a record of Nature, very understandably, avoiding any admission of its failure to select for solid science. Philip Campbell's answer to the question in the podcast may have been at least the third time.

While Philip Campbell did admit they don't do enough fraud detection (it is too rare), the issue of reliability in science goes far beyond fraud, so successfully derailing the question in this direction served his company quite well. Clearly, he's a clever guy and did not come unprepared.

Finally, one may ask: why do the ‘top’ journals publish unreliable science?

Probably the most important factor is that they attract "too good to be true" results, but only apply "peer-review light": rejection rates drop dramatically from 92% to a mere 60% once your manuscript makes it past the editors – that's a 5-fold increase in your publication chances (Noah Gray and Henry Gee, pers. comm.). Why is that so? First, the reviewers know the editor wants to publish this paper. Second, they have an automatic conflict of interest: a Nature paper in their field increases the visibility of their field, and they may even be cited in the paper – or plan to cite it in their upcoming grant application.
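The 5-fold figure follows directly from the quoted rejection rates; a trivial check (the percentages are the personal-communication figures quoted above, not official statistics):

```python
# Rejection rates as quoted above (pers. comm. figures, not official statistics)
overall_rejection = 0.92          # all submissions
post_editor_rejection = 0.60      # submissions that made it past the editors

acceptance_before = 1 - overall_rejection       # 0.08
acceptance_after = 1 - post_editor_rejection    # 0.40

print(acceptance_after / acceptance_before)     # 5.0 -> a 5-fold increase in chances
```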

On average, this entire model is just a recipe for disaster, and more policing won't fix it. By using it, we have been setting ourselves up for the exponential rise in retractions seen in Fig. 1a of our paper.

So, in the probably not too unlikely case that the topic of unreliable science should come up again, anyone can now cite the actual, peer-reviewed data we have at hand, such that editors-in-chief may have a harder time derailing the discussion and obfuscating the issues in the future.

tl;dr: The data suggest a combination of three factors leading to more retractions in 'top' journals: 1. Worse methodological quality; 2. Higher incidence of fraud; 3. Peer-review light. One would intuitively expect increased readership/scrutiny to play some role, but there is currently no evidence for it and some circumstantial evidence against it.

Posted on June 18, 2015 at 14:38 6 Comments

What goes into making a scientific manuscript public?

In: science politics • Tags: publishing

Currently, our libraries are paying about US$5000 per peer-reviewed subscription article. What is more difficult to find out is where all that money goes: which steps are required to make an accepted manuscript public? Because of their high-throughput (about 1200 journals with a total of about half a million published articles), low-cost, open access publishing model, I contacted SciELO and asked them how they achieve such low costs – figures that range below US$100 per article, a fraction of what commercial publishers charge. Abel Packer, one of the founders of SciELO, was so kind as to answer all my questions.

SciELO receives most of its articles from the participating journals in JATS XML and PDF. It takes that version and publishes it online, makes sure it is indexed in the relevant places (PubMed, Web of Science, etc.) and archives it for long-term accessibility. These services cost about US$67 per article (costs which are covered by the participating governments, not the authors). Other digital services such as a DOI, plagiarism checks and altmetrics incur another US$4. So bare-necessities publishing costs at SciELO come to just over US$70 per article.

However, this comparison is not quite fair, as only a few publishers receive their manuscripts in XML. So for those journals that do not have an authoring environment such as PeerJ's "Paper Now", in which you can write your paper and submit it in the right format, there will be costs associated with editors who handle manuscript submissions and peer-review, as well as with generating all kinds of formats (XML, PDF, ePub, etc., including proofs going back and forth between authors and staff) from the submitted files (LaTeX, Word, etc.). At SciELO (and their participating journals), these services, if chosen by the authors, average around another US$130. Taken together, the complete package from, say, MS Word submission to published article can thus be had for a grand total of just over US$200. However, if/once authors use modern authoring systems, in which they write collaboratively on a manuscript that is formatted in, e.g., native XML, publication costs drop to below US$80. On the other hand, if SciELO authors opt for English language services, submission services, an enhanced PDF version, a social media campaign, and/or data management services – all offered by SciELO for a fee – a cozy all-inclusive package will cost them almost US$600, still a far cry from the US$5k commercial publishers charge for their subscription articles.
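A minimal sketch summarizing the cost tiers described above (the figures are the approximate per-article numbers quoted in this post, not an official SciELO price list):

```python
# Approximate per-article costs at SciELO as described above (US$, rough figures)
hosting_indexing_archiving = 67   # publish online, index, long-term archiving
doi_and_other_services = 4        # DOI, plagiarism checks, altmetrics, etc.
production_from_word = 130        # submission handling, peer-review logistics, format conversion

bare_necessities = hosting_indexing_archiving + doi_and_other_services   # ~US$71
word_to_published = bare_necessities + production_from_word              # ~US$200
all_inclusive = 600               # with language, submission, PDF, social media, data services

for label, cost in [("bare necessities", bare_necessities),
                    ("Word-to-published", word_to_published),
                    ("all-inclusive", all_inclusive)]:
    print(f"{label}: ~US${cost}")
```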

If even the most comfortable publishing option with all the bells and whistles can be had for just under US$600, why do current publishers succeed in persuading authors and institutions to pay author processing charges (APCs) averaging around €2000? There is an easy answer to that: currently, each subscription article generates about US$5k in revenue for the publisher. As such, publishers will strive to approach this figure with their APCs to fight a drop in their revenues. If that is the case, one might ask why the average figures are not closer to US$5k. One reason may indeed be competition from new publishers offering APCs dramatically below US$5k. However, I think there may also be another/additional reason: the numbers above appear to corroborate a conclusion from last July, that subscription paywalls may indeed incur a cost in the neighborhood of around US$3000 per article. Dropping these US$3k in paywall costs from per-article revenue targets of US$5k leads to approximately the average APC that these publishers were able to charge the institutions studied in the Schimmer et al. white paper. In such a scenario, publishers would keep their per-article profit of just under US$2k roughly constant.

This then means, of course, that the only thing the proposed switch from subscriptions to APCs would do is increase the profit margins of corporate publishers from currently just shy of 40% to about 90% – any publisher’s wet dream. As I’ve outlined before, this is probably the only way to make the abysmal status quo even worse, as it wouldn’t fix any of the other problems we have, besides access, and would exclude the scholarly poor from publishing in the venues that secure a job. Unregulated, universal gold open access has to be avoided by any means necessary.

Posted on June 11, 2015 at 14:34 13 Comments
May09

Is this supposed to be the best Elsevier can muster?

In: science politics • Tags: copyright, Elsevier, publishing

Until today, I was quite proud of myself for not caving in to SIWOTI syndrome like Mike Taylor did. And then I read his post and caved in as well.

What gets us so riled up is Elsevier's latest in a long list of demonstrations of what they think of the intellectual capacities of their customers. It's precisely because it is only one in a long row that I initially didn't feel like commenting. However, there were so many points in this article worth rebutting, and Mike only selected a few for his comment, that I felt compelled to pick some of my own for dissection.

This combination of perspectives has led me, I believe, to a deeper understanding of the importance and limitations of copyright law

Great! An expert on copyright law and a creative mind, Shirley a worthy opponent for a mere scientist who understands next to nothing of copyright. As we say in Germany: many foes, much honor (I know, don’t ask!).

The STM journal publishing sector is constantly adjusting to find the right balance between researcher needs and the journal business model, as refracted through copyright.

I think that's an accurate assessment, the right balance of course being to continuously expand copyright, rob scientist authors of any rights to their articles, and allow publishers to charge authors every time they use their own figures in class: the researcher needs to use their figures in teaching, and the journal business model needs to beat drug smuggling, arms trade and human trafficking in profit margins. Thus, the right balance from this perspective is to charge institutions for every additional use of the material they already paid for twice, absolutely. However, while maximizing profits may be the fiduciary duty of the corporate publisher, neither science nor society cares about the existence of corporations. On the contrary, openly parasitic corporations will be fought, so it's difficult to see how alluding to the parasitic business model of his employer (rather than, e.g., trying to hide it) is in the interest of the author. One probably has to have the creative mind of a poet and the honed analytic skills of a lawyer to see the advantage here.

Authors are looking for visibility and want to share their results quickly with their colleagues and others in their institutions or communities.

That's also correct: it's the reason there is a boycott of Elsevier and the open access movement exists. It's not clear why the author of this article is using an argument against copyright in particular, and against the entire status quo in academic publishing in general, in an article purporting to support copyright. Again, I'm probably neither creative enough nor well enough versed in law to understand the apparent own goals of this author.

Most journals have a long tradition of supporting pre-print posting and enabling “scholarly sharing” by authors.

I'm sure some journals have that tradition, but Elsevier's journals are not among them. On digital 'paper', of course, Elsevier supports pre-prints and 'green' archiving, but if that isn't just lip service, why pay two US lawmakers US$40,000 to make precisely this "scholarly sharing" (note the scare quotes!) illegal? Or is the author insinuating that the legal counsel of Elsevier had no role in drafting the Research Works Act (RWA)?

In fact, last week Elsevier released its own updated sharing policies

Wait a minute – Elsevier has released a set of policies which specify how scientists are allowed to talk about their research? How on earth is this supposed to convince anyone that copyright is good for science and scientists if a scientist first has to seek the approval of a commercial publisher before they start talking about their research? I went and read these policies; they essentially say: "break our copyright by tweeting your own article and we'll sue you". I guess I really lack the creativity and expertise to understand how this is in support of copyright.

I believe that copyright is fundamental to creativity and innovation because without it the mechanisms for funding and financial support for authors and publishing become overly dependent on societal largesse.

Given my lack of poetry and legal competence, I really have to think hard now. He's writing that we as scientist authors shouldn't be dependent on "societal largesse". Science is, for the most part, a public enterprise. This means my salary (without any copyright) is paid by "societal largesse". About 80% of Elsevier's subscription income comes from public institutions, so this author suckles 80% of his income from the teat of "societal largesse". So he's arguing that copyright helped prevent his own salary from going 100% societal? Or is he arguing that I should lose all my salary? If depending on "societal largesse" really is to be avoided, why doesn't he give 80% of his salary (which is probably more money than 100% of my salary) back to the taxpayer, perhaps by contributing to the open access funds of a public institution of his choice? Going after the salaries of your customers without displaying any willingness to give up your own dependence on "societal largesse" must be a strategy that requires a lot of creativity and legal competence, as from my unimaginative and incompetent perspective that strategy just backfired mightily.

The alternatives to a copyright-based market for published works and other creative works are based on near-medieval concepts of patronage, government subsidy,

"Societal largesse" and "government subsidy" are what loom without copyright? I thought the only thing that kept Elsevier alive was government subsidies enabled by societal largesse? Last I looked, open access publishing would cost something like US$100-200 per article if we implemented it right. Elsevier, on average, charges about US$5,000 per subscription article. So, on average, for each subscription article, Elsevier is receiving at least US$4,800 in government subsidies (which amounts to 96% of the total payment), solely to artificially keep this corporation alive. If the author were so keen on getting rid of government subsidies, why is he asking his customers to support a business practice that only exists because 96% of its income is government subsidy? Indeed, I'm neither intelligent nor competent enough to understand this defense of copyright. To me, this article is an attack on the entire status quo.
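Just to spell out that arithmetic (round numbers from this post, not audited figures; the variable names are mine):

```python
# How much of the average subscription price exceeds what publishing needs to cost (USD per article).
subscription_price = 5000   # average per-article subscription revenue
efficient_cost = 200        # upper end of what well-implemented open access publishing costs

subsidy = subscription_price - efficient_cost
print(f"excess payment per article: ~US${subsidy} ({subsidy / subscription_price:.0%} of the total)")  # ~US$4800, 96%
```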

I'm running out of time and, honestly, whatever comes next would be hard pressed to change the impression I got from reading this far. Clearly, Elsevier thinks that their scientist customers are know-nothing hobos with an insufficient number of neurons for a synapse. Either they are correct, as I for the life of me cannot find anything in support of copyright in this article, or their big shots suffer from a severe case of Dunning-Kruger syndrome.

Posted on May 9, 2015 at 13:21 20 Comments