bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


Whither now, Open Access?

In: science politics • Tags: infrastructure, open access

The recently discussed scenario of universal gold open access, brought about by simply switching the subscription funds at libraries to paying author processing charges instead, seemed like a ghoulish nightmare: one of the few scenarios worse than the desolate state we call the status quo today. The latest news, however, seems to indicate that the corporate publishers are planning to shift the situation towards a reality that is even worse than that nightmare. Not only are publishers, as predicted, increasing their profits by plundering the public coffers to an even larger extent (which would be bad enough by itself), they are now also attempting to take over the institutional repositories that have grown over the last decade. If successful, this would undo much of the emancipation from the publisher oligopoly we have wrought. This move can only be intended to ensure that our crown jewels stay with the publishers, rather than where they belong: in our institutions. Apparently, some libraries are all too eager to get rid of their primary raison d’être: to archive and make accessible the works of their faculty.

Publisher behavior over the last decade has been nothing short of a huge disappointment at best and an outright insult at worst. I cannot fathom a single reason why we should let corporate publishers continue to parasitize our labor. If even the supposedly good guys can be seen as not acting in our best interest, what are we left with? How can we ever entrust our most valuable assets to organizations that have proven time and again that they will abuse our trust for profit? Why is there even a single scientist left who holds the opinion that “the current publishing model works well”, let alone a plurality?

These recent developments re-emphasize that none of our current approaches to solving the access problem (gold, green or hybrid) is sustainable by itself. It is in our own best interest (and hence that of the taxpayers who fund us) to put publishers out of business for good. If we want our institutions, and hence ourselves, to regain and stay in control of our own works, be it the code we develop, the data we collect or the text summaries we write, then we need a new approach, and that is to cut subscriptions on a massive scale in order to free the funds to implement a modern scholarly infrastructure. This infrastructure would not only solve the access problem that most people care so much about, but simultaneously ameliorate the counter-productive incentives currently in place and help address the replication crisis.

I do not think it is reasonable to try to solve the access problem at the expense of all the other, numerous and potentially more pernicious shortcomings of our current infrastructure, even though there is a lot of momentum on the open access front these days. Why not take this momentum and use it to rationally transform the way we do science, using all the modern technology at our disposal, with the added benefit of also solving the access problem along the way? The all too likely result of blindly, frantically treating one single symptom while ignoring the disease that is still festering is the death of the patient.

tl;dr: Cut all subscriptions now!

Posted on June 23, 2015 at 12:52 17 Comments

What happens to publishers that don’t maximize their profit?

In: science politics • Tags: open access, publishing

Lately, there has been some public dreaming going on about how one could just switch to open access publishing by converting subscription funds to author processing charges (APCs), and we’d have universal open access and the whole world would rejoice. Given that current average APCs have been found to be somewhat lower than current subscription costs (approx. US$3k vs. US$5k per article), such a switch, at first, would have not one but two benefits: reduced overall publishing costs to the taxpayer/institution and full access to all scholarly literature for everyone. Who could possibly complain about that? Clearly, such a switch would be a win-win situation, at least in the short term.

However, what would happen in the mid- to long-term? As nobody can foresee the future with any degree of accuracy, one way of projecting future developments is to look at past developments. The intent of the switch is to use library funds to cover the APCs for all published articles. This is a situation we have already had before, and we know what happens when you allow publishers to negotiate prices with our librarians: hyperinflation.

Given this publisher track record, I think it is quite reasonable to remain skeptical: in the hypothetical future scenario of librarians negotiating APCs with publishers, the publisher-librarian partnership would in all likelihood once again turn out lopsided in the publishers’ favor.

I’m not an economist, so I’d be delighted if there were one among the three people who read this blog (hi mom!) who might be able to answer the questions I have.

The major players in academic publishing are almost exclusively major international corporations: Elsevier, Springer, Wiley, Taylor & Francis, etc. As I understand it, it is their fiduciary duty to maximize the value for their shareholders, i.e., profit. So while the currently paid APCs per article (about US$3k) seem comparatively cheap (i.e., compared to the current US$5k for each subscription article), publishers would not be offering them if that entailed a drop in their profit margins, which currently are on the order of 40%. As speculated before, a large component of current publisher revenue (of about US$10bn annually) appears to be spent on making sure nobody actually reads the articles we write (i.e., paywalls). This probably explains why the legacy subscription publishers today, despite receiving all their raw material for free and getting their quality control (peer-review) done for free as well, still only post profit margins under 50%. Given that many non-profit open access organizations report actual publishing costs of under US$100, and that the main difference between these journals and subscription journals is the paywall and not much else, it is hard to imagine what other than paywall infrastructure could cost that much. By the way, precisely because the actual publishing process is so cheap, the majority of all open access journals do not even bother to charge any APCs at all. Something beyond profits makes subscription access this expensive, and any OA scenario would make these costs disappear.

So let’s take the quoted US$3k as a ballpark average for future APCs on a world-wide scale. That would mean institutional costs would drop from the current US$10bn to US$6bn annually worldwide. Let’s also assume a generous US$300 of actual publishing costs per article, which is considerably more than current costs at arXiv (US$9) or SciELO (US$70-200), or current median APCs (US$0). If this switch were to happen unopposed, the publishers would have increased their profit margin from ~40% to around 90% and saved the taxpayer a pretty penny. So publishers, scientists and the public should be happy, shouldn’t they?
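To make this arithmetic explicit, here is a minimal back-of-the-envelope sketch; all numbers are the ballpark figures quoted above, assumptions rather than measured data:

```python
# Projected publisher economics under universal gold OA, using the
# ballpark figures from this post (assumptions, not measured data).
articles_per_year = 2_000_000   # approx. papers published annually
apc = 3_000                     # assumed average APC in US$
cost_per_article = 300          # generous actual publishing cost in US$

revenue = articles_per_year * apc
profit = revenue - articles_per_year * cost_per_article

print(f"revenue: US${revenue / 1e9:.1f}bn")      # -> US$6.0bn
print(f"profit margin: {profit / revenue:.0%}")  # -> 90%
```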

Taking the perspective of a publisher, this scenario also entails that the publishers have left around US$4bn in potential profits on the table. After all, today’s figures show that the market is worth US$10bn even when nobody but a few libraries have access to the scholarly literature. In the future scenario, everyone has access. Undoubtedly, this will be hailed as great progress by everyone; after all, it is being used as the major reason for performing this switch right now. Obviously, an increase in profit margins from 40% to 90% is seen as a small price to pay for open access, isn’t it? Wouldn’t it be the fiduciary duty of corporate publishers to regain the lost US$4bn? After all, why should they receive less money for a better service? Obviously, neither their customers (we scientists and our librarians) nor the public minded an increase in profit from 40% to 90%. Why should they oppose an increase from 90% to 95% or to 99.9%? After all, if a lesser service (subscription) was able to extract US$10bn, shouldn’t a better service (open access) be able to extract US$12bn or 15bn from the public purse?

One might argue that this forecast is absurd because journals compete with each other for authors! This argument forgets that we are not free to choose where we publish: only publications in high-ranking journals will secure your job in science. These journals are the most selective of all journals; in the extreme cases, they publish only 8% of all submitted articles. This is an expensive practice, as even the rejected articles generate some costs. These journals are on record stating that they would have to charge around US$50,000 per article in APCs to maintain current profits. It is hence not surprising that, also among open access journals, APCs correlate with their standing in the rankings and hence with their selectivity.

It is reasonable to assume that authors in the future scenario will do the same thing they are doing now: compete not for the most non-selective journals (i.e., the cheapest), but for the most selective ones (i.e., the most expensive). Why should that change, only because now everybody is free to read the articles? The new publishing model would even exacerbate this pernicious tendency rather than mitigate it. After all, it is already (wrongly) perceived that the selective journals publish the best science. If APCs become predictors of selectivity because selectivity is expensive, nobody will want to publish in a journal with low or no APCs, as this will carry the stigma of not being able to get published in the expensive/selective journals.

This, to me as a non-economist, seems to mirror the dynamics of any other market: the Tata is no competition for the Rolls-Royce, not even the potential competition from Lamborghini is bringing the prices of a Ferrari down to those of a Tata, nor is Moët & Chandon bringing down the prices of Dom Pérignon. On the contrary, in a world where only Rolls-Royce and Dom Pérignon count, publications in journals at the Tata or even the Moët & Chandon level will simply be ignored. Moreover, if libraries keep paying the APCs, the ones who so desperately want the Rolls-Royce don’t even have to pay the bill. Doesn’t this mean that any publisher who does not shoot for at least US$5k in average APCs (better: more) fails to fulfill their fiduciary duty in not one but two ways: not only will they lose out on potential profit due to their low APCs, they will also lose market share and prestige? Thus, in this new scenario, if anything, the incentives for price hikes across the board are even higher than they are today. Isn’t this scenario a perfect storm for runaway hyperinflation? Do unregulated markets without a luxury segment even exist?

One might then fall back on the argument that at least Fiat will compete with Peugeot on APCs, but that forgets that a physicist cannot publish their work in a biology journal. Then one might argue that mega-journals publish all research, but given the constant consolidation processes in unregulated markets (a process alive and well in the publishing market, too, as was just reported), there soon won’t be many of these around any more, such that the remaining ones are, again, free to increase prices. No matter how I turn the arguments around, I only see incentives for price hikes that will render the new system just as unsustainable as the current one, only worse: failure to pay leads to a failure to make your discovery public, and no #icanhazpdf can mitigate that. Again, as before, this kind of scenario can only be worse than what we have now.

tl;dr: The incentives for price hikes in a universal gold open access economy will be even stronger than they are today.

Posted on June 19, 2015 at 14:19 38 Comments

Are more retractions due to more scrutiny?

In: science politics • Tags: fraud, impact factor, journal rank, methodology, retractions

In the last “Science Weekly” podcast from the Guardian, the topic was retractions. At about 20:29 into the episode, Hannah Devlin asked whether the reason ‘top’ journals retract more articles might be increased scrutiny there.

The underlying assumption is very reasonable, as many more eyes see each paper in such journals, and the motivation to shoot down such high-profile papers might also be higher. However, the question has actually been addressed in the scientific literature, and the data don’t seem to support this assumption. For one, this figure shows that plenty of retractions come from lower-ranking journals, even though journals that retract many papers are few and far between: in absolute terms, there are many more retractions in low-ranking journals than in high-ranking ones, while among the high-ranking journals a much larger proportion retracts many papers. Because this analysis only captures absolute numbers, the data are not conclusive, but they are suggestive that scrutiny is not really all that much higher for the ‘top’ journals than anywhere else.

Another reason why scrutiny might be assumed to be higher in ‘top’ journals is that readership is higher, leading to more potential for error detection. However, the same reasoning applies to citations, not only retractions. Moreover, citing a ‘top’ paper is not only easier than forcing a retraction, it also benefits your own research by elevating the perceived importance of your field. Thus, if readership had any such influence, one would expect journal rank to correlate better with citations than with retractions. The opposite is the case: the coefficient of determination for citations and journal rank currently lies around 0.2, while it comes to just under 0.8 for retractions and journal rank (Fig. 3 and Fig. 1D, respectively, here). So while there may be a small effect of scrutiny/motivation, the evidence suggests that it is a relatively minor effect, if it exists at all.

Conversely, there is quite solid evidence that the methodology in ‘top’ journals is not any better than in other journals when analyzing non-retracted articles. In fact, there are studies showing that the methodology is actually worse in ‘top’ journals, while we have not found a single study suggesting that methodology gets better with journal rank. Our article reviews these studies. Importantly, these studies all concern non-retracted papers, i.e., the other 99.95% of the literature.

In conclusion, the evidence suggests scrutiny is likely a negligible factor in the correlation of journal rank and retractions, while increased incidence of fraud and lower methodological standards can be shown.

I know Ivan Oransky, who was a guest on the show, is aware of these data, so it may have been a bit unfortunate that Philip Campbell (editor-in-chief at Nature) got to answer this question before Ivan had a chance to lay these data out. In fact, Nature is also aware of these data and has twice refused to publish them. The first time was when we submitted our manuscript and it was turned down with the statement that Nature had already published articles stating that Nature publishes the worst science. The second time was when Cori Lok interviewed Jon Tennant and he told her about the data, but Cori failed to include this part of the interview. There is thus a record of Nature, very understandably, avoiding any admission of its failure to select for solid science. Philip Campbell’s answer to the question in the podcast may have been at least the third time.

While Philip Campbell did admit they don’t do enough fraud detection (it is too rare), the issue of reliability in science goes far beyond fraud, so successfully derailing the question in this direction served his company quite well. Clearly, he’s a clever guy and did not come unprepared.

Finally, one may ask: why do the ‘top’ journals publish unreliable science?

Probably the most important factor is that they attract “too good to be true” results but only apply “peer-review light”: rejection rates drop dramatically from 92% to a mere 60% once your manuscript makes it past the editors; that’s a 5-fold increase in your publication chances (Noah Gray and Henry Gee, pers. comm.). Why is that so? First, the reviewers know the editor wants to publish this paper. Second, they have an automatic conflict of interest: a Nature paper in their field increases the visibility of their field, they may even be cited in the paper, or plan to cite it in their upcoming grant application.
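For those who want the 5-fold figure spelled out, here is the one-line calculation behind it, using only the rejection rates quoted above:

```python
# Acceptance chances before vs. after the editorial desk, as quoted above.
overall_acceptance = 1 - 0.92      # 8% of all submissions get published
post_editor_acceptance = 1 - 0.60  # 40% acceptance once past the editors

print(post_editor_acceptance / overall_acceptance)  # -> 5.0
```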

On average, this entire model is just a recipe for disaster, and more policing won’t fix it. By using it, we have been setting ourselves up for the exponential rise in retractions seen in Fig. 1a of our paper.

So, in the not too unlikely case that the topic of unreliable science should come up again, anyone can now cite the actual, peer-reviewed data we have at hand, such that editors-in-chief may have a harder time derailing the discussion and obfuscating the issues in the future.

tl;dr: The data suggest a combination of three factors leading to more retractions in ‘top’ journals: 1. worse methodological quality; 2. higher incidence of fraud; 3. peer-review light. One would intuitively expect increased readership/scrutiny to play some role, but there is currently no evidence for it and some circumstantial evidence against it.

Posted on June 18, 2015 at 14:38 6 Comments

What goes into making a scientific manuscript public?

In: science politics • Tags: publishing

Currently, our libraries are paying about US$5000 per peer-reviewed subscription article. What is more difficult to find out is where all that money goes: which steps are required to make an accepted manuscript public? Because of its high-throughput (about 1200 journals with a total of about half a million published articles), low-cost, open access publishing model, I contacted SciELO and asked how they achieve such low costs: figures below US$100 per article, a fraction of what commercial publishers charge. Abel Packer, one of the founders of SciELO, was so kind as to answer all my questions.

SciELO receives most of its articles from the participating journals in JATS XML and PDF. It takes that version and publishes it online, makes sure it is indexed in the relevant places (PubMed, Web of Science, etc.) and archives it for long-term accessibility. These services cost about US$67 per article (covered by the participating governments, not the authors). Other digital services such as a DOI, plagiarism checks, altmetrics, etc. incur another US$4. So bare-necessities publishing costs just over US$70 per article at SciELO.

However, this comparison is not quite fair, as only a few publishers receive their manuscripts in XML. For journals without an authoring environment such as PeerJ‘s “Paper Now”, in which you can write your paper and submit it in the right format, there will be costs associated with editors who handle manuscript submissions and peer-review, as well as with generating all kinds of formats (XML, PDF, ePub, etc., including proofs going back and forth between authors and staff) from the submitted files (LaTeX, Word, etc.). At SciELO (and its participating journals), these services, if chosen by the authors, average around another US$130. Taken together, the complete package from, say, MS Word submission to published article can thus be had for a grand total of just over US$200. If/once authors use modern authoring systems, where they write collaboratively on a manuscript that is formatted in, e.g., native XML, publication costs drop to below US$80. On the other hand, if SciELO authors opt for English language services, submission services, an enhanced PDF version, a social media campaign, and/or data management services (all offered by SciELO for a fee), a cozy all-inclusive package will cost them almost US$600, still a far cry from the US$5k commercial publishers charge for their subscription articles.

If even the most comfortable publishing option with all the bells and whistles can be had for just under US$600, why do current publishers succeed in persuading authors and institutions to pay author processing charges (APCs) averaging around €2000? There is an easy answer to that: currently, each subscription article generates US$5k in revenue for the publisher. As such, publishers will strive to approach this figure with their APCs to fight a drop in their revenues. If that is the case, one might ask, why are the average figures not closer to US$5k? One reason may indeed be competition from new publishers offering APCs dramatically below that mark. However, I think there may also be another, additional reason: the numbers above appear to corroborate a conclusion from last July, that subscription paywalls may indeed incur a cost in the neighborhood of US$3000 per article. Dropping these US$3k in paywall costs from the per-article revenue target of US$5k leads approximately to the average APC which these publishers were able to charge the institutions studied in the Schimmer et al. white paper. In such a scenario, publishers would keep their per-article profit of just under US$2k roughly constant.
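Put as a sketch (keeping in mind that the US$3k paywall overhead is this post's speculation, not an audited figure):

```python
# Speculative decomposition of per-article subscription revenue, using the
# figures discussed in this post (the paywall estimate is a guess, not data).
subscription_revenue = 5_000   # US$ per subscription article today
paywall_overhead = 3_000       # US$, estimated cost of keeping readers out
publishing_cost = 200          # US$, SciELO-style full-service publishing

profit = subscription_revenue - paywall_overhead - publishing_cost
apc_keeping_profit_constant = profit + publishing_cost

print(profit, apc_keeping_profit_constant)  # -> 1800 2000, i.e. ~ the 2k APC
```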

This then means, of course, that the only thing the proposed switch from subscriptions to APCs would do is increase the profit margins of corporate publishers from currently just shy of 40% to about 90% – any publisher’s wet dream. As I’ve outlined before, this is probably the only way to make the abysmal status quo even worse, as it wouldn’t fix any of the other problems we have, besides access, and would exclude the scholarly poor from publishing in the venues that secure a job. Unregulated, universal gold open access has to be avoided by any means necessary.

Posted on June 11, 2015 at 14:34 13 Comments

Is this supposed to be the best Elsevier can muster?

In: science politics • Tags: copyright, Elsevier, publishing

Until today, I was quite proud of myself for not caving in to SIWOTI syndrome like Mike Taylor did. And then I read his post and caved in as well.

What gets us so riled up is Elsevier’s latest in a long list of demonstrations of what they think of the intellectual capacities of their customers. It’s precisely because it is only one in a long row that I initially didn’t feel like commenting. However, there were so many points in this article worth rebutting, and Mike only selected a few for his comment, that I felt compelled to pick some of my own for dissection.

This combination of perspectives has led me, I believe, to a deeper understanding of the importance and limitations of copyright law

Great! An expert on copyright law and a creative mind, Shirley a worthy opponent for a mere scientist who understands next to nothing of copyright. As we say in Germany: many foes, much honor (I know, don’t ask!).

The STM journal publishing sector is constantly adjusting to find the right balance between researcher needs and the journal business model, as refracted through copyright.

I think that’s an accurate assessment, the right balance of course being to continuously expand copyright to rob scientist authors of any rights to their articles and to allow publishers to charge authors every time they use their own figures in class: the researcher needs to use their figures in teaching, and the journal business model needs to beat drug smuggling, arms trade and human trafficking in profit margins. Thus, the right balance from this perspective is to charge institutions for every additional use of the material they already paid for twice, absolutely. However, while maximizing profits may be the fiduciary duty of the corporate publisher, neither science nor society cares about the existence of corporations. On the contrary, openly parasitic corporations will be fought, so it’s difficult to see how alluding to the parasitic business model of his business (rather than, e.g., trying to hide it) is in the interest of the author. One probably has to have the creative mind of a poet and the honed analytic skills of a lawyer to see the advantage here.

Authors are looking for visibility and want to share their results quickly with their colleagues and others in their institutions or communities.

That’s also correct: it’s the reason there is a boycott of Elsevier and why the open access movement exists. It’s not clear why the author of this article is using an argument against copyright in particular, and against the entire status quo in academic publishing in general, in an article purporting to support copyright. Again, I’m probably neither creative enough nor well enough versed in law to understand the apparent own goals of this author.

Most journals have a long tradition of supporting pre-print posting and enabling “scholarly sharing” by authors.

I’m sure some journals have that tradition, but Elsevier’s journals are not among them. On digital ‘paper’, of course, Elsevier supports pre-prints and ‘green’ archiving, but if that isn’t just lip service, why pay two US lawmakers US$40,000 to make precisely this “scholarly sharing” (note the scare quotes!) illegal? Or is the author insinuating that the legal counsel of Elsevier had no role in drafting the RWA (the Research Works Act)?

In fact, last week Elsevier released its own updated sharing policies

Wait a minute – Elsevier has released a set of policies which specify how scientists are allowed to talk about their research? How on earth is this supposed to convince anyone that copyright is good for science and scientists if a scientist first has to seek the approval of a commercial publisher before they start talking about their research? I went and read these policies; they essentially say: “break our copyright by tweeting your own article and we’ll sue you”. I guess I really lack the creativity and expertise to understand how this is in support of copyright.

I believe that copyright is fundamental to creativity and innovation because without it the mechanisms for funding and financial support for authors and publishing become overly dependent on societal largesse.

Given my lack of poetry and legal competence, I really have to think hard now. He’s writing that we as scientist authors shouldn’t be dependent on “societal largesse”. Science is, for the most part, a public enterprise. This means my salary (without any copyright) is paid by “societal largesse”. 80% of Elsevier’s subscription income comes from public institutions, so this author suckles 80% of his income from the teat of “societal largesse”. So he’s arguing that copyright helped prevent his own salary from going 100% societal? Or is he arguing that I should lose all my salary? If depending on “societal largesse” really is to be avoided, why doesn’t he give 80% of his salary (which is probably more money than 100% of my salary) back to the taxpayer, perhaps by contributing to the open access funds of a public institution of his choice? Going after the salaries of your customers without displaying any willingness to give up your own dependence on “societal largesse” must be a strategy that requires a lot of creativity and legal competence, as from my unimaginative and incompetent perspective that strategy just backfired mightily.

The alternatives to a copyright-based market for published works and other creative works are based on near-medieval concepts of patronage, government subsidy,

“Societal largesse” and “government subsidy” are what loom without copyright? I thought the only thing that kept Elsevier alive was government subsidies enabled by societal largesse? Last I looked, open access publishing would cost something like US$100-200 per article if we implemented it right. Elsevier, on average, charges about US$5,000 per subscription article. So, on average, for each subscription article, Elsevier is receiving at least US$4,800 in government subsidies (which amounts to 96% of the total payment), solely to artificially keep this corporation alive. If the author were so keen on getting rid of government subsidies, why is he asking his customers to support a business practice that only exists because its income is 96% government subsidies? Indeed, I’m neither intelligent nor competent enough to understand this defense of copyright. To me, this article is an attack on the entire status quo.

I’m running out of time, and honestly, whatever comes next would be hard-pressed to change the impression I have gotten from reading this far. Clearly, Elsevier thinks that their scientist customers are know-nothing hobos with an insufficient number of neurons for a synapse. Either they are correct, as I for the life of me cannot find anything in support of copyright in this article, or their big shots suffer from a severe case of Dunning-Kruger syndrome.

Posted on May 9, 2015 at 13:21 20 Comments

A study justifying waste of tax-funds?

In: science politics • Tags: open access, publishing

Open Access (OA) pioneer and OA journal eLife founding member and sponsor, the Max Planck Society just released a white paper (PDF) analyzing open access costs in various countries and institutions and comparing them to subscription costs. Such studies are fundamental prerequisites for evidence-based policies and informed decisions on how to proceed with bitterly needed reforms. The authors confirm the currently most often cited ballpark figures of a worldwide annual academic publishing volume of around US$10b, averaging around US$5000 for each of the approximately 2 million papers published every year. This confirmation from sources different from those usually cited is very valuable and solidifies our knowledge of the kind of funds available in the system.

The authors detail that various institutions in various countries spend significantly less on their current author processing charges (APCs) for publishing in open access journals, around US$2000-3000 per article, than on current subscriptions. They conclude from these data that a conversion from a subscription to an author-pays model would be at least cost-neutral (if not a significant cost-saver) and would keep the publishing industry alive.

I find these statements quite startling for a number of reasons:

  1. Over 15 years ago, the US government (via the NIH) helped Brazil develop an incredibly successful publishing model, SciELO. It has since spread, with many other countries all over the globe joining. Across its now roughly 900 journals, SciELO publishes peer-reviewed papers, fully open access, at an average cost of US$90 per article. Recently, these figures have been confirmed with numbers from the NIH’s open access repository PubMed Central, where such costs come to lie around US$50 per article. Thus, publishing fully open access with all the features known from commercial publishers clocks in at below US$100 per article. This we already knew before this study. Why was a study needed that shows we can also get such universal open access at up to 100 times the price of PMC/SciELO? Is the survival of the publishing industry really worth up to US$9.9b in subsidies every year? What value do publishers add that could possibly be worth an annual bill of 9.9 billion in virtually any currency?
  2. The authors emphasize that “Whether calculated as mean or median, however, the average APC index will never be dictated by the high-end values.” This may of course be financially relevant for the taxpayer in the short term, but in the long term the taxpayer will also be interested in whether the science they fund is reliable: is publicly funded science a good bang for the buck? If we were only to convert to this ‘gold’ OA model and left everything else virtually unchanged, the situation for the reliability and hence credibility of publicly funded science would be even worse than it is today. As outlined in detail elsewhere, high-ranking journals argue that their APCs would come to lie around US$50,000 per article. While this may indeed not change the average cost to the taxpayer across the currently more than 30,000 journals, it does mean that, in addition to knowing the professional editor and, if needed, faking your data, you would then also have to be rich (or work at a rich institution) in order to publish in a venue that helps secure a job in science. Given that these journals publish the least reliable science, this is the one scenario I could imagine that would be even worse for science than the status quo.
  3. The authors also do not mention that the large majority of open access journals (including the Max Planck Society’s very own eLife) do not charge any APCs at all (an issue already raised by Peter Suber). It is not clear from the study whether articles published in these journals have been counted at all. If not, their figures overestimate the actual costs by a significant factor.

Thus, as I see it, this is a study that at best serves no real purpose and at worst constitutes a disservice to science by suggesting such a transition would even be desirable, when it clearly is not. I have asked one of the co-authors of the study, Kai Geschuhn, to comment on my criticisms. You can find her reply below; I’ll leave it uncommented:

Like it or not, offsetting subscription costs against publication fees still isn’t the common understanding of how to finance open access. With this study, we didn’t want to raise the question whether scientific publishing should cost US$50, US$100 or US$5,000 per article. The aim rather was to show that the transition to open access is feasible already. The figures presented in the paper relate current subscription costs to scientific article outputs on different levels (global, national, and institutional) in order to show that there is enough money in the system to finance all of these articles. While this is obvious to you, it is often not to libraries which usually expect the open access transition to become even more expensive. This misconception is mostly due to the assumption, that the total number of publications from an institution or a country would have to be financed. We suggest calculating with articles from corresponding authors only, which usually leads to a reduction of up to 50% of the total amount.
After ten years of debate, we finally need to agree upon a realizable first step. We believe that offsetting budgets actually is key to this so we have to start the calculation.

Posted on April 29, 2015 at 17:14 17 Comments

What should a modern scientific infrastructure look like?

In: science politics • Tags: infrastructure, open science, peer-review, publishing

For ages I have been planning to collect some of the main aspects I would like to see improved in an upgrade to the disaster we so euphemistically call an academic publishing system. In this post I’ll try to briefly sketch some of the main issues, from several different perspectives.

As a reader:

I’d like to get a newspaper each morning that tells me about the latest developments, both in terms of general science news (a.k.a. gossip) and inside my scientific fields of interest. For the past 5+ years, my paper.li has been doing a pretty decent job of collecting the gossip, but for the primary literature relevant to my field, such a technology is sorely missing. I’d like to know which papers my colleagues are reading, citing and recommending the most. Such a service would also learn from what I click on, what I recommend and what I cite, to assist me in my choices. Some of these aspects are starting to be addressed by companies such as F1000 or Google Scholar, but there is no comprehensive service that covers all the literature with all the bells and whistles in a single place. We have started to address this by developing an open source RSS reader (a feedly clone) with plug-in functionality to allow for all the different features, but development has stalled there for a while now. So far, the alpha version can sort and filter feeds according to certain keywords and display a page with the most tweeted links, so it’s already better than feedly in that respect, but it is still alpha software. All of the functionalities I want have already been developed somewhere, so we’d only need to leverage them for the scientific literature.
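The keyword filtering at the core of such a reader is no magic; a minimal sketch of the idea, assuming the feedparser library and a hypothetical journal feed URL (this is not our actual reader's code), might look like this:

```python
# Minimal sketch of a keyword-filtered literature feed; the URL is a
# placeholder, not a real journal feed.
import feedparser

KEYWORDS = {"drosophila", "operant", "motor learning"}

feed = feedparser.parse("https://example.org/journal/rss")
for entry in feed.entries:
    text = (entry.title + " " + entry.get("summary", "")).lower()
    if any(keyword in text for keyword in KEYWORDS):
        print(entry.title, "->", entry.link)  # candidate for breakfast reading
```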

In such a learning service, it would also be of lesser importance whether work was traditionally peer-reviewed or not: I could simply adjust for which areas I’d like to see only peer-reviewed research, and which publications are close enough that I want to see them before peer-review – I might want to review them myself. In this case, peer-review is as important as I, as a reader, want to make it. Further diminishing the role of traditional peer-review are additional layers of selection and filtering I could implement. For instance, I would be able to select fields where I only want recommended literature to be shown, or cited literature, or only reviews, not primary research. And so forth: there would be many layers of filtering/sorting which I could use flexibly to only see relevant research for breakfast.

I admit it, I’m a fan of Lens. This is an excellent example of how scientific content should be displayed on a screen. With a modern infrastructure, we would get to choose the way we would like to read; Lens would not be the only option besides emulating paper. Especially when thorough reading and critical thinking are required, such as during the review of manuscripts or grant proposals, ease of reading and navigating the document is key to an efficient review process. In the environment we should already have today, reviewers would be able to pick the way most efficient for them to thoroughly fine-comb a document.

We would also be able to click on “experiments were performed as previously described” and then directly read the exact descriptions of how these experiments were done, because we would finally have implemented a technology from 1968: hyperlinks. Fully implementing hyperlinks would also provide the possibility to use annotations to the literature: such annotations, placed while reading, can later be used as anchors for citations. Obviously, we’d be using a citation typology in order to make the kind of citation we intended (e.g., affirmative or dismissive) machine-readable.
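As a sketch of what such a machine-readable, typed citation could look like (field names and the citing DOI are illustrative assumptions, loosely inspired by the CiTO citation typing ontology):

```python
# Hypothetical record for one typed, anchored citation; the citing DOI and
# the field names are made up for illustration.
typed_citation = {
    "citing": "10.1234/hypothetical.2015.001",
    "cited": "10.1242/jeb.006924",
    "type": "supports",                # e.g. "supports", "extends", "disputes"
    "anchor": "methods/para-3/sent-2"  # annotation placed while reading
}
```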

Of course, I would also be able to double-click on any figure to have a look at other aspects of the data, e.g. different intervals, different intersections, different sub-plots. I’d be able to use the raw data associated with the publication to plot virtually any graph from the data, not just those the authors offer me as a static image, as today. How can this be done? This brings me to the next aspect:

As an author:

As an author, I want my data to be taken care of by my institution: I want to install their client to make sure every piece of data I put on my ‘data’ drive will automatically be placed in a data repository with unique identifiers. The default setting for my repository may be open with a CC0 license, or set manually to any level of secrecy I am allowed, or intend, to apply. The same ought to be a matter of course for the software we write. In this day and age, institutions should provide an infrastructure that makes version-controlled software development and publishing seamless and effortless. And yet we, the scientists, have to ask our funders for money to implement such technology. Likewise for authoring: we need online authoring tools that can handle and version-control documents edited simultaneously by multiple authors, including drag-and-drop reference management. GDocs have been around for a decade if not more, and FidusWriter or Authorea are pioneering this field for scientific writing, but we should already have this at our institutions by default today (with local copies, obviously).

If we had such GitHub-like infrastructure, a figshare/DropBox combo that took care of our data and an Authorea/FidusWriter authoring environment, we could routinely do what we have done as a proof of principle in our latest paper: when you write the paper, you don’t have to artificially design any actual figures any more. The authors just insert the code that calls the software to evaluate the linked, open data. This allows readers not only to generate their own figures from different perspectives on our data (as in Fig. 3 of our paper); they can also download all the code and data without asking us, and without us having to jump through any extra hoops to make our code/data available – it all happens on the invisible back-end. Had we been able to use Authorea/FidusWriter, submission would even have been just a single click. I get furious every time I estimate the amount of time and effort I could save if this infrastructure were in place today, as it should be.
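In such a workflow, a "figure" in the manuscript is nothing but a short script over the linked open data, roughly along these lines (the URL and column names are placeholders, not our actual dataset):

```python
# Sketch of a figure-as-code block embedded in a manuscript; readers can
# re-run it, change it, or plot the data differently. Dataset is hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("https://example.org/linked-open-data.csv")
data.groupby("genotype")["performance"].mean().plot.bar()
plt.ylabel("mean performance index")
plt.show()
```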

Another thing one could do with such an infrastructure would be to open up certain datasets (and hence figures) to contributions from other labs, e.g. to let others compare their own results with yours. We demonstrated this “hey look what we found, how does that look for you?” kind of functionality in Fig. 4.

More or less automated semantic tagging would allow us to leverage the full potential of semantic web technology in order to facilitate some of the features I haven’t yet been able to imagine.

As a reviewer:

A reviewer is a special kind of reader, quite obviously. As such, all the above-mentioned features would also benefit the reviewer. However, there is one feature that is special to the reviewer: direct, if need be anonymized, discussions with the author of a manuscript or proposal under review. Of course, this discussion would be made available with the final version of the paper, where appropriate. In this discussion, the reviewers (invited, suggested and voluntary) and authors would be working on a fully annotated version of the manuscript, significantly reducing the time required for reviewing and revising manuscripts. Editors would only ever come in to help solve points of contention that cannot be resolved by reviewers/authors themselves. Some publishers already implement such discussions to some extent, but none that I know of use an authoring environment, as would be the rational solution.

As an evaluator:

There is no way around reading publications in order to evaluate the work of scientists. There are no shortcuts and no substitutes. Reading publications is a necessary condition, a conditio sine qua non, for any such evaluation. However, it is a sufficient criterion only in the best of all worlds. Only in a world without bias, misogyny, nepotism, greed, envy and other human shortcomings would reading publications be sufficient for evaluating scientific work. Unfortunately, some may say, scientists are humans and not immune to human shortcomings. Therefore (and because the genie is out of the bottle), we need to supplement expert judgment with other criteria. These criteria, of course, need to be vetted by the scientific method. The current method of ranking journals, and then ranking scientists according to where they know the editors of such journals, is both anti-scientific and counter-productive.

If we had the fully functional infrastructure possible with today’s technology, we’d be able to collect data on each scientist with regard to their productivity (data, code, publications, reviews), popularity (downloads, media presence, citations, recommendations), teaching (hours, topics, teaching material) or service (committees, administration, development). To the extent that this is (semi-)automatically possible, one could even collect data about the methodological soundness of the research. If we, as a scientific community, hadn’t spent the last 20 years in a digital cave, we’d be discussing the ethics of collecting such data, how these data are or are not correlated with one another, the degree of predictive power of some of these data for future research success, and other such matters – and not how we might one day actually arrive in the 21st century.

—

All of the functionalities mentioned above (and many more I haven’t mentioned) are already being tried here and there to various degrees and in various combinations. However, as standalone products none of them are really going to ever be more than just interesting ideas, proofs of concept and demonstrations. What is required is an integrated, federated backbone infrastructure with a central set of standards, into which such functionalities can be incorporated as plug-ins (or ‘apps’). What we need for this infrastructure is a set of open, evolvable rules or standards, analogous to TCP/IP, HTTP and HTML, which can be used to leverage key technologies for the entire community at the point of development – and not after decades of struggle against corporate interests, legal constraints or mere incompetence.

It is also clear that this infrastructure needs to be built robustly. Such a core infrastructure cannot rely on project funds or depend on the whims of individual organizations, even if these are governments. In fact, the ideal case would be a solution similar or analogous to BitTorrent technology: a world-wide, shared infrastructure where 100% of the content remains accessible and operational even when 2/3 of the nodes go offline. Besides protecting scholarly knowledge against funding issues and the fickleness of individual organizations, such a back-end design could also protect against catastrophic regional devastations, be they natural or human-made.
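How much redundancy would that take? Under the idealized assumption that each item is replicated on r independently failing nodes, the chance of losing a given item when a fraction f of nodes goes offline is f^r, so a modest number of copies already goes a long way:

```python
# Idealized availability math: item-loss probability is f**r if each item
# sits on r independent nodes and a fraction f of all nodes goes offline.
f = 2 / 3  # the worst case from the text: two thirds of the network gone
for r in (3, 6, 12):
    print(f"{r} copies -> {f**r:.1%} chance of losing a given item")
# 3 copies -> 29.6%, 6 copies -> 8.8%, 12 copies -> 0.8%
```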

Such technology, I think that much is clear and noncontroversial, is readily available. The money is currently locked up in subscription funds, but cancellations on a massive scale are feasible without disrupting access much and would free up just over US$9b annually – more than enough to build this infrastructure within a very short timeframe. Thus, with money and technology readily available, what’s keeping the scientific community from letting go of antiquated journal technology and embracing a modern scholarly communication infrastructure? I’ve mentioned human shortcomings above. Perhaps it is also an all too human shortcoming to see the obstacles on the way to such a modern infrastructure, rather than its potential:

[Cartoon: workers pushing a cart on square wheels, too “busy” to try round ones]

Or, as one could also put it, more scientifically: “The square traversal process has been the foundation of scholarly communication for 400 years!”

@brembs "The square traversal process has been the foundation of scholarly communication for 400 years."
— Ian McCullough (@bookscout), April 27, 2015

UPDATE (11/04/2017): There has been a recent suggestion as to how such an infrastructure may be implemented conceptually. It still contains the notion of ‘journals’, but the layered structure explains quite well how this might work.

Posted on April 27, 2015 at 13:50 211 Comments

If only all science were this reproducible

In: own data • Tags: Drosophila, foraging, reproducibility

For our course this year I was planning a standard neurogenetic experiment. I had never done this experiment in a course before; just two weeks ago I tried it once myself, with an N=1. The students would get two groups of Drosophila fruit fly larvae, rovers and sitters (they wouldn’t know which was which). About ten larvae from each group would be placed on one of two yeast patches on an agar plate. After 20 minutes, they would count the number of larvae in the first patch, the second patch and in none of the patches, i.e., elsewhere on the plate:

Classic experiment designed by Marla Sokolowski, the discoverer of the rover/sitter polymorphism

In the example above, the scores would be 3, 3 and 4 for the rovers and 10, 0, 0 for the sitters.
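In code, the score each student computes per strain is simply the fraction of larvae still sitting on the first patch; a tiny sketch using the example counts above:

```python
# Fraction of larvae on the first patch, from the example counts above
# (patch 1, patch 2, elsewhere on the plate).
counts = {"rover": (3, 3, 4), "sitter": (10, 0, 0)}
for strain, (patch1, patch2, elsewhere) in counts.items():
    total = patch1 + patch2 + elsewhere
    print(strain, f"{patch1 / total:.0%} on first patch")
# -> rover 30%, sitter 100%
```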

Yesterday morning, before the course, when I was collecting the vials for the experiment, I saw that I had an additional vial of each stock where I had forgotten to remove the parent flies, such that the vials were completely overcrowded. Remembering that my last slide in the lecture for the course was a result from Kaun et al. 2007, where the authors had discovered that the behavior of the larvae was dependent on food quality, I felt we should try and see if this overcrowding had already deteriorated the food enough to show the effect Kaun et al. had described: that rovers became more like sitters and didn’t leave the food patch they were placed in any more.

So I grabbed the two overcrowded vials and decided to try something that usually would be a recipe for disaster: in an experiment never before tried on students, to alter the conditions such that I wouldn’t know the outcome myself. The plan was that, in case we didn’t see a strong difference between the two strains, I’d go and fetch the two vials I had prepared with the correct density of larvae and hence proper food quality. With these latter flies, at least, we should see the difference between rovers and sitters. Here are the data the 12 students collected on the blackboard from the experiments done on a single afternoon:

Photograph of the blackboard with all the data from the 12 students.

One can already see from the numbers alone that the difference between the strains is very clear in the low-density case (blue), while it is less pronounced for the overcrowded larvae (white). But because the result so clearly matches the results from Kaun et al., I’ve plotted the data and compared them side by side.

First, the data from Kaun et al., which show that with low food quality (15%, corresponding to our high-density larvae), rover and sitter larvae consume about the same amount of food, and both consume more than with perfect-quality food (100%, corresponding to our low-density conditions):

Data from Kaun et al. 2007 on the effect of food quality on food intake in rover and sitter fruit flies.

As we didn’t measure food intake but just the location of the larvae, we take the percentage of larvae on the first patch as a measure of staying and feeding, as opposed to leaving the patch and searching for another one. That way, we get an all but identical graph:

Data from 12 students on a single afternoon

This replication provided an opportunity to emphasize two general points. First, it is highly unusual to reproduce previously published data with such ease and to such an astonishing degree. Second, I reiterated what I had already said during the lecture: there is no genetic determinism. Even humble fruit fly larvae show that different genotypes do not necessarily mean different fates; it depends strongly on the environment whether any genotypic differences manifest themselves in phenotypic differences.


Kaun, K., Riedl, C., Chakaborty-Chatterjee, M., Belay, A., Douglas, S., Gibbs, A., & Sokolowski, M. (2007). Natural variation in food acquisition mediated via a Drosophila cGMP-dependent protein kinase. Journal of Experimental Biology, 210(20), 3547-3558. DOI: 10.1242/jeb.006924

Posted on April 21, 2015 at 14:36 7 Comments

Nature reviewers endorse hype

In: science politics • Tags: GlamMagz, publishing

In a paper published in Nature Neuroscience now over a year ago, the authors claimed to have discovered that something long thought to be a bug is actually a very surprising feature. In my blog post covering the hype in that paper and in an accompanying news-type article, I wrote that today it has become ever rarer for scientists to admit to standing on the shoulders of giants, as we are not rewarded for referring to giants, but only for being giants ourselves. The blog post triggered some email correspondence among a number of colleagues in or close to this field, at the end of which stood the decision that two of us, Nicolas Stergiou and I, would contact the journal and inform them of the missing references in the two articles.

The first reply was that we ought to write a ‘letter to the editor’ instead of informally notifying the journal that there were some crucial references missing. So we sat down and wrote our letter (submitted February 10, 2015):

Dear Sir,

HHMI Investigator Michael Eisen recently wrote that we ought to come down hard not only on people who cheat, but also on the far greater number of people who overhype their results. He suggests that hyping should be punished just like fraud. This letter to the editor represents such an effort, as we would like to report on such a case of hyping.

“Standing on the shoulders of giants” is what scientists say to acknowledge the work they are building on. It is a statement of humility and mostly accompanied by citations to the primary literature preceding the current work. In today’s competitive scientific enterprise, however, such humility appears completely misplaced. Instead, what many assume to be required in the struggle to survive is to convince everyone that they are the giant, the genius, the prodigy who is deserving of the research funds, the next position, tenure. The Nature Neuroscience article “Temporal structure of motor variability is dynamically regulated and predicts motor learning ability” by Wu et al. (doi:10.1038/nn.3616) with its accompanying news-type article “Motor variability is not noise, but grist for the learning mill” by Herzfeld and Shadmehr (doi:10.1038/nn.3633) from earlier this year clearly fall within this category. Both articles claim that the researchers have made the game-changing discovery that something long thought to be a bug in our movement system is actually a spectacular feature. It is argued that this discovery is such a huge surprise, because nobody in their right mind would have ever thought this “unwanted characteristic” to actually serve some purpose.

The problem with this line of argument is that probably most people in the field thought it should be obvious, even to be expected – and not surprising at all. Skinner is largely credited with the analogy of operant conditioning and evolution. This analogy entails that reward and punishment act on behaviors like selection is acting on mutations in evolution: an animal behaves variably and encounters a reward after it initiated a particular action. This reward will make the action now more likely to occur in the future, just as selection will make certain alleles more frequent in a population. Already in 1981, Skinner called this “Selection by Consequences“ (Science Vol. 213 no. 4507 pp. 501-504, DOI: 10.1126/science.7244649). Skinner’s analogy sparked wide interest, e.g. an entire journal issue (Behavioral and Brain Sciences 7(04), 1984), which later appeared in book form (The Selection of Behavior: The Operant Behaviorism of B. F. Skinner: Comments and Consequences. A. Charles Catania, Stevan R. Harnad, Cambridge University Press). Clearly, the idea that reinforcement selects from a variation of different behaviors is not a novel concept at all, but more than three decades old and rather prominent. This analogy cannot have escaped anybody working on any kind of operant/motor learning, except those seriously neglecting the most relevant literature. This interaction of variability and selection is a well-known and not overly complicated concept, based in evolutionary biology and psychology/neuroscience. Consequently, numerous laboratories have been studying various aspects of this interaction for a long time. Skinner’s projection was that increased behavioral variability leads to increased operant learning rates, just like increased mutations rates lead to increased rates of evolutionary change. More than a dozen years ago, Allen Neuringer showed this to be the case in rats (Psychonomic Bulletin & Review 2002, 9 (2), 250-258, doi: 10.3758/BF03196279), but there are studies in humans as well (Shea, J. B., & Morgan, R. B. (1979). Contextual interference effects on the acquisition, retention, and transfer of a motor skill. Journal of Experimental Psychology: Human Learning and Memory, 5, 179–187). That such variability is beneficial, rather than detrimental has been shown even in situations where the variability is so high, that the acquisition rate is reduced, but post-training performance is enhanced (Schmidt RA, Bjork RA (1992): New conceptualizations of practice: Common Principles in Three Paradigms Suggest New Concepts for training. Psychological Science, 3(4): 207-217).

Wu et al. confirm both Skinner’s conjecture and previously published reports (some cited above) that the rate of learning in operant conditioning is indeed higher in subjects whose initial behavioral variability is higher. However, instead of citing this wealth of earlier work, Wu et al. claim that their results were surprising: “Surprisingly, we found that higher levels of task-relevant motor variability predicted faster learning”. Herzfeld and Shadmehr were similarly stunned: “These results provide intriguing evidence that some of the motor variability commonly attributed to unwanted noise is in fact exploration in motor command space.”

We regard it as highly unlikely that not one of the seven authors had ever heard of Skinner, or of the work over the last four decades by the many human movement scientists who have explored the temporal structure of human movement variability and its relationship with motor learning. The work of senior scientists such as Karl Newell, Michael Turvey, Richard Schmidt, and their students, published in books and hundreds of journal articles, is completely ignored, as is the work of mid-career scientists such as Nick Stergiou, Jeff Hausdorff, Thurmon Lockhart, Didier Delignières, and many others. After a thorough review of this literature, the authors may realize that their results are neither novel nor surprising. If the authors were indeed unaware of this entire body of literature, so relevant to their own research, that would be an indictment in its own right.

Hence, in formal corrections of both articles, we would expect all mentions throughout both texts of how surprising these findings were to be replaced with references including, but not limited to, the works cited above.

Yours sincerely,

In hindsight, we probably ought to have mentioned that the research of course has merit and that there are many valuable and exciting results in the paper, and that it is just this one specific aspect that is really not surprising at all. However, we wanted to keep the letter brief and focused on the hype. Maybe that was a mistake. Be that as it may, after about two months of peer review, our letter was rejected. Perhaps not so surprisingly, the reviewers were apparently the same ones who had reviewed the original manuscript (at least one of them was):

This is the basis on which I recommended publication, and I do not feel it needs a corrigendum.
After all, the data indicate that such high-ranking journals do not use very rigorous peer review, leading to the least reliable science being published there. What was quite surprising, though, was the implicit support for, and even endorsement of, the hype:
I agree that the article’s tone is a little more breathless than strictly required, but this is the style presently in vogue
and
The letter complains about “over-hyping” about certain claims made in the paper related to previous work. I have some sympathy with the letter writers on this front. However, I also have some sympathy for the authors, who understandably were trying to emphasize the novelty of their work.
It thus appears that at least two of the three reviewers agreed with us that the articles were hyping the results, but they saw nothing wrong with hype. Hype, then, is no longer just a problem of journalists, nor of journals and GlamMagz: it has now arrived in the middle of the scientific community, with explicit endorsement from reviewers. It looks like Michael Eisen’s call for hype-punishment will go largely unheeded.
You can download the complete reviews of all reviewers here (PDF), to make sure none of the quotes above were taken out of context.

Posted on April 14, 2015 at 10:21 32 Comments
Apr09

Why this GlamMag will likely not ask for my review again

In: personal • Tags: GlamMagz, peer-review, publishing

I really loathe reviewing for GlamMagz, for two main reasons. For one, it’s hard to remain neutral: publication of a paper from my field in such a journal benefits both the field and the young people who are authors on it. Second, the demands of some of my colleagues so often make my blood boil. At that point I’m very happy these reviews are anonymous, and I really don’t want to know the names of these colleagues. Here are some of the things the reviewers wrote in this most recent round:

“the authors have not convinced me of the conceptual novelty of their findings to warrant publication in a very top-tier journal”

“appears standard for a top-tier journal”

“However, currently, this manuscript is more suited for a specialized journal”

This suggests that these colleagues apply different standards to different journals. So when the journal sent each reviewer the reviews of the other reviewers and asked for comments, one of my comments was the following:

The intuitive notion of journal rank is a figment of our imagination, devoid of any empirical support. So-called “top-tier journals” exist in the same reality in which homeopathy cures, dowsing rods find water and astrologers correctly predict the future. Asking for additional ‘interesting’ or ‘curious to know’ experiments merely because of the venue the authors chose borders on the unethical, IMHO, as there are millions of ‘curious to know’ experiments that could potentially be carried out. Reviewers ought to avoid applying multiple standards in their reviews and restrict requests for additional experiments to the minimum required by the statements of the authors. Professional editors are being paid precisely because they can predict whether their readers will find the statements of the authors ‘interesting’ or ‘curious to know’.

(Where ‘curious to know’ was the phrase one of the reviewers used whenever they wanted some tangentially related experiment done.)

I doubt the editors at GlamMagz are very interested in these sorts of comments. If they are, I now have a boilerplate from which to draw future comments.

Posted on April 9, 2015 at 14:33 13 Comments