bjoern.brembs.blog

The blog of neurobiologist Björn Brembs


How gold open access may make things worse

In: science politics • Tags: icanhazpdf, infrastructure, open access, publishers

Due to ongoing discussions on various (social) media, this is a mash-up of several previous posts on the strategy of ‘flipping’ our current >30k subscription journals to an author-financed open access corporate business model.

I consider this article processing charge (APC)-based version of ‘gold’ OA a looming threat that may deteriorate the situation even beyond the abysmal state scholarly publishing is already in right now.

Yes, you read that right: it can get worse than it is today.

What would be worse? Universal gold open access – that is, every publisher charges the authors whatever they want for making articles publicly accessible. Take the case of Emerald, a publisher that recently raised its APCs by a whopping 70%. When asked for the reason for the price hike, they essentially answered "because we can":

The decision, based on market and competitor analysis, will bring Emerald’s APC pricing in line with the wider market, taking a mid-point position amongst its competitors.

Quite clearly, publishers know their market and know how much they can extract from it (more on that below).

(UPDATE, 13.04.2018: There are data showing that Frontiers, too, is now starting to milk the cash cow more heavily, with APC price hikes of up to 40%, year over year.)

Already a few years ago, a blog post by Ross Mounce described his reaction to another pricing scheme:

Outrageous press release from Nature Publishing Group today.

They’re explicitly charging more to authors who want CC BY Gold OA, relative to more restrictive licenses such as CC BY-NC-SA. Here’s my quick take on it: https://rossmounce.co.uk/2012/11/07/gold-oa-pricewatch

More money, for absolutely no extra work.

How is that different from what these publishers have been doing all these years and still are doing today? What is so surprising about charging for nothing? That’s been the modus operandi of publishers since the advent of the internet.

Why should NPG not charge, say, US$20k for an OA article in Nature, if they chose to do so? In fact, these journals are on record that they would have to charge around US$50,000 per article in APCs to maintain current profits (more like US$90,000 per article today, see update below).

If people are willing to pay more than US$230k ($58,600 a year) for a Yale degree, or over US$250k ($62,772 a year) just to have "Harvard" on their diplomas, why wouldn't they be willing to shell out a meager 90k for a paper that might give them tenure? That's just a drop in the bucket, pocket change. Just as people go deep into debt for a degree from a prestigious university, they will go into debt for a publication in a prestigious journal – it's exactly the same mechanism.

If libraries have to let themselves be extorted by publishers now for lack of support from their faculty, surely scientists will let themselves be extorted by publishers out of fear that they won't be able to put food on the table or pay the rent without the next grant or position. Without regulation, publishers can charge whatever the market is willing and able to pay. If a Nature paper is required, people will pay what it takes.

Speaking of NPG, they are already testing the waters of how high one could possibly go with APCs. While the average cost of a subscription article is around US$5,000, NPG is currently charging US$5,200 plus tax for their flagship OA journal Nature Communications. So in financial terms at least, any author who publishes in this journal becomes part of the problem, despite the noble intentions of making their work accessible. At this level, gold OA becomes even less sustainable than current big subscription deals.

Of course, this is no surprise. After all, maximizing profits is the fiduciary duty of corporate publishers. For this reason, the recent public dreaming about how one could simply switch to open access publishing by converting subscription funds to APCs is ill-founded. Proponents may argue that the intent of the switch is to use library funds to cover APC charges for all published articles. But we have been in this situation before. This is what happens when professional publisher salespeople negotiate prices with our unarmed and unsupported academic librarians – hyperinflation:

Given this subscription-publisher track record (together with the current evidence of double-digit percentage APC increases at NPG, Emerald or Frontiers, and the already above-inflation rise in APCs more generally), I think it is quite reasonable to remain skeptical that, in the hypothetical future scenario of librarians negotiating Big Deal APCs with publishers, the publisher-librarian partnership would not again be lopsided in the publishers' favor.

The current scholarly publishing market is worth around US$10bn annually, so this is what publishers will shoot for in total revenue. In fact, if a lesser service (subscriptions) was able to extract US$10bn, shouldn't a better service (open access) be able to extract US$12bn or 15bn from the public purse? Hence, any cost savings assumed to come from corporate gold OA are naive and completely imaginary at this point.

In fact, if the current reluctance to cancel or not renew increasingly obsolete subscriptions is anything to go by, such open access Big Deals will be even more of a boon for publishers than subscriptions. The most cited reason for continued subscription negotiations and contracts is perceived faculty demand. In most cases, this demand merely reflects an unwillingness to spend a few extra clicks or some waiting time to get an article. In contrast, when the contracts are about APCs, they concern not read-access but write-access. If a library were to stop paying a Big APC Deal, its faculty would essentially be unable to publish. Hence, if librarians now worry about the consequences of their faculty having to click a few extra times to get an article, they ought to be massively worried about what happens when their faculty can no longer publish in certain venues. Faculty response will be disproportionately more vicious, I'd hazard a guess.

One might argue that without library deals, journals would compete for authors, keeping prices down. This argument forgets that we are not free to choose where we publish: only publications in high-ranking journals will secure your job in science. These journals are the most selective of all; in extreme cases, they publish only 8% of all submitted articles. This is an expensive practice, as even rejected articles generate costs. It is hence not surprising that among open access journals, too, APCs correlate with standing in the rankings and hence with selectivity (Nature Communications being a case in point). In fact, this relationship is the basis for pricing strategies at SpringerNature (the corporation that publishes the Nature brand): "Some of our journals are among the open access journals with the highest impact factor, providing us with the ability to charge higher APCs for these journals than for journals with average impact factors. […] We also aim at increasing APCs by increasing the value we offer to authors through improving the impact factor and reputation of our existing journals."

It is reasonable to assume that authors in this future scenario will do the same as they are doing now: compete not for the most non-selective journals (i.e., the cheapest), but for the most selective ones (i.e., the most expensive). Why should that change, only because now everybody is free to read the articles? The new publishing model would even exacerbate this pernicious tendency rather than mitigate it. After all, it is already (wrongly) perceived that the selective journals publish the best science (they publish the least reliable science). If APCs become predictors of selectivity because selectivity is expensive, nobody will want to publish in a journal with low or no APCs, as this will carry the stigma of not being able to get published in the expensive, selective journals.

There are even data suggesting that this is already happening. PLoS One and Scientific Reports (another Nature-brand journal) are near-identical megajournals, which essentially differ in only two things: price and the 'Nature' brand. If competition served to drive down prices, authors would choose PLoS One and shun Scientific Reports. However, the opposite is the case, falsifying the hypothesis that a gold open access market would serve to keep prices in check:

Proponents of the "competition will drive down prices" mantra will have to explain why their proposed mechanism fails to work in this example, but would work if all journals operated in the same way. One could go a step further: just as scholars are now measured by the amount of research funds (grants) they have been able to attract in a competitive funding scheme, it seems only consistent to also measure them by the amount of funds they were able to spend on their publications in a competitive publication scheme, if the most selective journals are the ones charging the highest APCs: the more money one has spent on publications, the more valuable one's research must be. In other words, researchers will actively strive to publish only in the most expensive journals – or face losing their jobs.

Here, too, we are already seeing the first evidence of such stratification in terms of who can afford to pay to publish in prestigious journals. A recent study showed that higher-ranked (thus, richer and more prestigious) universities tend to pay more for open access articles in higher-ranking journals, while authors from lower-ranking institutions tend to publish either in closed access journals or in cheaper open venues. Thus, the new hierarchies are already forming, showing us what this brave new APC world will look like.

This, to me as a non-economist, seems to mirror the dynamics of any other market: the Tata is no competition for the Rolls-Royce; not even the potential competition from Lamborghini brings the price of a Ferrari down to that of a Tata, nor is Moët et Chandon bringing down the price of Dom Pérignon. On the contrary, in a world where only Rolls-Royce and Dom Pérignon count, publications in journals at the Tata or even the Moët et Chandon level will simply be ignored. Moreover, if libraries keep paying the APCs, the ones who so desperately want the Rolls-Royce don't even have to foot the bill. Doesn't this mean that any publisher who does not shoot for at least US$5k in average APCs (better more) fails to fulfill their fiduciary duty in not one but two ways? Not only will they lose out on potential profit due to their low APCs, they will also lose market share and prestige. Thus, in this new scenario, if anything, the incentives for across-the-board price hikes are even higher than they are today. Isn't this scenario a perfect storm for runaway hyperinflation? Do unregulated markets without a luxury segment even exist?

Of course, if libraries refuse to pay above a certain APC level (i.e., price caps), precariously employed authors won't have any choice other than to cough up the cash themselves – or face the prospect of flipping burgers. Incidentally, price caps would entail that the institutions which introduce them have to live with the slogan "we won't pay for your Nature paper!", so I wonder how many institutions will actually decide to introduce such caps, and what this decision might mean for their attractiveness to new faculty.

One might then fall back on the argument that at least the journal equivalent of Fiat will compete with the Journal of Peugeot on APCs, but that forgets that a physicist cannot publish their work in a biology journal. One might then argue that mega-journals publish all research, but given the constant consolidation in unregulated markets (a process alive and well in the publishing market, too, as was recently reported), there quickly won't be many of these around any more. As a consequence, they are, again, free to increase prices. Indeed, NPG's Scientific Reports has now overtaken PLoS ONE as the largest mega-journal, despite charging more than PLoS ONE, as shown in the figure above. No matter how I turn the arguments around, I only see incentives for price hikes that will render the new system just as unsustainable as the current one, only worse: failure to pay leads to a failure to make your discovery public, and no #icanhazpdf or Sci-Hub can mitigate that. Again, as in all the scenarios and aspects discussed above, this kind of scenario can only be worse than what we have now.

In the end, it seems the trust in ‘market forces’ and ‘competition’ to solve these problems for us is about as baseless and misguided as the entire neoliberal ideology from which this pernicious faith springs.

At the very least, if there ever is universal gold OA, the market needs to be heavily regulated, with drastic, enforced, world-wide price caps far below current article processing charges, or the situation will be worse than today: today, you have to cozy up to professional editors to get published in 'luxury segment' journals; in a universal OA world, you would also have to be rich. This may be better for the public in the short term, as they would at least be able to access all the research. In the long term, however, if science suffers, so, eventually, will the public. In today's world, one needs some tricks to read paywalled articles, such as Sci-Hub, #icanhazpdf or friends at rich institutions. In this brave new universal gold OA world, you need cold, hard cash even to get read. Surely, unpublished discoveries must be considered worse than published but hard-to-read ones?

Thus, from any perspective, gold OA with corporate publishers will be worse than even the dreaded status quo. [UPDATE I: After I wrote this post, the Association of Research Libraries (ARL) posted an article pretty much along the same lines, emphasizing the reduced market power of individual authors above and beyond many of the concerns I have raised above. Clearly, a quick analysis by anyone will reveal the unintended consequences of merely 'flipping' existing journals to an APC-based OA format. UPDATE II: A few months after this post, a study by several research-intensive universities in the US also came to the same conclusions as the ARL and yours truly: "the total cost to publish in a fully article processing charge-funded journal market will exceed current library journal budgets"]

Obviously, the alternative to gold OA cannot be a subscription model. What we need is a modern scholarly infrastructure, around which there can be a thriving marketplace of services for these academic crown jewels, but where the booty stays in-house. We already have many such service providers, and we know that their costs are at most 10% of what the legacy publishers currently charge. How can we afford such a host of modern functionalities and get rid of pernicious journal rank at the same time?
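As a back-of-envelope check of these numbers (a sketch of my own, using the ~US$10bn market size mentioned above and this post's estimate that service providers charge at most 10% of legacy prices):

```python
# Rough annual savings if subscription funds went to modern service providers.
# Both figures are this post's estimates, not measured data.
market = 10_000_000_000        # current scholarly publishing market, ~US$10bn/year
service_cost_fraction = 0.10   # service-provider costs: at most 10% of legacy charges

annual_savings = market * (1 - service_cost_fraction)
print(f"Potential annual savings: ~US${annual_savings / 1e9:.0f}bn")
```

With the 10% upper bound on service costs, this yields roughly US$9bn in annual savings, in the same ballpark as the ~US$9.8bn figure cited later in this post (which assumes a slightly lower cost share).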

Institutions with sufficient subscription budgets and the motivation to reform will first have to coordinate with each other to safeguard the back issues. Surprisingly, there are still some quite substantial technical hurdles, but a cleverly designed combination of automated, single-click inter-library loan, LOCKSS and Portico among the participating institutions should be able to cover the overwhelming part of the back archives. For whatever else remains, there is still Sci-Hub et al.

Once the back issues are made accessible even after subscriptions run out, a smart scheme of staged phasing-out of big subscription deals will ensure access to most of these issues for at least five years, if not more. In this time, some of the funds freed from subscriptions can be used to pay for single-article access to newly published articles. The majority of the funds would, of course, go towards implementing the functionalities that will benefit researchers to such an extent that any remaining access issues seem negligible in comparison.

In conclusion, there is no way around massive subscription cuts, both for financial reasons and to put an end to pernicious journal rank. If cleverly designed, most faculty won't even notice the cuts, while simultaneously reaping all the benefits. Hence, there is no reason why people without infrastructure expertise (i.e., faculty generally) should be involved in this reform process at all – much like we weren't asked whether we wanted email and Skype. At some point, we had to pay for phone calls and snail mail while the university covered our email and Skype use. At some point, we'll have to pay for subscriptions ourselves, while the university covers all our modern needs around scholarly narrative (text, audio and video), data and code.

It’s clearly not trivial, but the savings of a few billion dollars every year should grease even this process rather well, one would naively tend to think.

UPDATE III (14/12/2017): Corroborating the arguments above, a recent analysis from the UK, a country that has favored gold open access for the last five years, comes to the conclusion that "Far from moving to an open access future we seem to be trapped in a worse situation than we started". I think it is now fair to say that gold open access is highly likely to make everything worse (rather than 'may', as in the title of this post). UPDATE to the update (08/05/2018): A Wellcome Trust analysis also found average APCs rising between 7% and 11% year over year, i.e., double to triple the inflation rate. Clearly, if more gold OA journals were indeed leading to more competition, that competition is not driving down prices; quite the contrary.
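To put those growth rates in perspective, here is a small sketch (my own arithmetic, not part of the Wellcome analysis) of how quickly prices double at the reported APC growth rates versus a general inflation rate assumed at 3% for illustration:

```python
import math

def doubling_time(annual_rate):
    """Years until a price doubles at a fixed annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

# 3% stands in for general inflation; 7% and 11% are the reported APC growth rates
for rate in (0.03, 0.07, 0.11):
    print(f"{rate:.0%}/year -> price doubles in ~{doubling_time(rate):.1f} years")
```

At 7-11% annual growth, APCs double roughly every 7-10 years, while 3% inflation takes about 23 years to do the same.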

UPDATE IV (19/12/2017): I just went back to Nature's old statement before the UK parliament, added about 6% in annual price increases (a bit above APC increases and a bit below subscription increases, see above) and arrived at about £22,600-£67,800 that Nature would need to charge per article in their flagship journal if they went gold open access today. At current exchange rates, this amounts to about US$30,000-90,000, or roughly US$1,000-2,000 per impact point.

If one looks at Nature's actual subscription increases from 2004 to today, they are much lower, amounting to an increase of about 25%. This would bring us to pretty much exactly US$50,000 per article at the current exchange rate. So, depending on how one calculates, 30-90k per article for a high-impact journal seems to be what one has to expect.
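The compounding behind these updates can be reproduced in a few lines. This is only a sketch: the £10,000-£30,000 base range is my assumption for Nature's original parliamentary figure, chosen because 14 years of 6% annual increases then reproduce the £22,600-£67,800 quoted above.

```python
def project_charge(base, annual_increase, years):
    """Compound a base per-article charge by a fixed annual increase."""
    return base * (1 + annual_increase) ** years

# Hypothetical base range for Nature's original per-article estimate
for base in (10_000, 30_000):
    projected = project_charge(base, 0.06, 14)
    print(f"£{base:,} -> £{projected:,.0f} after 14 years at 6% per year")
```

This yields roughly £22,600 and £67,800, matching the range in Update IV.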

UPDATE V (14/09/2018): At the persistent request from a reader, I’m extending the discussion on the content of this post to a more suitable forum.

UPDATE VI (30/07/2019): Not surprisingly, students are being asked to co-pay the APCs:

Clearly, those who find tuition fees reasonable will argue that this is the best investment in their career that the PhD student can make. Those who argue that price sensitivity of authors will help bring down publisher prices will also find this very reasonable.

Posted on April 7, 2016 at 13:23 65 Comments

Have you seen this response to terrorism anywhere?

In: Uncategorized • Tags: politics, terrorism

I usually don’t write about politics, but there has been one or the other exception to this rule in the last 12 years of this blog. This time, I’ve been missing one particular response to the various terrorist attacks in recent times, perhaps one of the few readers of this obscure blog has found it somewhere and can point me in this direction? I’m looking for something like this:

“Sadly, it is very difficult to completely prevent casualties such as those in the recent terror attacks in Madrid, London, Paris, Brussels or elsewhere, without violating basic human rights and abandoning hard-won liberties. Our ancestors have given their lives for these rights and liberties and we, as a society, are equally willing to pay that price. However, the victims of these horrific attacks have never been asked to give their lives. They were forced to become martyrs for our human rights and our civil liberties. It is infuriating and frustrating that there seems to be only little we can do to prevent such deaths. However, there are ~1.2M preventable deaths in Europe alone every year. We propose to do something about these lives instead. These fatalities are due to causes such as lung cancer, accidental injuries, alcohol-related diseases, suicides and self-inflicted injuries. Even in the 1970s and 1980s, terrorist-related fatalities never exceeded 500 per year; we will therefore honor the lives of the victims of terrorism by initiating a program that will save at least 100 lives for every one taken in a terrorist attack.

To reach this ambitious goal, we will start by increasing our efforts to prevent alcohol- and tobacco-related deaths through effective public-health intervention programs as well as basic and applied biomedical research into the prevention, causes and treatment of these diseases and disorders. With about 30,000 annual fatalities in traffic-related accidents, we will also introduce Europe-wide speed limits, strong enforcement via speed traps, and an increased police force that collaborates across Europe. Drivers convicted of violating speed limits or of DUI will have their driver's licenses withdrawn for extended periods of time. We will increase our investments in the development of driverless vehicles. Should these activities fail to reach the goals, we will target additional areas. The individual projects will be named after the victims of terrorism, as a reminder of their forced contribution to the improvement of our open society. This program will be implemented on top of the intensified, heroic efforts of our law enforcement and intelligence agencies working hard to prevent such attacks within the bounds of our open society. We will strive to reach these goals in addition to our enduring political and diplomatic initiatives to mitigate the religious, socio-economic and political circumstances which can be used to recruit and motivate terrorists.

This death prevention program will not only protect our basic human rights and civil liberties, it will also benefit the economy in general and increase employment in particular. Through the additional investment in prevention, diagnosis and treatment, our public health systems will benefit long after any terrorist groups have ceased to exist. Our extra investment in basic and applied research will yield discoveries that will benefit all of humanity long after the last terrorist has sacrificed his life in vain. With our new program, every single terrorist attack will save the lives of countless more citizens than it has cost, turning terrorism into a net life-saving activity.”

If you have found an advocate for such a program, or one like it, please let me know where to find them; I'd like to support them.

Posted on March 23, 2016 at 10:20 15 Comments

Seeking your endorsement

In: science politics • Tags: european commission, open science

I am contemplating applying to join the European Commission's Open Science Policy Platform (OSPP). The OSPP will provide expert advice to the European Commission on implementing the broader Open Science agenda. As you will see, some of us are concerned that the focus of the call is on organizations, not communities. This is a departure from much of the focus that the Commission itself has adopted on the potential benefits and opportunities of Open Science. A group of us are therefore applying as representatives of the community of interested and experienced people in the Open Science space.

Amongst others, I am therefore asking for your endorsement (in the form of a comment on this post, or an email directly to me if you prefer) as someone who can represent this broader community of people, not necessarily tied to one type of organization or stakeholder. Depending on the number of endorsements, I will consider submitting my application. The deadline is March 22, 2016.

Application:

I have been urged to apply for a position on the advisory group ‘Open Science Policy Platform’ as an individual representing the common interests shared by people and organizations from across the spectrum of stakeholders, including doctors, patients and their organizations, researchers, technologists, scholarly IT service providers, publishers, policy makers, funders, and all those interested in the changes research is undergoing. In addition to those directly involved in Open Science, I also represent the common interests shared by experimental scientists at public institutions, in particular those working in biomedical research, whether or not they are already engaging in Open Science themselves.

Many of us are concerned that the developing policy frameworks and the institutionalization of Open Science are leaving behind precisely the community focus that is at the heart of Open Science. As the Commission has noted, one of the key changes driving more open practice in research is that many more people are becoming engaged in research and scholarship in some form. At the same time, the interactions between this growing diversity of actors increasingly form an interconnected network. It is not only that this network reaches beyond organizational and sector boundaries; it is precisely this blurring of boundaries that underpins the benefits of Open Science.

I recognize that for practical policy-making it is essential to engage with key stakeholders with the power to make change. In addition, I would encourage the Commission to look beyond the traditional sites of decision-making power within existing institutions to the communities and networks where the real cultural changes are occurring. In the end, institutional changes will only ever be necessary, not sufficient, for the true cultural change that will yield the benefits of Open Science.

I am confident I can represent the interests of this community, particularly by assisting in developments concerning the implementation of a cloud-based scholarly infrastructure supporting not only our text-based research outputs, but especially the integration of research data and scientific source code with the narrative, be it text, audio or video-based. I will also contribute evidence to policy decisions regarding research integrity.

I base my confidence on my track record over the last 12 years. I have been involved in Open Science advocacy since about 2004. Since then, I have been an invited speaker and keynote lecturer at numerous Open Science events every year. My advice is sought by Open Access organizations such as the Public Library of Science, Force11, Frontiers, ScienceOpen, PeerJ or F1000. In fact, most of the recent F1000 innovations appear very similar to what I (and no doubt others) have proposed. I run an Open Science laboratory where all our source code and research data are made openly accessible either immediately, as they are being created or collected, or upon publication or request. We have pioneered exploiting the advantages our laboratory's infrastructure provides. For instance, we have collaborated with F1000Research to publish an article where readers can not only choose the display format of the research data, or which aspect of the data should be displayed, but can also contribute their own data for comparison with, and extension of, the published research.

My perspective is shaped not only by my interactions with fellow scholars, librarians and publishers. I also collect the available empirical data to objectively assess the state of the current scholarly infrastructure. One of the insights we have gained from this work is that the most prestigious scholarly journals publish the least reliable science. The practice of selecting scholars by their publications in these prestigious journals arguably contributes to the unfolding replication crisis. Thus, a drop in research integrity has been observed in recent years, which can be traced back to an inadequate, antiquated infrastructure providing counter-productive incentives and reward structures.

I will bring to the table the evidence-based perspective that our public institutions need a modern digital infrastructure if our aim is to prevent further deterioration of research integrity and hence credibility. This position holds that the current, largely journal-based and publisher-provided infrastructure is not only counter-productive, but also unnecessarily wasteful. The evidence suggests that the global scholarly community stands to save ~US$9.8 billion annually if current subscription moneys were instead invested in a modern, institutional infrastructure. Such a transition would not only maintain current functionalities, it would also provide universal access to all scholarly knowledge. The saved funds would provide ample opportunities for acquiring new functionalities, provided, for instance, by emerging scholarly IT service providers, representatives of which will likely be among the experts on the Open Science Policy Platform. The saved funds would also allow implementation of a sustainable infrastructure ensuring long-term accessibility and re-use of research data as well as scientific source code. The common, federated standards and specifications of this infrastructure will overcome current fragmentation and enhance interoperability of all forms of scholarly output.

Europe is spearheading the development of such an infrastructure. Given the proposed €6.15bn for the European Cloud Initiative, the evidence suggests that the transition will likely be cost-neutral overall and potentially even cost-saving.

 

Posted on March 14, 2016 at 22:32 98 Comments

How do academic publishers see their role?

In: science politics • Tags: publishers

Over the years, publishers have left some astonishingly frank remarks about how they see their role in serving the scholarly community's communication and dissemination needs. This morning, I decided to cherry-pick some of them and take them out of context to create a completely unrealistic caricature of publishers that couldn't be further from the truth. However, I'll leave the links to the comments, so you can judge for yourself just how far out of context they actually have been taken.

Essentially all of these comments were voiced on the blog of the Society for Scholarly Publishing, an organization representing academic publishers. For one of the commenters, Joseph Esposito, it is likely safe to assume that his continued presence as a main contributor to the blog means that these viewpoints reflect the general viewpoints of the members of this association closely enough not to warrant dismissal from the site. The other quoted commenter, Sanford Thatcher, is not a contributor to the blog at all, so there is no direct way of estimating how representative his views are. Both commenters are or have been either publishers themselves or consult for publishers in various roles.

  1. Publishers don’t add any value to the scholarly article:

Now you can find an article simply by typing the title or some keywords into Google or some other search mechanism. The Green version of the article appears; there is no need to seek the publisher’s authorized version.

Source.

2. Publishers’ business of selling scholarly articles to a privileged few is not negotiable

Screenshot via Mike Taylor


3. The purpose of academic publishers is to make money, not to serve the public interest:

It is not the purpose of private enterprises to serve the public interest; it is to serve the interests of their stockholders. On the other hand, it is the purpose of the federal government to serve the public interest.

Source.

4. Governments ought to serve the public interest by funding all scholarly communication:

you should be urging the government to better disseminate the results of the research it sponsors.

Source.

Let’s take these comments and completely mangle the impression publishers publicly express of themselves: “We don’t really have anything of value to contribute, but it is our non-negotiable fiduciary duty to make as much money off the public purse as possible. If you want to change that, you should take all the tax-money we’ve suckered you into handing over to us and build a sustainable scholarly communication infrastructure yourselves.” Couldn’t have said it better myself, actually.

Posted on March 8, 2016 at 11:47 36 Comments
Mar03

Academic publishers: stop access negotiations

In: science politics • Tags: esposito, open access, publishers

Three years ago, representatives of libraries, publishers and scholars all agreed that academic publishers don’t really add any value to scholarly articles. Last week, I interpreted Sci-Hub as a potential consequence of scholars having grown tired, after 20 years of trying to wrestle their literature from the publishers’ stranglehold through baby-steps, negotiations and campaigning alone. Maybe these developments indicate that frustration is growing among scholars, readying them to break ranks with publishers altogether?

After 20 years of negotiations about how to realize universal open access to all scholarly literature with publishers, maybe it’s time to stop negotiations and develop an open access infrastructure without publishers? After all, it would save human lives as well as billions of dollars every year.

I had not anticipated support for the notion of stopping negotiations with publishers from the same person who also confirmed that publishers add little value to scholarly articles three years ago, Joseph Esposito. In his own words, Mr. Esposito is a “publishing consultant”, working for publishers involved in research publishing. He advises these companies on strategies concerning, among other issues, open access. As of this writing, he has penned 253 articles for the blog of the Society for Scholarly Publishing, an organization representing academic publishers. It is probably safe to assume that his continued presence on this blog after so many posts indicates that the opinions he expresses there are generally not in obvious disagreement with those of the academic publishers who are members of the society. His continued success as a consultant to some of said society members can also be taken as an indication that his advice is being followed by his clients. In brief, the word of Mr. Esposito has carried and continues to carry significant weight with publishers.

For the second time in three years, Mr. Esposito and I agree on something: we should stop negotiating access with legacy publishers:

Screenshot via Mike Taylor


Quite clearly (this is the full account of the entire comment, so it cannot be taken out of context), for Mr. Esposito, access to the scholarly literature is a privilege worth paying for. Moreover, he sees no need to negotiate this position any further. Inasmuch as this opinion informs his advice to publishers, scholars should not be surprised, for instance, that publishers actively block contentmining and will not negotiate about this blockade of science. This opinion also reinforces my assessment that talking with legacy publishers, at this point, has become a complete waste of time. This is how far they are willing to go; no further concessions can be expected.

Posted on March 3, 2016 at 17:28 28 Comments
Feb25

Sci-Hub as necessary, effective civil disobedience

In: science politics • Tags: Elbakyan, publishing, sci-hub

Stevan Harnad’s “Subversive Proposal” came of age last year. I’m now teaching students younger than Stevan’s proposal, and yet, very little has actually changed in these 21 years. On the contrary, one may even make the case that while efforts like institutional repositories (green OA), open access journals (gold OA) or preprint archives have helped to make some of the world’s scholarly literature more accessible (estimated to now be at more than 40% of newly published papers), we are now facing problems much more pernicious than lacking access: most of our data and essentially all of our scientific source code is not being archived nor shared, our incentive structure still rewards sloppy or fraudulent scientists over meticulous, honest ones, and the ratchet of competing for grants just to keep the lights in the lab on is driving the smartest young minds out of academia, while GlamHumping marketeers accumulate.

While one may not immediately acknowledge the connection between access to the literature and the more pernicious problems I’ve alluded to, I’d argue that by ceding our control over our literature to commercial publishers, we have locked ourselves into an anachronistic system which is the underlying cause for most if not all our current woes. If that were indeed the case, then freeing us from this system is the key to solving all the associated problems.

Some data to support this perspective: we are currently spending about US$10b annually on legacy publishers, when we could publish fully open access for about US$200m per year if only we were to switch publishing to, e.g., SciELO, or any other such system. In fact, I’d argue that the tax payer has the right to demand that we use their tax funds only for the least expensive publishing option. This means it is our duty to the citizens to reduce our publishing expenses to no more than the current ~US$200m per year (and we would even increase the value of the literature by making it open to boot!). If we were to do that, we’d have US$9.8b every single year to buy all the different infrastructure solutions that already exist to support all our intellectual outputs, be that text, data or code. Without journals (why would one keep those?), we’d also be switching to different metrics to assist us in minimizing the inherent biases peer-review necessarily brings about. We would hence be able not only to provide science with a modern scholarly infrastructure, we could even use the scientific method to assist us in identifying the most promising new scientists and which of them deserve which kind of support.
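The back-of-the-envelope arithmetic behind these figures can be spelled out explicitly (a sketch using the rounded numbers from the text; the exact amounts are, of course, estimates):

```python
# Back-of-the-envelope calculation using the rounded figures cited above.
subscription_spend = 10_000_000_000  # ~US$10b paid annually to legacy publishers
publishing_cost = 200_000_000        # ~US$200m/year for SciELO-style OA publishing

annual_savings = subscription_spend - publishing_cost
print(f"US${annual_savings / 1e9:.1f}b freed up per year")
# prints "US$9.8b freed up per year"
```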

While many of the consequences of wasting these infrastructure funds on publishers have become apparent only more recently, the indefensibility of ever-increasing subscription pricing in a time of record-low publishing costs was already apparent 20 years ago. Hence, already in 1994, it became obvious that one way of freeing ourselves from the subscription-shackles was to make the entire scholarly literature available online, free to read. Collectively, this two-decade-long concerted effort of the global OA community to wrestle the knowledge of the world from the hands of the publishers, one article at a time, had resulted in about 27 million (24%) of about 114 million English-language articles becoming publicly accessible by 2014. Since then, one single woman has managed to make a whopping 48 million paywalled articles publicly accessible. In terms of making the knowledge of the world available to the people who are its rightful owners, this woman, Alexandra Elbakyan, has single-handedly been more successful than all OA advocates and activists of the last 20 years combined.

Let that accomplishment sink in for a minute.

Of course it isn’t all global cheering and party everywhere. Obviously, the publishers complain that she used her site, Sci-Hub, to ‘steal their content‘ – with their content being, of course, the knowledge of the world that they have been holding hostage for a gigantic ransom. For 20 years this industry has thrived at the public teat, parasitizing an ever-increasing stream of tax-funded subsidies to climb from record profits to record profits, financial crises be damned. Of course, they are very happy to seize on this opportunity to distract from the real problems we’re facing, by staging a lawsuit to keep their doomed business practices running for yet a little longer. Perhaps more amusingly, one suggestion from the publishers of how to respond to Sci-Hub is to make access even more restrictive and expensive. I’ve only been around the OA movement for 10 years, but the ignorance, the gall and the sheer greed of publishers has astounded me time and time again. Essentially, in my experience, the only reply we ever got from publishers to our different approaches to reform our infrastructure, has been one big raised middle finger. Clearly, two decades of negotiations, talks and diplomacy have led us nowhere. In my opinion, the time to be inclusive has come and passed. Publishers have opted to remain outside of the scholarly community and work against it, rather than with it. Actions of civil disobedience like those of Aaron Swartz and Alexandra Elbakyan are a logical consequence of two decades of stalled negotiations and failed reform efforts.

In the face of multinational, highly profitable corporations citing mere copyright when human rights (“Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.”) are at stake, civil disobedience of the kind Sci-Hub exemplifies becomes a societal imperative.

But even from within the OA community Alexandra Elbakyan is receiving some flak for a whole host of – compared to 48 million freed articles – tangential reasons, such as licensing, the dilution of OA efforts, or the effect on scholarly societies. Of course, she reacted defensively, which is understandable for a host of reasons. However, one shouldn’t necessarily see these comments as criticism. They’re part of the analysis of the situation, and this is what must happen continuously to monitor how we are doing. Just because Sci-Hub isn’t a panacea that solves all our problems for us, so we can all go back to doing actual science, doesn’t mean that the overall effort is any less heroic or impressive.

Part of this assessment has to be the clear realization that of course Sci-Hub is not the cure to our self-inflicted disease. However, given that 20 years of careful, fully legal, step-wise, evolutionary approaches have yielded next to nothing in return, more spectacular actions may be worth considering, even if they don’t entail the immediate realization of the ideal utopia. After all, two decades is not what I consider a timeframe evincing a lack of patience. I can’t believe anybody in the OA community will seriously complain that the single largest victory in a 20-year struggle doesn’t also solve all our other, associated problems in one fell swoop. Or let me frame that a little differently: once you can boast a track record of having freed 48 million articles, then you get to complain or criticize.

Part of our ongoing assessment also has to be the discussion of whether the investment in the baby-steps of the last two decades was worth the minuscule returns. Sci-Hub has the potential to encourage and inspire other academics to stand up to the status quo and demand effective reforms, maybe even taking action themselves. Sci-Hub clearly is not how one would design a scholarly infrastructure, but it has been more effective at accomplishing access than anything in the last 20 years.

Besides saving lives by making 48 million research papers accessible to patients and doctors, Sci-Hub to me signifies that the scientific community (well, admittedly, a tiny proportion of it) is starting to lose its patience and become ready for more revolutionary reform options – a signal that the community is starting to feel that it is running out of options for evolutionary change. To me, Sci-Hub signals that publisher behavior, collectively, over the last two decades has been such a gigantic affront to scholars that civil disobedience is a justifiable escalation. Personally, I would tend to hope that Sci-Hub (and potentially following, increasingly radical measures) signals that time has run out and that the scientific community is now ready to shift gears and embark on a more effective strategy for infrastructure reform.

Although I realize that it’s probably wishful thinking.

The freed articles, Alexandra Elbakyan’s David-like chutzpah against publishing Goliath Elsevier et al., as well as the deeply satisfying feeling of the public good not being completely helpless in the face of private monetary interests are the main factors why I am in awe of Alexandra Elbakyan’s accomplishment. If only the OA movement consisted of a few more individuals cut from that same wood, we might have never arrived at a point where Sci-Hub was necessary. I openly admit that I’m not even close to playing in that league and the realization hurts.

Analogies, metaphors and allegories always only go so far, but the parallels here are too numerous to ignore.

Finally, there still remains the question as to how Sci-Hub was able to obtain the credentials it uses to free the articles. As of this writing, not a whole lot is known, so for now we will have to assume that nobody was put in harm’s way. The size and probability of such potential harm may hypothetically influence the overall assessment of Sci-Hub, but at this point I would tentatively consider such potential negative consequences as minor, compared to the benefits.

Posted on February 25, 2016 at 18:35 57 Comments
Feb02

Earning credibility in post-factual science?

In: science politics • Tags: politics, post-factual science, society, truthiness

What do these two memes have in common?

While they may have more than one thing in common, the important point for now is that despite both having an air of plausibility or ‘truthiness’ around them, they’re both false: neither has Donald Trump ever said these words to People magazine, nor do gay canvassers have such an effect on people’s attitudes (even though the quoted statement about this research was indeed published in Science magazine).

The issue of false facts has become so rampant in current politics that some have dubbed our era “post-factual”. While on the surface “post-factual science” appears to be an oxymoron, recent evidence raises the tantalizing possibility that at least the literature of the life-sciences, broadly speaking, may be on track to leaving the reality-based community (although more research is required):


Irreproducibility loosely defined as in “not replicated”, or “difficult to replicate with imprecise methods”. Note that not all of these are replication studies and not all of the replication studies are properly published themselves, with data available etc. To my knowledge, only the Open Science study is reliable in that respect. Sources: https://journals.plos.org/plosbiology/article?id=info:doi/10.1371/journal.pbio.1002165 and https://science.sciencemag.org/content/349/6251/aac4716

Obviously, these data can only hold for the experimental sciences, and even there I would expect huge differences between the different sub-fields. Nevertheless, the frequency of retractions in biomedicine is rising exponentially, the ‘majority of research findings’ has been estimated and more recently supported (at least tentatively for the findings analyzed/replicated in the above six studies) to be false, and public trust in science is eroding. Maybe it is not too early to start asking the question: if we can’t even trust the scientific literature, what information is trustworthy? Or, put differently: if traditional (some would say legacy) strategies for assigning credibility are failing us, are there more modern strategies which can replace or at least support them?

Personally, I think this is one of the most urgent and important challenges of the post-internet globalized society. It may well be that science, which brought us the internet in the first place, may also be the right place to start looking for a solution.

Truth has never been an easy concept. Some may even argue that a large portion of humanity is probably quite happy with it being rather malleable and pliable. It wasn’t really until the 20th century that epistemologists explicitly formulated a convention of what constitutes a scientific fact and how facts can be used to derive and test scientific statements. Obviously, in life outside of science we are still far from an explicit convention, which complicates matters even further.

Be that as it may, neither within nor outside of science can we expect every individual to always fact-check, question and investigate every single statement or bit of information ever encountered, no matter how desirable that may be. There will always, inevitably, have to be shortcuts, and we have been taking them, legitimately, I would argue, since the beginning of humanity.

Initially, these shortcuts involved authority. With the enlightenment and the shedding of religious, societal and political shackles, a (competing?) more democratic and informal shortcut was conceived (without ever completely replacing authority): popularity. If a sufficiently large number of information outlets confirmed a statement, it was considered sufficient for believing it. Both shortcuts are still being used today, both inside and outside of science, for very legitimate reasons. However, in part because the choice of authority is often arbitrary and may even be subject to partisan dispute, there are few, if any, universal authorities left. Even in science, the outlets with the most authority have been shown to publish the least reliable science. This erosion of the ‘authority’ shortcut accelerated with the dawn of the internet age: never before was it so easy to get so many people to repeat so many falsehoods with such conviction.

The ‘wisdom of the crowd’ seems to suggest that a sufficiently large crowd can be at least as accurate as a small number of expert authorities, if not more so. Social media have the uncanny ability of always aggregating what subjectively feels like a “sufficiently large crowd” to solidly refute any and all authority, whether that would be on the moon landings, the 9/11 attacks, vaccination effectiveness/risk, climate change, crime/gun control, or spherical earth. Obviously, this not only constitutes a fatal misunderstanding of how the crowd becomes wise, it also contributes to an unjustified, exaggerated distrust in entities which do have significant expertise and hence credibility.

There are several reasons why social media, as currently implemented, are notoriously incapable of getting anywhere near an ideal ‘wisdom of the crowd’ effect. For one, social feedback loops tend to aggregate people who think alike, i.e., they reduce heterogeneity, when diversity is one of the most decisive factors in achieving crowd wisdom. Second, with our stone-age concept of a crowd, we may intellectually understand, but fail to intuitively grasp, that any group of fewer than a few tens of millions is more of an intimate gathering than a crowd, on internet scales. With today’s social media implementation, it is comparatively easy to quickly gather a few million people who all share some fringe belief.
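The decisive role of diversity is easy to demonstrate in a toy simulation (a sketch only: the crowd size, error magnitudes and the +15 shared bias are illustrative assumptions, not data):

```python
import random

random.seed(42)
truth = 100.0
n = 10_000  # crowd size

# Diverse crowd: errors are independent, so they cancel out in the average.
diverse = [truth + random.gauss(0, 20) for _ in range(n)]

# Like-minded crowd: everyone shares one systematic bias. Adding more
# members adds no new information, so the average never converges on truth.
shared_bias = 15.0
echo = [truth + shared_bias + random.gauss(0, 2) for _ in range(n)]

err_diverse = abs(sum(diverse) / n - truth)
err_echo = abs(sum(echo) / n - truth)
print(err_diverse < err_echo)  # prints "True": the homogeneous crowd stays ~15 off
```

Increasing `n` drives the diverse crowd's error toward zero, while the echo chamber's error stays pinned near its shared bias, which is exactly why size cannot substitute for diversity.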

For instance, round the incidence of schizophrenia to about 1% of the population and assume, for simplicity’s sake, that all of those affected harbor either auditory hallucinations or some other form of delusion. Hence, if 1% of all internet users were delusional in some way or form and only half of them aggregated in an online patient forum, we’d be talking about more than 15 million individuals. I’m pretty sure that such a patient site would deliver a quite astounding news feed with an amazing commentariat. And this is just one of a growing list of psychiatric disorders. How many nominally healthy users would feel compelled to believe the news from this site, re-share items and comment approvingly? Most far-out-there communities have orders of magnitude fewer users but disproportionately large visibility. Indeed, some of those communities may appear just like a subforum of such a patient site, but without a single user actually being diagnosed with any psychiatric disorder whatsoever.
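For the record, the arithmetic behind the "more than 15 million individuals" works out as follows (the ~3 billion internet users is my assumption, roughly the mid-2010s figure the text implies):

```python
internet_users = 3_000_000_000  # assumed global internet population (mid-2010s)
incidence = 0.01                # schizophrenia incidence, rounded to ~1%
joining_fraction = 0.5          # assume half of those affected join one forum

forum_size = internet_users * incidence * joining_fraction
print(f"{forum_size:,.0f} potential forum members")
# prints "15,000,000 potential forum members"
```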

So if we want to take advantage of the micro-expertise of the individuals in a crowd, that crowd needs to be not only ‘sufficiently’ large for the task at hand; more importantly, it needs to be sufficiently diverse, or size quickly becomes almost completely irrelevant. From numerous examples in science and elsewhere, it seems straightforward to argue that we need to harness the individual micro-expertise that anyone can have, without making the mistake of attributing this expertise to everyone. Authority alone cannot serve as a reliable shortcut for credibility, but neither can popularity alone. Here is an idea of how one might combine them.

I may be wrong, but at least for now I would argue that we probably cannot start from scratch, assigning every institution, organization and individual the same kind of credibility. We cannot and should not undo history and track records based in evidence: there are information sources that have a higher credibility than others.

Further, we probably need a score, or at least ranks, that get computed iteratively and recursively. For any individual or piece of information to gather points or climb ranks, there need to be arbiters that already have some credibility – another reason why we likely won’t be able to start with a level playing field. What is less clear is how such a scoring/ranking system ought to be designed: I somehow have the impression that it ought to be difficult to earn credibility, shouldn’t it? Of course, it’s usually “innocent until proven guilty”, but is this a practical approach when doling out credibility? Should we all start with a credibility of 100 and then lose it? Or should we start with 0 and then gain? Does such a score have to go negative?

So far, these ideas have been very vague and general. Here are my first thoughts on how one may go about implementing such a system in science. A prerequisite for such a system is, of course, a modern scholarly information infrastructure. This won’t work with the 350 year-old technology we call ‘journals’.

Because we need diversity and inclusiveness, one would never prevent anybody from posting whatever they have discovered. However, if someone described a discovery from a known research institution, that discovery would receive more credibility than if it were posted by a layman without a track record (even though both scores would still be relatively low at the point of publication). Similarly, if the author list contained professors, the article would receive more credibility than if there were only graduate student authors. Yet more credibility would be assigned if the data and code relevant for the discovery were openly available as well. Once this initial stage had been completed, the document and its affiliated authors and institutions could earn even more credibility, for instance if the code gets verified, or the data checked for completeness and documentation. Those doing the verification and reviewing also need to be diverse, so here, too, there should not be a limit in principle. However, the weight given to each verification (or lack thereof) would differ according to the scores of the person doing the verification. More credit would be awarded for reproducible data analysis (e.g., via Docker and similar tools) and if the narrative accompanying the data/code is supported by them. This whole process would be similar to current peer-review, albeit more fine-grained, likely involving more people (each contributing a smaller fraction of the reviewing work) and not necessarily covering each and every article.
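A minimal sketch of the weighting idea described above. Everything here is hypothetical: the function names, point values and weighting factors are illustrative assumptions, not a worked-out design.

```python
# Hypothetical sketch of recursive, weighted credibility: a new posting
# starts with a score derived from its provenance, then gains (or loses)
# credibility through verifications, each weighted by the verifier's own
# current score. All point values are illustrative assumptions.

def initial_score(known_institution, has_professor, data_and_code_open):
    """Low starting credibility, nudged upward by provenance signals."""
    score = 1.0
    if known_institution:
        score += 2.0
    if has_professor:
        score += 1.0
    if data_and_code_open:
        score += 3.0  # open data/code weighted most heavily here
    return score

def update_score(score, verifications):
    """verifications: iterable of (verifier_score, passed) pairs."""
    for verifier_score, passed in verifications:
        delta = 0.1 * verifier_score  # weight by the verifier's credibility
        score += delta if passed else -delta
    return max(score, 0.0)  # assume scores never go negative

# A posting from a known lab with open data and code, later verified by two
# high-credibility reviewers and challenged by one low-credibility one:
s = initial_score(True, True, True)  # 7.0
s = update_score(s, [(20, True), (20, True), (2, False)])
print(round(s, 1))  # prints "10.8"
```

Because the verifiers' scores are themselves outputs of the same process, the real system would be iterative, with the feedback loops the text warns about; this fragment only shows one round.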

This process would continue to accrue (or lose) credibility as the article receives attention with consequences, e.g., how many views per citation (in accordance with a citation typology, e.g., CiTO), how many views per comment, endorsement or recommendation, etc. This is one possible way of normalizing for field size; another could be analyzing citation networks (as in, e.g., RCR). Clearly, the most credibility ought to be associated with the experiments in each article actually being reproduced by independent laboratories (i.e., a special kind of citation).

In this iterative process, each participant receives credit in various ways for their various activities. Credibility would be just one of several factors being constantly monitored. Points can be awarded both for receiving (and passing!) scrutiny from others and for scrutinizing other people’s work. The resulting system is intended to allow everyone to check the track record of everyone else for some data on how reliable the work of this person (or institution, or community) has been so far, along with more details on how the track record was generated.

Obviously, there are plenty of feedback loops involved in this system, so care has to be taken to make these loops balance each other. The feedback loops found in many biological processes would serve as excellent examples of how to accomplish this. Complex systems like this are also known to be notoriously difficult to game.

These are still very rough ideas, without a clear picture yet of the most suitable or effective implementation, or of whether the desired outcomes can actually be achieved in this way. I also have no good idea how one would take such a system and leverage it outside of science. I would like to hope, however, that by starting on the easier case of science, we may be able to approach a related system for society at large.

Posted on February 2, 2016 at 15:05 6 Comments
Jan12

Even without retractions, ‘top’ journals publish the least reliable science

In: science politics • Tags: impact factor, journal rank, publishing, retractions

tl;dr: Data from thousands of non-retracted articles indicate that experiments published in higher-ranking journals are less reliable than those reported in ‘lesser’ journals.

Vox health reporter Julia Belluz has recently covered the reliability of peer-review. In her follow-up piece, she asked “Do prestigious science journals attract bad science?“. However, she only covered the data on retractions, not the much less confounded data on the remaining, non-retracted literature. It is indeed interesting how everyone seems to be attracted to the retraction data like a moth to the flame. Perhaps it is because retractions constitute a form of ‘capital punishment’: they seem to reek of misconduct or outright fraud, which is probably why everybody becomes so attracted – and not just journalists, but scientists as well, I must say. In an email, she explained that for a lay audience, retractions are of course much easier to grasp than complicated, often statistical concepts and data.

However, retractions suffer from two major flaws which make them rather useless as evidence base for any policy:

I. They only concern about 0.05% of the literature (perhaps an infinitesimal fraction more for the ‘top’ journals 🙂)
II. This already unrepresentative, small sample is further confounded by error-detection variables that are hard to trace.

Personally, I tentatively interpret what scant data we have on retractions as suggestive that increased scrutiny may only play a minor role in a combination of several factors leading to more retractions in higher ranking journals, but I may be wrong. Indeed, we emphasize in several places in our article on precisely this topic that retractions are rare and hence one shouldn’t place so much emphasis on them, e.g.:
“These data, however, cover only the small fraction of publications that have been retracted. More important is the large body of the literature that is not retracted and thus actively being used by the scientific community.”
Given the attraction of such highly confounded data, perhaps we should not have mentioned retraction data at all. Hindsight being 20/20 and all that…

Anyway, because of these considerations, the majority of our article is actually about the data concerning the non-retracted literature (i.e., the other 99.95%). In contrast to retractions, these data do not suffer from any of the above two limitations: we have millions and millions of papers to analyze and since all of them are still public, there is no systemic problem of error-detection confounds.

For instance, we review articles that suggest that (links to articles in our paper):

1. Criteria for evidence-based medicine are no more likely to be met in higher vs. lower ranking journals:
Obremskey et al., 2005; Lau and Samman, 2007; Bain and Myles, 2005; Tressoldi et al., 2013

2. There is no correlation between statistical power and journal rank in neuroscience studies:
Figure 2:


3. Higher ranking journals tend to publish overestimates of true effect sizes from experiments where the sample sizes are too low in gene-association studies:
Figure 1C:


4. Three studies analyzed replicability in biomedical research and found it to be extremely low; not even ‘top’ journals stand out:
Scott et al., 2008; Prinz et al., 2011; Begley and Ellis, 2012

5. Where quality can actually be quantified, such as in computer models of crystallography work, ‘top’ journals come out significantly worse than other journals:
esp. Fig. 3 in Brown and Ramaswamy, 2007

After our review was published, a study came out which showed that

6. In vivo animal experiments are less often randomized in higher-ranking journals, and their outcomes are not more often scored blind in higher-ranking journals either:

[Figure: reporting of randomization and blinding by journal rank]

Hence, in these six (nine including the update below) areas, unconfounded data covering orders of magnitude more material than the confounded retraction data reveal only two out of three possible general outcomes:

a) Non-retracted experiments reported in high-ranking journals are no more methodologically sound than those published in other journals.
b) Non-retracted experiments reported in high-ranking journals are less methodologically sound than those published in other journals.

Not a single study we know of (there may be some we missed; let me know!) shows the third option: higher-ranking journals publishing the most sound experiments. Yet at least one analysis should have found this third option somewhere if there were anything to journal rank with regard to reliability.

Hence, even if you completely ignore the highly scattered and confounded retraction data, experiments published in higher ranking journals are still less reliable than those published in lower ranking journals, and error detection or scrutiny has nothing to do with it.
On that view, one may interpret the observation of more retractions in higher ranking journals as merely a logical consequence of the worse methodology there, nothing more. This effect may then, in turn, be somewhat exaggerated by higher scrutiny, but we have no data on that.

All of these data are peer-reviewed, and several expert peers attested that none of the data in our review are in dispute. It will be interesting to see whether Ms. Belluz remains interested enough to try and condense this much more sophisticated evidence into a form for a lay audience. 🙂

UPDATE (9/9/2016): Since the publication of this post, two additional studies have appeared that further corroborate the impression that the highest ranking journals publish the least reliable science: In the field of genetics, it appears that errors in gene names (and accession numbers) introduced by the usage of Excel spreadsheets are more common in higher ranking journals:

[figure: frequency of Excel-introduced gene-name errors vs. journal rank]

The authors speculate that the correlation they found is due to higher ranking journals publishing larger gene collections. This explanation, if correct, would suggest that, on average, error detection in such journals is at least not superior to that in other journals.

The second study is on the statistical power of cognitive neuroscience and psychology experiments. The authors report that statistical power has been declining since the 1960s and that statistical power is negatively correlated with journal rank (i.e., a reproduction of the work above, with an even worse outcome). Moreover, the fraction of errors in calculating p-values is positively correlated with journal rank, both in terms of records and articles (even though I have to point out that the y-axis does not start from zero!):

[figure: p-value calculation errors vs. journal rank]

Thus, there are at least three additional measures in these articles providing further evidence that the highest-ranking journals publish the least reliable science.

UPDATE II (9/5/2017): Since the last update, at least one additional study has compared the work in journals with different impact factors. In this latest work, the authors compared the p-values in two different psychology journals for signs of p-hacking and other questionable research practices. Dovetailing with the data available so far, they found that the journal with the higher impact factor (5.0) contained more such indicators, i.e., showed more signs of questionable research practices than the journal with the lower impact factor (0.8). Apparently, every new study reveals yet another field and yet another metric in which high-ranking journals fail to provide any evidence for their high rank.

 

UPDATE III (07/03/2018): An edited and peer-reviewed version of this post is now available as a scholarly journal article.

Posted on January 12, 2016 at 10:09 228 Comments
Jan08

Just how widespread are impact factor negotiations?

In: science politics • Tags: impact factor, journal rank, publishing

Over the last decade or two, there have been multiple accounts of publishers negotiating the impact factors of their journals with the "Institute for Scientific Information" (ISI), both before and after it was bought by Thomson Reuters. This is commonly done by negotiating which articles count in the denominator. To my knowledge, the first to point out that this may be going on for at least hundreds of journals were Moed and van Leeuwen as early as 1995 (and with more data again in 1996).

One of the first accounts to show how a single journal accomplished this feat was Baylis et al. in 1999, with their example of the FASEB Journal managing to convince the ISI to remove its conference abstracts from the denominator, leading to a jump in its impact factor from 0.24 in 1988 to 18.3 in 1989. Another well-documented case is that of Current Biology, whose impact factor increased by 40% after acquisition by Elsevier in 2001. To my knowledge, the first and so far only openly disclosed case of such negotiations was PLoS Medicine's editorial about their negotiations with Thomson Reuters in 2006, where the negotiation range spanned 2-11 (they settled for 8.4).

Obviously, such direct evidence of negotiations is exceedingly rare, and publishers are usually quick to point out that they would never 'negotiate' with Thomson Reuters; they would merely ask them to 'correct' or 'adjust' the impact factors of their journals to make them more accurate. Given that Moed and van Leeuwen already found that most such corrections seemed to increase the impact factor, it appears that these corrections only take place if a publisher considers their IF too low, and only very rarely indeed if the IF may appear too high (and who would blame them?). Besides the old data from Moed and van Leeuwen, we have very little data on how widespread this practice really is.
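To see how much leverage the denominator offers, here is a minimal sketch of the standard two-year impact factor calculation. All counts below are hypothetical, chosen only so that the mechanism reproduces the magnitude of the FASEB Journal jump described above; the actual counts in that case are not public.

```python
# Minimal sketch of the two-year impact factor (IF). All counts are
# hypothetical; only the mechanism (shrinking the denominator while the
# numerator stays fixed) mirrors the FASEB Journal anecdote above.

def impact_factor(citations: int, citable_items: int) -> float:
    """IF for year Y: citations received in Y by items published in
    Y-1 and Y-2, divided by the 'citable' items from Y-1 and Y-2."""
    return citations / citable_items

citations = 1830    # citations to the journal's recent content (unchanged)
articles = 100      # research articles counted as 'citable'
abstracts = 7500    # conference abstracts initially also in the denominator

if_with_abstracts = impact_factor(citations, articles + abstracts)
if_without_abstracts = impact_factor(citations, articles)

print(f"IF with abstracts counted:   {if_with_abstracts:.2f}")     # 0.24
print(f"IF with abstracts removed:   {if_without_abstracts:.2f}")  # 18.30
```

The same citation count thus supports almost any impact factor, depending solely on which published items the ISI agrees to count as 'citable'.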

A recent analysis of 22 cell biology journals now provides additional data in line with Moed and van Leeuwen's initial suspicion that publishers take advantage of such 'corrections' on a rather widespread basis. If errors by Thomson Reuters' ISI occurred randomly and were corrected in an unbiased fashion, then impact factors calculated independently from the available citation data should deviate from the published ones in both the positive and the negative direction. If, however, corrections only ever occur in the direction that increases a journal's impact factor, then the published impact factors should be systematically higher than the independently calculated ones. The source of such a bias should be articles missing from the denominator of the published impact factor: these 'missing' articles can nevertheless be found, as they have been published, just not counted in the denominator. Interestingly, this is exactly what Steve Royle found in his analysis (click on the image for a larger version):

[figure: published vs. independently calculated impact factors (left); missing articles per journal (right)]

On the left, you can see that any deviation from a perfect correlation is always towards a higher published impact factor; on the right, you can see that some journals show a massive number of missing articles.
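The consistency check behind this kind of analysis can be sketched as follows: recompute the impact factor from citation and article counts one can gather independently, then infer from the published figure how many articles must be missing from the official denominator. All numbers here are hypothetical, for illustration only.

```python
# Sketch of the consistency check: compare a published impact factor with
# one recomputed from independently gathered counts, and infer how many
# articles are 'missing' from the official denominator. Numbers hypothetical.

def inferred_denominator(citations: int, published_if: float) -> float:
    """Denominator the ISI must have used to arrive at the published IF."""
    return citations / published_if

citations = 1200     # citations in year Y to items from Y-1 and Y-2
counted_items = 400  # citable items one can actually count in Y-1 and Y-2
published_if = 4.0   # official (published) impact factor

independent_if = citations / counted_items
missing = counted_items - inferred_denominator(citations, published_if)

print(f"Independent IF: {independent_if:.1f} vs. published IF: {published_if:.1f}")
print(f"Articles apparently missing from the official denominator: {missing:.0f}")
```

A published IF above the independently calculated one always corresponds to a positive number of 'missing' articles, which is exactly the one-sided pattern in the figure.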

Clearly, none of this amounts to unambiguous evidence that publishers are increasing their editorial 'front matter' both to cite their own articles and to attract citations from outside, only to then tell Thomson Reuters to correct their records. None of this is proof that publishers routinely negotiate with the ISI to inflate their impact factors, let alone that they intentionally make it difficult to classify their articles as citable or not. There are numerous alternative explanations. Personally, however, I find the two old Moed and van Leeuwen papers and this new analysis, together with the commonly acknowledged issue of paper classification by the ISI, just about enough to be suggestive; but then, I am probably biased.

Posted on January 8, 2016 at 15:41 83 Comments
Jan07

How much should a scholarly article cost the taxpayer?

In: science politics • Tags: infrastructure, publishing

tl;dr: It is a waste to spend more than the equivalent of US$100 in tax funds on a scholarly article.

Collectively, the world's public purse currently spends the equivalent of roughly US$10b every year on scholarly journal publishing. Dividing that by the roughly two million articles published annually, you arrive at an average cost per scholarly journal article of about US$5,000.
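The arithmetic behind this figure is trivial enough to check in two lines; both inputs are the rounded estimates from the paragraph above.

```python
# Back-of-the-envelope check of the average per-article cost.
annual_spend = 10e9        # ~US$10 billion in public funds per year
articles_per_year = 2e6    # ~2 million scholarly articles per year

cost_per_article = annual_spend / articles_per_year
print(f"Average cost per article: US${cost_per_article:,.0f}")  # US$5,000
```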

Inasmuch as these legacy articles are behind paywalls, the average taxpayer does not get to see what they pay for. It is even worse for academics: besides not being able to access all the relevant literature, cash-strapped public institutions are sorely missing the subscription funds, which could have modernized their digital infrastructure. Consequently, researchers at most public institutions are stuck with technology that is essentially from the 1990s, specifically with regard to infrastructure taking care of their three main forms of output: text, data and code.

Another pernicious consequence of this state of affairs: institutions have been stuck with a pre-digital strategy for hiring and promoting their faculty, namely judging them by the venues of their articles. As the most prestigious journals publish, on average, the least reliable science, but the scientists who publish there are rewarded with the best positions (and, in turn, train their students to publish their unreliable work in these journals), science is now facing a replication crisis of epic proportions: most published research may well be false.

Thus, both the scientific community and the public have more than one reason to try and free some of the funds currently wasted on legacy publishing, and a few new players on the publishing market now offer their services for considerably less. Not surprisingly, in developing countries, where cash is even more of an issue, a publicly financed solution (SciELO) was developed more than 15 years ago; it publishes fully accessible articles at a cost of between US$70-200, depending on various technical details. Over the following 15 years, problems have accumulated in the richer countries as well, prompting the emergence of newer publishers and service providers such as Scholastica, Ubiquity, RIO Journal, Science Open, F1000Research, PeerJ or Hindawi, some of which quote a ballpark price range from just under US$100 to under US$500 per article. Representatives of all of these publishers independently tell me that their costs per article are in the low hundreds, and Ubiquity, Hindawi and PeerJ are even on record with this price range. [After this post was published, Martin Eve of the Open Library of the Humanities quoted roughly the same costs for their enterprise. I have also been pointed to another article that sets about US$300 per article as an upper bound, in line with all the other sources.]

Tweet link.

Now, as a welcome confirmation, yet another company, Standard Analytics, comes to similar costs in their recent analysis.

Specifically, they computed the ‘marginal’ costs of an article, which they define as only taking “into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations“. I understand this to mean that if an existing publisher wanted to start a new scholarly journal, these would be the additional costs they would have to recoup. The authors mention five main tasks to be covered by these costs:

1) submission

2) management of editorial workflow and peer review

3) typesetting

4) DOI registration

5) long-term preservation.

They calculate two versions of how these costs may accrue. One method is to outsource these services to existing vendors; the prices they calculate using different vendors range between US$69-318, hitting exactly the ballpark all the other publishers have been quoting for some time now. Given that public institutions are bound to choose the lowest bidder, anything above the equivalent of around US$100 would probably be illegal, let alone US$5,000.

However, as public institutions are not (yet?) in a position to competitively advertise their publishing needs, let’s consider the side of the publisher: if you are a publisher with other journals and are shopping around for services to provide you with an open access journal, all you need to factor in is some marginal additional part-time editorial labor for your new journal and a few hundred dollars per article. Given that existing publishers charge, on average, around €2,000 per open access article, it is safe to say that, as in subscription publishing, scientists and the public are being had by publishers, as usual, even in the case of so-called ‘gold’ open access publishing. These numbers also show, as argued before, that just ‘flipping’ our journals all to open access is at best a short-term stop-gap measure. At worst, it would deteriorate the current situation even more.

Be that as it may, I find Standard Analytics' second calculation even more interesting, as it conveys an insight that was entirely new, at least to me: if public institutions ran the five steps above in-house, i.e., as part of a modern scholarly infrastructure, the marginal cost per article would drop to below US$2. In other words, the number of articles completely ceases to be a monetary issue at all.

In his critique of the Standard Analytics piece, Cameron Neylon pointed out, with his usual competence and astuteness, that some of the main costs of scholarly communication are of course not marginal costs that can be captured on a per-article basis. What requires investment are, first and foremost, standards according to which scholarly content (text/audio/video: narrative, data and code) is archived and made available. The money we are currently wasting on subscriptions ought to be invested in an infrastructure where each institution has the choice of outsourcing vs. hiring expertise themselves. If the experience of the past 20 years of networked digitization is anything to go by, we need to invest these US$10b/a in an infrastructure that keeps scholarly content under scholarly control and allows institutions the same choices they have in other parts of their infrastructure: hire plumbers, or get a company to show up. Rent hosting space at a provider, or put servers into computing centers. Or any combination thereof.

What we are stuck with today is nothing but an obscenely expensive anachronism that we need to dispense with.

By now, it has become quite obvious that we have nothing to lose, neither in terms of scholarly nor of monetary value, and everything to gain from taking these wasted subscription funds and investing them to bring public institutions into the 21st century. Meanwhile, every year we keep procrastinating, another US$10b goes down the drain, lost to academia forever. In the grand scheme of things, US$10b may seem like pocket change; for the public institutions spending it each year, it would constitute a windfall. Given that the 2m articles we currently publish would cost less than US$4m at marginal rates, we would have in excess of US$9.996b to spend each year on an infrastructure serving only a few million users. As an added benefit, each institution would be back in charge of its own budget decisions, rather than having to negotiate with monopolistic publishers. Given the price of labor, hard- and software, this would easily buy us all the bells and whistles of modern digital technology, with plenty to spare.
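The windfall arithmetic above can be checked with the post's own round numbers: US$10b annual spend, 2m articles, and the sub-US$2 marginal cost from the Standard Analytics calculation taken as an upper bound.

```python
# Checking the savings arithmetic with the round numbers used in this post.
annual_spend = 10e9    # current worldwide spend on journal publishing, per year
articles = 2e6         # articles published per year
marginal_cost = 2.0    # upper-bound marginal cost per article (Standard Analytics)

publishing_total = articles * marginal_cost      # at most US$4m per year
freed_funds = annual_spend - publishing_total    # US$9.996b per year

print(f"Total marginal publishing cost: US${publishing_total / 1e6:.0f}m")  # 4m
print(f"Freed for infrastructure: US${freed_funds / 1e9:.3f}b")             # 9.996b
```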

Posted on January 7, 2016 at 13:42 90 Comments

Creative Commons License bjoern.brembs.blog by Björn Brembs is licensed under a Creative Commons Attribution 3.0 Unported License. | theme modified from Easel | Subscribe: RSS | Back to Top ↑
