bjoern.brembs.blog

The blog of neurobiologist Björn Brembs





Why can Elsevier keep insulting scholars without consequences?

In: science politics • Tags: Elsevier, publishers

Academic publishers in general, and Elsevier in particular, have a reputation for ruthless profiteering, using professional negotiators to pit hapless librarians against their own faculty during journal subscription negotiations. Consequently, these companies boast profit margins of over 40%, while the industry average for all periodicals hovers around 5%.

One strategy that has worked exceedingly well is to insult the intelligence of their customers. There are many examples, but the classic surely must be to raise prices so much that a steep discount makes price increases of double the inflation rate look like a bargain to the cornered librarian:

As if to demonstrate the publishers’ assumption that scholars lack the intellectual resources to see through this strategy, publisher consultant Joe Esposito felt it necessary to explain these rather obvious shenanigans.

As outlined previously, this condescending assumption that scholars lack the neuronal wherewithal to understand and hence counteract publisher exploitation is a recurrent experience when interacting with academic publishers, whether in person, in writing, or through publisher behavior. A particularly galling example among the already mentioned and linked ones is ‘green’ embargoes, i.e., the rule that the author version of a subscription article cannot be posted until a certain period of time has passed. These embargoes are implicit admissions that without them, nobody would be willing to pay for the work the publisher has added to the author version. In other words, the publishers add nothing of value to a scholar’s work, and yet scholars keep agreeing to publisher embargoes and keep paying obscene subscription fees. Looking at the financials of these corporations, it does not seem like this kind of behavior has hurt their bottom lines in any way.

Elsevier in particular is a perennial gold mine of such obvious insults to scholars. The latest case in point is their ‘vision‘ of a transition to open access. I commented that this short article manages to insult the intelligence of scholars in three different ways:

I took a screenshot because I suspected the comment would not get approved by Elsevier, which of course it didn’t. [UPDATE, 04-10-2017: Apparently the comment is being held in moderation until it can be posted together with a response from Elsevier.] They offered to post a redacted version, which I declined. The comments already posted on the article before mine quite clearly show that not everyone feels insulted.

Interestingly, Elsevier quickly responded on Twitter, where I had posted the screenshot of my comment. However, before I go into what Elsevier did respond to, I should emphasize the much more important aspects Elsevier chose not to engage with immediately:

  1. The most important aspect of the comment is that the ‘vision’ aims to increase prices, when they are already 90% over what we should be paying if publishers didn’t use their monopolies to extort an obscene subscription ransom. Elsevier did not choose to engage on that front.
  2. They also didn’t address the fact that Elsevier has a long track record of trying to prevent, or at least stall, the transition to open access by various regulatory means. One constant target has been the practice of depositing author manuscripts in institutional repositories (aka ‘green’ open access). Not only are most Elsevier manuscripts under an embargo, Elsevier also paid about US$40,000 to lawmakers in the US to sponsor legislation that would have made this green route to open access illegal. In their vision, they proposed another angle to hamper access, geoblocking, as if these repeated attempts at stalling or preventing access weren’t part of the public record. This consistent resistance to making anything public should be anathema to any ‘publisher’, and we as scholars are apparently too intellectually challenged to notice.
  3. They also did not react to my statement that they rely essentially exclusively on subsidized labor for their quality control (i.e., peer review), which means it can’t factor in as a cost. And yet, as if scholars were sheepish cash cows only there to be milked for corporate profits, we are expected to accept that ensuring the “quality and integrity of the scientific record” is something we should pay even more for than we do today.

Of all the points he could have reacted to, Tom Reller, head of Corporate Relations for Elsevier, took the time to send off a tweet on this particular issue:

He was referring to the six fake journals Elsevier published until 2005 (i.e., 12 years ago; discovered in 2009). I had raised this issue in a clause referring to the integrity of the scientific record. The six fake journals were part of a stealth marketing campaign funded by Merck in the guise of peer-reviewed scholarly journals. These journals were then distributed for free to medical doctors to get them to prescribe Merck products on the basis of the purported ‘scholarly literature’. In other words, Merck paid Elsevier to publish Merck advertisements designed to look like scholarly journals. As if to prove that one can risk patients’ health and insult scholars by faking journals without any major consequences, the only repercussion Elsevier faced was having to publicly apologize in 2009, when the scandal broke. It is this apology that Mr. Reller referred to in his next tweet:


Elsevier never organized arms fairs? Well, let’s see what Google has to say about that:

I think it is quite clear that ‘Elsevier’ shows up quite a bit when you search for “Elsevier arms trade”. However, you also see that it comes with another name: “Reed”. “Reed Elsevier” (now RELX) was the parent company of Elsevier. So technically, Mr. Reller is correct that the Elsevier branch of Reed Elsevier didn’t itself organize arms trade: Elsevier outsourced this job to a sister company in the same corporation. I’m sure every scholar is now just as convinced as Mr. Reller that Elsevier was as upset as anybody else that Reed Elsevier was boosting arms sales while simultaneously selling health journals.

Mr. Reller also expressed the sentiment that buying politicians for small amounts of money (~US$40k in this case) to sponsor legislation that makes open access illegal is “ok”. I am somewhat more hesitant to assume that scholars cannot find anything wrong with a ‘publisher’ trying to bribe politicians into making public access illegal.

Finally, Mr. Reller is correct that Elsevier has never been convicted of any ‘price gouging’ (if this term even exists in a legal sense). In denying price gouging, however, Mr. Reller assumes scholars cannot do simple arithmetic. Elsevier’s revenue is easily discovered: about US$3 billion. Roughly 75% of this revenue is said (reference, thanks to Christian Gutknecht in the comments!) to come from public sources, i.e., about US$2.25 billion. Mr. Reller himself tells us that Elsevier publishes about 400,000 articles annually. These numbers tell us that the public is paying about US$5,625 for each Elsevier article. This is about the same amount estimated for any scholarly article world-wide. Actual costs for publishing range from under US$100 to about US$500, depending on various factors. Thus, simple arithmetic tells us that Elsevier charges about ten times their actual costs of making an article public. That may not fall under any jurisdiction’s definition of price gouging, and Elsevier certainly never had to publicly apologize for this outrageous behavior.
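The arithmetic above is simple enough to check in a few lines (a sketch using only the round figures quoted in this post, not audited numbers):

```python
# Back-of-the-envelope check of the per-article arithmetic in this post.
# All inputs are the post's own round estimates, not audited figures.

revenue_usd = 3_000_000_000     # Elsevier's approximate annual revenue
public_share = 0.75             # fraction said to come from public sources
articles_per_year = 400_000     # Mr. Reller's own publication figure

public_money = revenue_usd * public_share           # ~US$2.25 billion
cost_per_article = public_money / articles_per_year

print(f"Public money per article: US${cost_per_article:,.0f}")  # US$5,625

# Compare with the estimated actual publishing costs of ~US$100-500:
markup_at_high_cost = cost_per_article / 500   # ~11x even at US$500/article
markup_at_low_cost = cost_per_article / 100    # ~56x at US$100/article
print(f"Markup: {markup_at_high_cost:.0f}x to {markup_at_low_cost:.0f}x")
```

Even taking the most generous cost estimate, the factor of roughly ten holds up.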

Maybe Elsevier is right: scholars are stupid and will continue to overlook these insults, while Elsevier is laughing all the way to the bank?

[UPDATE: 06-10-2017] An unidentified technical issue prevented Mr. Reller’s comment from being posted, so I am posting it straight to the post instead. I don’t think it needs any additional comments from me:

Dear Björn, I’m sorry you are so disturbed by this, but your comments here and on Twitter and your blog all reflect an inaccurate view of our past and current business. For starters, there are no grounds to accuse us of anti-competitive activities. As a large player in the sector, our business practices have been reviewed in many markets (usually in the context of journal and company acquisitions), and we’ve been given a clean bill of health in every instance. The truth is, scholarly publishing is a vibrant market with lots of choices and one we’re excited to compete openly and fairly in.

On geo-blocking, there is no such proposal, nor was that the point of the piece. The piece as a whole looks at the transition to open access and where the challenges are. It is the broader questions raised in the piece that need to be addressed by all stakeholders before getting into details of what models might look like. We do, constructively and positively, propose two possible ways of helping Europe meet its ambitions for gold OA when the world is not united around one single model. But we do not go into any details around how these models might work, as there are broader questions that need to be answered first.
The Australasian fake journals you refer to were produced by between four to six employees working for a local Excerpta Medica office in Australia, outside the operation of our traditional journal business. Those were pharma marketing magazines that were common to that market, with very limited free print distribution and published prior to improvements in disclosure protocols (post Vioxx). Still, they lacked proper oversight and while they were full of sponsor’s advertisements (hardly ‘stealth’ as you suggest), they didn’t meet our global standards for disclosure, and we regretted their production (we sold the EM business for strategic reasons back in 2011).
Regarding your comment about the usage of publicly funded scholarly labor, we only charge for the content we’ve added value to. Any publicly funded content that hasn’t been voluntarily submitted to us for publication is owned by the author (or funder), who is free to disseminate it in any manner they wish. What we all have to acknowledge is that authors continue to send us and other publishers their content to treat and publish in growing numbers each year – thus, validating our value-adding activities, which includes protecting the quality and integrity of the scholarly record. Preprints are more accessible every day through the rise of preprint servers and free-to-low cost access programs.

You suggest that we paid politicians, presumably referring to the US, where donations to political campaigns are commonplace, highly regulated and totally transparent. Many US universities and their trade associations also have lobbyists and political action committees. We support the election campaigns of thought leaders on many of the public policy issues we follow. American lawmakers share vastly different views on thousands of issues and draft or support legislation according to how strongly they feel about a given issue.
On your blog, you accused us of owning an arms trade fair (10 years ago), to which we pointed out that Elsevier never owned that show – a sister exhibitions business did. Associating us with that show is akin to blaming a math department for something an athletic department did. We’re all different businesses within a holding company. We all work to share back-office costs and infrastructure, but we’re not responsible for each other’s business activities. In fact, we at Elsevier deserve credit for listening to our community and convincing our parent company and sister business to exit that show.

Lastly, I see you’re making various attempts at calculating actual costs vs revenues of our journals business. Such attempts will always be inaccurate as you’re not considering that we have a broadly diversified business that involves costs and revenues from a wide variety of product lines other than journals.
I don’t see how our efforts to provide more transparency into how we view the marketplace is insulting to anyone, but that’s for you to decide for yourself. I personally think suggesting we think authors and customers who use our services are stupid is well, just that. We’ll continue our ongoing dialogue with the community in the hopes that you and others will have a more accurate view of our contributions to science. Thank you.

Posted on October 3, 2017 at 11:05 10 Comments

With the access issue temporarily solved, what now?

In: science politics • Tags: infrastructure, open access

Almost 25 years after Stevan Harnad’s “subversive proposal“, scholars and the public finally have a range of avenues at their disposal to access nearly every scholarly article. Public access, while not the default, has finally arrived. Granted, while all of the current options are considered legal for the reader, not all providers of scholarly literature conform to every law in every country. Thus, the current state of universal access to scholarly works can only be considered a time-limited transition period. In other words, a window of opportunity: we now have access to everything; what do we do next?

Obviously, the first thing we should do is party! This is a milestone that I don’t think has been appreciated enough. For probably the first time in human history, nearly all scholarly articles are publicly accessible! Why isn’t that on national news everywhere?

Once recovered from the hangover, though, what should the next step be, given that the current state of bliss likely won’t last forever?

There are those who don’t see full access to the scholarly literature as a big deal. They do have some very valid points: the accessible version may not be the official version of record, the licenses of the works vary hugely and are most often very restrictive with regard to re-use, and the scholarly works are difficult or impossible to content mine. Add to this that the current, imperfect access is only temporary, and there is huge reason not to just leave things the way they are now.

Clearly, all of these points need fixing, and if the current state of access isn’t really all that relevant, as may be claimed, then there is no reason to abandon the drive of the last 25 years to either convince faculty to pretty please publish in open access journals and pay the fees, or to convince funders to mandate deposition in repositories or ‘gold’ open access publishing. However, despite the current success, this strategy of winning over faculty hasn’t been very effective: only a fraction of the current access is created by gold/green open access; much of it stems from Sci-Hub and sharing sites such as ResearchGate. In other words, as fantastic as the full access to the literature we now enjoy feels, it was brought about only to a small extent by the changed publication behavior of faculty.

Full access without content mining/TDM restrictions, with liberal re-use licensing, and with clear version control can only come from scholars being able to control these properties. As long as we hand such decisions over to entities with orthogonal interests, like the current legacy publishers, these issues will not go away. What happens when we keep outsourcing all of these decisions to entities with their own agenda? A recent editorial paints a bleak, 5-step future:

First, the authors published a formal research protocol in a peer-reviewed journal (F. Cramond et al. Scientometrics 108, 315–328; 2016). […]

Second, the authors posted the final draft paper describing their conclusions on a preprint server before submission (M. R. Macleod and the NPQIP Collaborative Group. Preprint at bioRxiv https://dx.doi.org/10.1101/187245; 2017).

Third and fourth, the group released the data-analysis plan and the analysis code before data collection was completed. These were registered on the Open Science Framework (https://osf.io/mqet6/#).

Fifth, the complete data set was publicly deposited on Figshare (https://dx.doi.org/10.6084/m9.figshare.5375275.v1; 2017).

Preprint on bioRxiv, (preregistered) paper in Scientometrics, data on Figshare, code on OSF. If that is what counts as progress these days, I fear for the future!

Having to jump through all of the hoops described in the editorial just to make every single step of the scientific process open by hand, sort of against the easy, natural way of doing it, reminds me of a hypothetical scenario in which a bunch of “220V activists” in a 110V country have been trying for 25 years to convince their friends to exclusively buy 220V appliances and add transformers to be able to use them. Now, the media starts to report on a household where each appliance has its own transformer as a sign of a new, modern 220V era.

To me, this scenario sounds like a 5-step nightmare! The 5-step dream our lab is working towards looks more like this:

  1. Write code/design experiment
  2. Do experiments
  3. Analyze data
  4. Write/record narrative
  5. Click on ‘publish’ to make it all accessible at once in one place (if the bits weren’t already automatically accessible before the narrative).

Just as in the hypothetical 220V scenario, the rational solution cannot be to exacerbate the balkanization of our scholarly output by making each scholar place each piece of their output in a different container, sealed off from all the others. Are we really living in a world where we want 220V appliances but not 220V from our power outlets? The rational solution is the equivalent of utilities providing 220V to all households: our infrastructure needs to provide the five steps of my dream out of the box, as a native function of how our institutions handle scholarly output.

We are currently abusing our libraries and librarians to prop up an unsustainable subscription infrastructure at a point in time when nobody needs subscriptions any more. At the same time, some open access proponents keep pushing the old agenda, telling everybody that on top of payments for subscriptions we also need to pay for publishing in luxury journals. Some of us, having realized that approaching colleagues proved counter-productive, even move towards indoctrinating early-career researchers to risk their jobs for the greater good (of installing 220V transformers?).

Not surprisingly, publishers are starting to position themselves for that new generation. Just as a smart utility would realize that selling ‘approved’ transformers while keeping everything else at the old 110V can only increase profits, legacy publishers like Elsevier have not only started buying startups that provide services for the entire research process, but have also warned that this is going to make things much more expensive. Thus, if we keep doing what we have been doing, we run the considerable risk of making everything worse, and not only from a monetary point of view. So I agree with Elsevier and their vision: if we don’t stop them, we will not only pay ‘publishers’ much more than we do now, we will also suffer with regard to our ability to develop and re-use our scholarly output.

The solution analogous to convincing utilities to serve 220V is, of course, to convince infrastructure experts in libraries and computing centers to drop subscriptions, now that they have temporarily become superfluous anyway, and to use the saved money to buy an infrastructure that actually saves scholars time and effort, rather than causing them headaches and threatening scientific reliability. With such an infrastructure (which, at its core, essentially just requires a set of basic, evolvable standards), institutions can run bidding processes to create thriving markets with actual competition around our scholarly works, ensuring permanently low pricing.

Posted on September 27, 2017 at 13:20 4 Comments

Come do research with us!

In: news • Tags: position, postdoc, work

At the end of this year, our amazing postdoc Axel is going back to Argentina to start his own lab. This means we are looking for a new postdoc to start next year. Earliest starting date is February 11, 2018.

As this position is funded by our university and not by a grant, there is no specific project the postdoc would have to work on. However, our lab uses invertebrates to study spontaneous behavior and how feedback modulates future behavior, and the candidate should be interested in working in this field. There is also no specific end date for the position, but the longest theoretical duration of the contract would be 12 years. The actual duration of the contract is subject to negotiation.

Besides curiosity and the drive to work independently, candidates should bring a combination of experiences and interests, such as some of the following:

  • a PhD or equivalent
  • work in Drosophila, Aplysia or the leech (behavior and/or physiology)
  • quantitative analysis of behavior or neural recordings/imaging
  • open science
  • neurogenetic circuit analysis
  • coding

Anyone with an interesting vision of what they would do with this position will be considered.

At the anticipated starting date, the candidate would join two graduate students (a third one to start next summer), a technician, and me. Pay would be the standard German postdoctoral salary with all the usual benefits (health, unemployment, social security, etc.). As of right now, there is no specific deadline for applying, as this is just an informal way to get the word out. I’ll be on vacation for much of this month and will screen applications once I return. If it seems like more candidates are needed for comparison, I will write a proper job ad. If not, I’ll let the candidates know the schedule for interviews in September.

Posted on August 1, 2017 at 18:28 1 Comment

7 functionalities the scholarly literature should have

In: science politics • Tags: functionality, innovation, literature, scholarship

As a regular user of the scholarly literature since before the internet (I started reading primary scientific literature for a high-school project around 1989), I have closely followed its digitization. I find it rather frustrating that some of the most basic functionalities we have come to expect from virtually every digital object are still missing from scholarly articles, making the literature much less useful than it could be. Some of these functionalities are more than 20 years old.

Nobody had to tell Zuckerberg that people wanted Facebook. One would think that with profit margins sometimes exceeding 40% on billions in revenue, academic publishers would use some of that cash to provide their readers with at least a modicum of modern functionality, without constant prodding. Alas, it appears these publishers have different priorities. They sue Sci-Hub, send take-down notices to Academia.edu, or buy start-ups for reference management, preprint servers, laboratory notebooks or altmetrics. It seems they don’t really know what to do with all this cash burning holes in their pockets! Here are some long-overdue features publishers could instead invest the money they stole from us in (in no particular order):

1. Accessibility

Today, most human readers with an internet connection can, after an often awkward and cumbersome process that sometimes involves several search engines and a variety of other tools, read every digital scholarly article. However, in the age of Big Data, mining the scholarly literature for content is so fundamental that it boggles the mind how publishers can get away with simply blocking this kind of research arbitrarily.

2. Smooth peer-review

Peer-review, like any similar social endeavor, involves humans with differing opinions, approaches and social skills. Today’s version of formal peer review of the scholarly literature has been around since roughly the 1950s. Given the time elapsed, one would think the realization would have set in that a process already fraught with trials and tribulations shouldn’t be needlessly compounded by cumbersome and clunky technical procedures. Today’s common procedures risk amplifying the chances of misunderstanding while exacerbating the anxiety and misbehavior of the people involved.

Those of us old enough to read and write at the time have enjoyed web-based message boards (or online forums) since at least 1994. Online commenting and annotation in web-based word processors such as Google Docs have been around since about 2005. Compared to the current practice of a single text review and a single text reply by the authors, these (in web terms) ancient technologies appear almost like magic. How can publishers in 2017 get away with service from before 1994? In particular since this idea, entered by Koen Hufkens, received an award way back in 2012?

3. Social components

Social media technology started around 1999 with the development of “web 2.0”. However, our scholarly literature is still firmly stuck in web 1.0, despite the commendable efforts of some lone publishers to implement one of the earliest social functionalities: commenting. But commenting is only one of many social technologies, and one that may be better implemented as part of a formal peer-review process anyway. There is little reason we shouldn’t be able to form scholarly online communities that share common interests; after all, scholarly societies have been around for centuries. These communities could use social functionalities to share articles, but also to track recommendations, citations, downloads, etc., as we do with any other digital object. Obviously, such functionality only makes sense as a built-in property of the literature, not as some duplicated space where some users share some of the literature, à la RG et al. Such basic functionality would let readers know who in their communities is reading which articles, and which colleagues are publishing which results.

Remember the “customers who bought this book also bought this one” recommendations from Amazon? That was around 1994, 23 years ago, and still no sign of our scholarly literature implementing this ancient feature as a native component.
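To illustrate how little technology such a feature actually requires, here is a minimal “readers who read this also read” sketch; the article identifiers and reading logs are invented for illustration:

```python
from collections import Counter
from itertools import permutations

# Hypothetical reading logs: each set holds the articles one reader accessed.
reading_logs = [
    {"doi:10/aaa", "doi:10/bbb", "doi:10/ccc"},
    {"doi:10/aaa", "doi:10/bbb"},
    {"doi:10/bbb", "doi:10/ccc"},
]

# Count how often each ordered pair of articles was read by the same person.
co_reads = Counter()
for log in reading_logs:
    for a, b in permutations(log, 2):
        co_reads[(a, b)] += 1

def also_read(article, top_n=3):
    """Return the articles most often read alongside `article`."""
    ranked = [(other, n) for (art, other), n in co_reads.items() if art == article]
    return sorted(ranked, key=lambda pair: -pair[1])[:top_n]

print(also_read("doi:10/aaa"))  # [('doi:10/bbb', 2), ('doi:10/ccc', 1)]
```

A native version would of course run over real usage data and respect reader privacy, but the core mechanics have been this simple since the mid-1990s.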

4. Web-based data visualizations

These days, in our private lives, we routinely zoom through all kinds of map-like data, either with pinching fingers or the scroll wheel. We rotate all kinds of 3D objects and dynamically adjust the graphs displaying the usage statistics of our personal sites. For the scholarly literature, however, the one visualization that counts is almost exclusively the flat, pixel-based image that displays only what the authors want their readers to see. While some journals demand that their authors deposit their data with the publication of their article, this is not done to ensure proper visualization of said data. Instead, it becomes a tedious, manual process by which authors may pay lip service to data accessibility in principle, without any added benefit to scholarship other than the theoretical ability of a select few experts to take a second look at the data (with even that second look being cumbersome and manual). As if to add insult to injury, nobody seems to care about the software we write to transform the bits and bytes of the raw data into those flat, pixel-based images.

5. Hyperlinks

The first public demonstration of hyperlinks was also the first time a computer mouse was demonstrated. It was in 1968, in what is today called “the mother of all demos“. In the almost 50 years since then, we haven’t even managed to properly implement hyperlinks in the scholarly literature. For example, try to click on a very common sentence such as “the experiments were performed as previously described”. In essentially every single case today, nothing happens, while in the demo in 1968, it would have taken the reader to a document describing the experiments.

If today’s reader is lucky, there is a clickable reference behind the sentence. In the majority of cases, however, it will just lead the reader to the place in the reference list where the full reference is listed. Whether that reference is clickable remains hit or miss. Even in the affirmative case, the click will still not lead to a description of the experiments, but to a paper (or a paywall). If that paper is accessible, today’s reader may again be lucky and find the particular experiment buried somewhere in the Materials and Methods section, or, if less lucky, only some components with further references.

Imagine what would happen to an online store that treated customers like this when they wanted more detailed descriptions of the merchandise. As if it wasn’t already clear before, this comparison should make it quite obvious what academic publishers think of their readers.

6. Semantic web technology

Coincidentally, proper hyperlinking of our literature with standard technology from about a decade ago would also provide every article with an automatic list of citing articles (i.e., pingbacks/trackbacks) and allow deep citations (aka anchors) to text, data and code. This would simultaneously allow us to implement a citation ontology to specify what kind of citation we are using, with myriad potential use cases, all benefiting scholarship.
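To make the typed-citation idea concrete: a typed citation is just a machine-readable statement about the relation between two works. A minimal sketch, using the real CiTO (Citation Typing Ontology) vocabulary but invented, hypothetical DOIs, could serialize such statements as RDF N-Triples:

```python
# Namespace of CiTO, the Citation Typing Ontology (a real vocabulary;
# the DOIs below are invented for illustration).
CITO = "http://purl.org/spar/cito/"

# Hypothetical typed citations: (citing DOI, citation type, cited DOI).
citations = [
    ("10.1234/new.paper", "supports",      "10.1234/old.result"),
    ("10.1234/new.paper", "disagreesWith", "10.1234/rival.claim"),
    ("10.1234/new.paper", "usesMethodIn",  "10.1234/methods.paper"),
]

def to_ntriple(citing, relation, cited):
    """Serialize one typed citation as a single RDF N-Triples line."""
    return (f"<https://doi.org/{citing}> "
            f"<{CITO}{relation}> "
            f"<https://doi.org/{cited}> .")

for citation in citations:
    print(to_ntriple(*citation))
```

With statements like these embedded in articles, "this paper disputes that one" becomes a query rather than a manual literature review.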

Speaking of ontologies: semantic mark-up of all our scholarly articles would greatly benefit all kinds of scholarship, be it systematic reviews, content mining or simple literature searches, etc.

7. Single click submission

Once I click “publish” on this blog post, it is made public without any further ado. Yet, despite institutional logins, ORCID, etc., the vast majority of publishers still require us to fill in forms about each author and copy-and-paste titles, abstracts and other information, as if we had just put a stack of paper sheets into an envelope rather than submitted a document that already contains all this information.

I’m sure there are more candidates for obvious functionalities every user of the scholarly literature would like to have, but I’ll leave it at that for now. At least no academic publisher can claim any longer that they didn’t know what their users wanted.

Posted on August 1, 2017 at 16:52 1 Comment

Looking for a PhD student

In: science

Our lab is looking for a PhD student interested in the molecular mechanisms of operant self-learning, a form of motor learning. The work will mainly revolve around the FoxP gene – the Drosophila orthologue of the language-associated gene FOXP2 in humans (note to self: need to update the Wikipedia entry with the fly data).

Specifically, the candidate will be involved in generating monoclonal antibodies for at least one, possibly all three, isoforms of the gene; they will use modern techniques to disrupt FoxP expression and function, as well as to tag gene expression during development and in the adult; and they will use both standard GAL4/UAS transgenes and genome editing (e.g., RMCE, CRISPR/Cas9) for these tasks. These molecular, developmental and anatomical data will be combined with the behavioral data from the second student in the team. We have some basic primers on the research topic, as well as the entire proposal, available for further reading.

Hence, we are looking for a team-oriented candidate with an inclination for molecular work in the fruit fly Drosophila. As is commonplace in Germany, this will be a three-year project, funded by a DFG 65% position, i.e., about 1,600€/month after tax, with full benefits, membership in our graduate schools and all the usual bells and whistles that come with such a position in Germany. There are no lectures to attend or rotations to adhere to – just 100% pure, unadulterated research fun. We therefore expect a Master’s degree or equivalent and at least some course experience in standard molecular cloning techniques, ideally, but not necessarily, in Drosophila. Any experience in genome editing is a plus, but again, not necessary.

We perform all of our research in the open, meaning that we make all our data and code accessible wherever technically feasible. We are a small, international team of scientists and we would be more than happy to have you join us!

 

Posted on July 10, 2017 at 17:49 Comments Off on Looking for a PhD student
Apr18

Why I march

In: personal • Tags: democracy, diversity, march for science, no borders

There have been many discussions about the march for science, pro and con. Some of them have made me doubt the utility of the march, some have made me fear unintended consequences, and still others seemed tangential and petty. In these past months, I have struggled to articulate my own reasons for feeling the urge to march for science. Today, I am starting to see two main reasons to march. I am still unsure whether these are the right reasons, whether they justify my presence, or whether they can be used against me. Be that as it may, they are what motivates me the most to take a stand this coming Saturday.

As a student, I spent a winter term abroad in northern Sweden. I spent my postdoctoral fellowship, almost four years, in Houston, Texas. I am now a professor of neurogenetics in Germany. Over the years, I have worked with colleagues from more countries than I can count. One thing that has become clear to me in this time is that difficult problems need to be tackled from all angles if they are to be solved. Tackling problems from all angles requires a diversity of thought and creativity, and diversity of thought can only be maximized by a diversity of backgrounds. I therefore strive to make my lab as diverse as I possibly can. Science is both the child and the mother of diversity. Science is successful because of diversity. Any political confinement is hence necessarily detrimental to science.

Enthusiastic, creative individuals from all walks of life, countries, ethnicities, orientations and abilities are the most powerful resource science can tap into for progress. Conversely, all of humankind stands to benefit from science, which is why I have been active in the Open Science movement for over a decade now.

One reason I march for science in Munich on April 22, 2017 is because I firmly believe in open science serving an open society.

There is a second reason why I march for science. Not unlike at other times in history, fringe movements have recently emerged which excel at using new media to manipulate a society not yet accustomed to these media. Then as now, one tool wielded by these movements is to spread uncertainty and doubt, so that their incompetence goes unnoticed for as long as possible. Then as now, one target is the common reality derived from evidence and reason. Science is a formalized method of combining evidence and reason. Both in itself and through the scientific thinking which emanates from it, science has become a cornerstone of a modern, pluralistic democracy. Any attack on evidence and reason can only be seen as an attack on science and the democratic society it serves. With recent direct verbal attacks on science and scientists, I believe it is time to stand up in defense of science. Science is worth defending not only for its own sake, but because I believe a society which does not support science and scientific thinking can never be a functioning democracy.

I hence march for science in Munich on April 22, 2017, because I see science as a diverse, global endeavor benefiting all humankind, but also because I fear that scientific knowledge and scientific thinking, as fundamentals of a modern democracy, may be in jeopardy.

Posted on April 18, 2017 at 12:55 Comments Off on Why I march
Mar31

How to convince faculty to support subscription cancellations

In: science politics • Tags: cancellations, libraries, subscriptions

There have been repeated online discussions about my suggestion to libraries that now would be an excellent time to start cancelling subscriptions. The prime counter-argument is that librarians would risk their jobs, or at least face faculty backlash, if they did that. Personally, I have witnessed many such cancellations, and there has never been a riot, nor even a librarian reprimanded, let alone fired – not even when a library once had to cut 50% of all its subscriptions. In fact, there is now a growing list of reports of painless big-deal subscription cancellations. Faculty understand that there are limits to budgets. Faculty are resourceful in finding the literature – they were even before we had so many new tools at our disposal.

However, things differ between countries, institutions and faculties, so my experience may not be representative. Either way, it doesn’t hurt to have faculty on your side. In fact, I strongly advocate much closer collaboration between faculty and their librarians than we have now. At this point in time, I consider librarians our closest allies and most important institutional partners. After all, who else would be more qualified – predestined, even – to help us implement a modern infrastructure?

I have outlined before why librarians are in an excellent position right now to take the next step: the hands of faculty are pretty much tied at this point. Moreover, it should go without saying that librarians are the most competent people in this matter (together, perhaps, with the few computer science faculty who actually work in this field).

Here’s a short, non-exhaustive list of arguments I think ought to be very convincing for all but perhaps the most Luddite faculty, in defense of a budget shift from subscriptions to in-house infrastructure. Of course, one would preface such a list with a short explanation of what is being argued:

“Dear faculty member,

as you may have heard in the news, our institution has joined a global initiative of hundreds of other scholarly institutions which strives to modernize our scholarly infrastructure. Our infrastructure has not undergone extensive modernization since its inception in the early 1990s, so the modernization is long overdue. One part of the modernization entails moving subscription funds over to infrastructure funds. Above and beyond the technical limitations of subscription literature (with which you are likely more than familiar), there are many other reasons why subscriptions are among the worst technologies to subsidize with tax funds. Here are some of the reasons why we now have to cancel subscriptions and how you will directly benefit from the consequences of these cancellations:

  • subscription funds go to corporations that waste >90% of the public money spent on them. Only their shareholders benefit
  • you have likely endured many cancellations in the past that came without any added benefit to you, beyond saving your institution money. This time, we will use the saved money to implement services that will benefit you directly: they will minimize your tedious work with writing, reading, data management and code, so you can focus on your research even more. Stay tuned, these services will be presented shortly.
  • once we have successfully transitioned, there will likely be funds left over, which will flow right back into your research budget
  • oh, and if you use our shiny new tools, you won’t even notice that we’re canceling subscriptions, as these tools will fetch (almost) all your literature for you
  • please pardon the dust while we remodel
  • for a more exhaustive list of benefits, please see [list of benefits]
  • please feel free to contact us at any time in case you feel your personal needs have not been addressed satisfactorily”

Of course, one would formulate these arguments a little more professionally than I have done here, but I wanted to convey the gist of where the thrust of the argument might go.

Posted on March 31, 2017 at 11:00 3 Comments
Mar22

Please address these concerns before we can accept your #OA proposal

In: science politics • Tags: flipping, infrastructure, publishing

Below, I’ve taken the liberty to “peer-review” recent proposals to ‘flip’ subscription journals to open access

The applicants have provided an interesting proposal for how to ‘flip’ the current subscription journals to an article processing charges (APC)-based ‘gold’ open access (OA) model. The authors propose to transition library subscription funds to reimburse author-paid APCs. This should be done by each institution first analyzing its current subscription portfolio and then introducing open access (OA) funds to cover article expenses instead. Importantly, maximum APC limits (“caps”) are suggested as a central measure to limit future APC increases. The proposal, while interesting and potentially ground-breaking, cannot be accepted in its current form. In particular, it seems the authors have overlooked an alternative route, which is cheaper and nevertheless holds several additional benefits not provided by the current proposal. In fact, there is a risk that the current proposal may come with some unintended consequences that could deteriorate the already sub-optimal status quo. Before I can recommend acceptance, the authors need to address at least the following major concerns (in no particular order):

  1. Given that high-ranking journals publish the least reliable science, how does maintaining this pernicious hierarchy address the replication crisis? How is more public access to less reliable science benefiting the public?
  2. Why, in 2020, would anyone think 17th-century “journals” are even a technologically useful concept worth spending billions every year to subsidize? Why, in 2020, would selection and publishing still be bundled as if printing presses were still a thing?
  3. As scholars are already evaluated according to the amount of tax funds (grants) they spend on experiments – with more (not less!) tax funds spent indicating a more competitive scientist – would it not be inconsistent if they were not also evaluated on the amount of tax funds they spend on publications? Especially if APCs scale with prestige, such that more money spent indicates more prestige? What would keep evaluation committees from treating APC $$$ analogously to grant $$$? Why should the current impact factor not simply be replaced by the APC amount paid?
  4. If the prestige universities sell is analogous to the prestige journals sell, has competition among universities significantly reduced tuition fees? If tuition has not decreased recently, why is the prestige universities sell not analogous to the prestige journals sell?
  5. With the vast majority of all OA journals charging no APC at all competing with the few that do charge, APCs are nevertheless increasing at above inflation rates (up to 70% year-over-year). Additionally, some publishers already today charge APCs above the current average subscription price. In the light of this evidence, how can more journals that charge APCs be more of a competition than those that do not charge at all?
  6. If institutions choose/have to cap the amount of funds they reimburse their members for APCs, how does this not amount to a disadvantage of the scholarly poor? If they do not cap the amount, how can APC increases be controlled? In other words, why should a high-ranking journal not be able to charge an APC of US$50k for a paper that virtually guarantees tenure or funding, when Harvard can charge US$250k for a degree that virtually guarantees employment?
  7. Given that even the most optimistic estimates only project a meager 20-30% short-term cost reduction without being able to guarantee sustained reductions in the future (indeed, there is evidence already now, when most OA journals do not charge any APC at all, of above-inflation APC increases!), why would one not instead champion a scenario with an already existing business model and market, where sustained reductions on the order of 90% were virtually guaranteed? (I am referring to the alternative route: migrating to a service-based market, with companies like Scholastica or Ubiquity – if their services became too expensive, institutions are free to choose to switch or run the infrastructure themselves)
  8. Why would anyone want to pay billions in excess of actual costs merely to protect the balkanization of our literature into >30k journals, effectively preventing content mining and hampering the implementation of an efficient recommendation system for our literature?
  9. With the abysmal track records of publishers first hogging our copyright and then putting APC-enabled articles back behind paywalls, why should we continue to trust these for-profit entities with our most valuable assets, our literature, instead of merely allowing them to compete with other companies for being allowed to polish our crown jewels?
  10. How will flipping to a different business model address our need for a sustainable infrastructure covering our data and code, at the same time as it is addressing this (non-exhaustive) list of missing/lacking functionalities in our scholarly literature?
    • Link-rot
    • No scientific impact analysis
    • Lousy peer-review
    • No global search
    • No functional hyperlinks
    • Useless data visualization
    • No submission standards
    • (Almost) no statistics
    • No content-mining
    • No effective way to sort, filter and discover
    • No semantic enrichment
    • No networking feature

The point of these questions is, of course, that all of them would be addressed if libraries and institutions kept our publications in-house, according to modern infrastructure standards, with a flourishing service market to constantly develop and improve their functionality and usefulness for the scientific community and the public. Initiatives spearheading such a business model (still in the deprecated, legacy journal format, though) are the Open Library of the Humanities, lingOA, mathOA, psyOA and other nascent fairOA organizations. The authors should discuss why their more expensive option is so superior to the less expensive alternative route that does address every one of the points made above, in particular taking into account that public institutions are commonly required to choose the lowest responsible bidder.

Failing to adequately address these reviewer concerns should result in a rejection of the proposal.

UPDATE: and as if right on cue, a leaked document shows how publishers will try to subvert any ‘flipping’ negotiations.

Posted on March 22, 2017 at 13:04 2 Comments
Mar07

Are we inadvertently supporting the defunding of public science?

In: science politics • Tags: academia, journals, public science, publishing

There can be little doubt that the defunding of public academic institutions is a staple of populist movements today – whether it is Trump’s budget director directly asking whether one really needs publicly funded science at all, the planned defunding of the arts and humanities endowments, or initiatives to completely abolish the EPA and other science agencies. Across the pond, too, there are plenty of populist parties and other Trump fans who strive to mimic their idol and rid their countries of intellectuals who might see through their shenanigans, or use evidence to oppose their policies.

For decades, these anti-intellectual forces have been fighting science tooth and nail around the globe. Recently in the UK, a website whose former boss is now the main advisor to Trump in the White House ran an article titled: “When you hear a scientist talk about ‘peer-review’, you should reach for your Browning” (link to archive.org snapshot). The author argues that the scholarly literature we refer to when making scientific claims was written by “charlatans and chancers”.

The horrifying aspect of this article is that, from their perspective, this isn’t even ‘fake news’. Already in 2005, John Ioannidis argued that “most published research findings are false“. Since then, numerous reports have been published suggesting that our scholarly literature is much less reliable than one would desire. While the scope of irreproducibility is not yet clear, the overwhelming majority of scientists believe the problems are bad enough to justify the term ‘replication crisis’:

Besides criminal intent and inadequate training, one major factor in the potential unreliability of our scholarly literature is socioeconomic in nature. The two factors deemed to contribute most to irreproducible research were selective reporting and the pressure to publish:

In an attempt to provide accountability and to combat biases in scholarship, we have introduced quantitative measures to assess individual scholars. Hiring, promotion and funding decisions most widely rely on two general aspects: the quality and the productivity of scholars. To quantify quality, individuals are commonly assessed by where they publish; for productivity, by how much they publish. Both of these quantifications have been shown to be highly problematic, especially in today’s hyper-competitive scholarly environment.

In the fields where scholars publish their experiments in journals, it has been shown that the most prestigious journals publish the least reliable science. So by preferentially hiring, promoting and funding scientists who publish in these journals, we are also rewarding unreliability, to some extent. Similarly, if we count the number of journal articles with novel findings, we are selecting for underpowered studies with erroneous conclusions.

Taken together, these data suggest that even if the reliability of our current literature turns out to be higher than the initial reports suggest, we are currently running a system poised to make it less reliable every year. Consequently, inasmuch as anti-science forces seize on the unreliability of the scholarly literature to support their defunding plans, the horrifying possibility emerges that every stakeholder – academic, librarian or publisher – who, willingly or not, props up the current journal system may be inadvertently supporting anti-intellectual agendas.

Posted on March 7, 2017 at 10:44 Comments Off on Are we inadvertently supporting the defunding of public science?
Feb09

Data structures for Open Science

In: own data • Tags: data structure, metadata, open data, open science

For the last few years, we have been working on the development of new Drosophila flight simulators. Now, finally, we are reaching a stage where we are starting to think about how to store the data we’ll be capturing – with Open Science in mind, but particularly keeping in mind that this will likely be the final major overhaul of this kind of data before I retire in 20 years. The plan is to have about 3-5 such machines here, and potentially others in other labs, if these machines remain as ‘popular’ as they have been over the last almost 60 years. So I really want to get it right this time (if there is such a thing as ‘right’ here).

Such an experiment essentially captures time series of around 70-120k data points per session, with about 3-6 variables stored per point, i.e., a total of at most ~500-800k table cells per session, each at 8-12 bit resolution. There will be at most about 8-16 such sessions per day and machine, so we’re really talking small/tiny data here.
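As a quick sanity check of those numbers, a back-of-the-envelope sketch using only the upper ends of the ranges quoted above:

```python
# Upper-bound session size from the ranges quoted above.
max_points = 120_000   # samples per session (upper end of 70-120k)
max_vars = 6           # variables per sample (upper end of 3-6)

max_cells = max_points * max_vars   # falls within the ~500-800k quoted
max_bytes = max_cells * 12 // 8     # at 12 bit/value: raw payload in bytes

print(max_cells, max_bytes)  # 720000 1080000
```

So even the largest session stays around a megabyte of raw values – genuinely tiny data by today’s standards.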

Historically (i.e., from the early 1990s on), these data were saved in a custom, compressed format (they needed to fit on floppy disks) with separate meta-data and data files. We kept this separation of meta-data from data in other, more modern set-ups such as our Buridan experiments, where we use XML for the meta-data files (example data). One of our experiments instead uses data files where the meta-data are contained in a header at the beginning of the file, with the actual time-series data below (example data). That makes the data easy to understand and ensures the meta-data are never separated from the raw data, i.e., less potential for mistakes. In another, newer experiment we are following some of the standards from the Data Documentation Initiative (no example data yet).
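A header-with-data file of this kind is trivial to parse. Here is a generic sketch (the `key: value` header lines and the `---` separator are assumptions for illustration, not our actual file format):

```python
import csv
import io

def parse_session(text: str):
    """Split a data file into a meta-data dict (header) and time-series
    rows (body). Header lines are 'key: value' pairs and a '---' line
    separates header from data -- both assumptions for this sketch."""
    header, body = text.split("---\n", 1)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    rows = [tuple(int(v) for v in row)
            for row in csv.reader(io.StringIO(body)) if row]
    return meta, rows

example = """genotype: wild-type CS
date: 2017-02-09
---
0,12,3
40,14,2
80,13,4
"""
meta, rows = parse_session(example)
print(meta["genotype"], rows[0])  # wild-type CS (0, 12, 3)
```

The point of the layout is exactly what the paragraph above says: the meta-data physically cannot get lost, because it travels in the same file as the raw values.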

With all of these different approaches over the last two decades, I thought I ought to bring myself up to date on what surely must by now be generally agreed-upon conventions for data structure, meta-data vocabularies, naming, etc. I started looking around and got the impression that the different approaches we have used over time are still in use, plus some new ones, of course. I then asked on Twitter, and the varying responses confirmed my impression that there isn’t really a “best-practice” kind of rule.

Given that there was quite a lively discussion on Twitter, I’m hoping to continue this discussion here, with maybe an outcome that can serve as an example use case someday.

What do we want to use these data for?

Each recording session will be one animal experiment with different phases (“periods”), for instance some “training” periods and some “test” periods, with experimental conditions differing between training and test. The data will be saved as a time series recorded continuously throughout the experiment, so the minimal data would be a timestamp, the behavior of the animal and a variable (the stimulus) that the animal is controlling with its behavior. Thus, in the simplest case, three columns of integers.

The meta-data for each experiment have to contain a description of the columns, of course, as well as the date and time at the start of the experiment, the genotype of the animal, a text description of the experiment, the DOI of the code used to generate the data, the sequence and duration of periods, the temperature, and other variables to be recorded or set on a per-session or per-period level.
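For illustration, such a per-session meta-data record could be serialized as XML with Python’s standard library. Every element name and value below is a hypothetical placeholder, not an existing schema:

```python
import xml.etree.ElementTree as ET

# Sketch of a per-session meta-data record covering the fields listed
# above. Element names and values are placeholders, not a real schema.
session = ET.Element("session")
ET.SubElement(session, "start").text = "2017-02-09T14:30:00"
ET.SubElement(session, "genotype").text = "wild-type CS"
ET.SubElement(session, "code_doi").text = "10.xxxx/placeholder"
ET.SubElement(session, "temperature_celsius").text = "25"

# Description of the data columns in the time-series file:
columns = ET.SubElement(session, "columns")
for name in ("timestamp_ms", "behavior", "stimulus"):
    ET.SubElement(columns, "column").text = name

# Sequence and duration of periods within the session:
periods = ET.SubElement(session, "periods")
for name, seconds in (("pretest", 120), ("training", 480), ("test", 120)):
    ET.SubElement(periods, "period", name=name, duration_s=str(seconds))

xml_string = ET.tostring(session, encoding="unicode")
print(xml_string)
```

Whatever the eventual vocabulary, the structure is the same: a flat set of per-session values plus two repeating groups (columns, periods).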

A dataset or small project can consist of maybe three to four groups of experiments – say, one experimental genotype and two control groups. Traditionally, the way we have handled this grouping in most of our experiments is to keep a text file in which the experimenter lists which file belongs to which group. That way, anybody can read the text file and understand the experimental design. The file also contains comments and notes about observations made during the experiment and a text description of the project. In a way, this text file is a meta-data file for the data-set rather than for an individual experiment, and thus should probably also contain some minimal mark-up. This text file is then read by either custom software or an R script to compile summary data for each group, e.g., means and standard errors of some variables we extract on a per-period basis, which are plotted and compared between groups. As there are numerous ways to evaluate an animal’s behavior given the full time series, there is any number of different parameters one might want to extract from the data and plot/compare.
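The grouping-and-summary step could look roughly like this (pure stdlib; the group names and numbers are invented example data, not real results):

```python
from math import sqrt
from statistics import mean, stdev

# Sketch: per-group summary (mean and standard error) of one extracted
# per-period parameter, as the custom software / R scripts above do.
# Groups and values are invented for illustration.
groups = {
    "experimental": [0.42, 0.35, 0.51, 0.29],
    "control-1":    [0.71, 0.66, 0.75, 0.69],
    "control-2":    [0.68, 0.73, 0.70, 0.64],
}

def summarize(values):
    """Mean and standard error of the mean for one group."""
    return {"n": len(values),
            "mean": mean(values),
            "sem": stdev(values) / sqrt(len(values))}

summary = {name: summarize(vals) for name, vals in groups.items()}
for name, s in summary.items():
    print(f"{name}: mean={s['mean']:.3f} sem={s['sem']:.3f} (n={s['n']})")
```

The grouping text file would supply the `groups` mapping; everything downstream (plotting, statistics, comparison between groups) hangs off this per-group summary.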

This is where the open science part comes in. Whenever the user runs the script that evaluates, plots and compares the data, the entire dataset is automatically made publicly accessible. Along with the dataset (raw data, meta-data and grouping text file), all the evaluations should also be deposited. Currently, we do this as a PDF file, but that is all but useless – it is only fit for human consumption. Ideally, I’d like this evaluation file to contain all the content of the grouping text file, the DOI of the script that generated it, and (semantic?) mark-up that structures the evaluation document. Such an evaluation document would be both machine- and human-readable (with a reader, which is why we started with the PDF format) and provide an overview of exactly what was done to what data.
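One possible shape for such a machine-readable evaluation record, sketched as JSON. All field names, identifiers and numbers here are hypothetical placeholders, not a proposed standard:

```python
import json

# Sketch of a machine-readable evaluation record: it bundles what was
# computed, from which dataset, and with which code. All field names,
# identifiers and numbers are placeholders.
evaluation = {
    "dataset": "groups.txt",              # the grouping text file
    "script_doi": "10.xxxx/placeholder",  # DOI of the evaluation script
    "generated": "2017-02-09T23:50:00",
    "figures": [
        {
            "id": "fig1",
            "parameter": "performance index",
            "groups": ["experimental", "control-1", "control-2"],
            "statistics": {"test": "Kruskal-Wallis", "p": 0.003},
        },
    ],
}
record = json.dumps(evaluation, indent=2)
print(record)
```

A human reader gets the same information via a rendered view; a machine gets every figure, parameter and statistic as addressable fields.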

One eventual goal is to also use these evaluation documents during manuscript authoring. Instead of copying figures, pasting them into a manuscript and then trying to describe the statistics, I’d like to simply link the different evaluations from inside the manuscript. Each figure in a manuscript would then just be a link to one of the evaluations in the evaluation document – the one I want readers to see so they can follow my line of argument. Any reader who wants to see other aspects of the data would have single-click access to the entire evaluation document, with all our evaluations for this data-set, as well as to all the code used to generate and evaluate the data. For this, all the data and meta-data in each dataset have to be linked both to each other and to the code and the text. Of course, all the data in a manuscript should also be linked together, even though they will likely come from different datasets/projects.

With the data and code solutions we’re currently developing, this should allow us to just write code, collect data and link both into our manuscripts. Everything else (data management, DOI assignment, data deposition, etc.) would be completely automatic. Starting at the undergraduate level, users would simply follow one protocol for their experiments and have their lab notebooks essentially written and published for them – they’d have a collection of these evaluation documents, ready to be used by their supervisor or linked in a thesis or manuscript.

So, what would be the best data structure and meta-data format with these goals in mind?

Posted on February 9, 2017 at 23:50 17 Comments