What is holding us back?

SpotOn London name tags. Flickr photo by keatl.

Last Friday and Saturday the 6th SpotOn London conference took place at the British Library. I had a great time, with many interesting sessions and good conversations both in and between sessions. But I might be biased, since I helped organize the event and in particular helped put together the sessions for the Tools strand.

The following blog post summarizes some of my thoughts before, during and after the conference. I want to focus on innovation in scholarly publishing, or rather: what is holding us back?

Reason #1

The #solo13alt session on Saturday looked at the role of altmetrics in the evaluation of scientific research. I was one of the panelists and had summarized my ideas prior to the session in a blog post written together with Jennifer Lin. It was an interesting session, although a bit too controversial for my taste. But it became obvious to me in this and a few other sessions that our obsession with the quantitative assessment of science is increasingly dangerous. Other people have said this more eloquently:

  • The mania for measurement - Stephen Curry in the #solo13alt session
  • Why research assessment is out of control - Peter Scott
  • Universities are becoming metrics factories, driven by large corporates - Peter Murray-Rust
  • The ‘real’ revolution in science will come when the scientific egosystem gets rid of the credit-imperative - Jan Velterop
  • Excellence by Nonsense: The Competition for Publications in Modern Science - Mathias Binswanger

My job title is Technical Lead Article-Level Metrics, so it might sound surprising that I say this. But we have to differentiate between what we do now and in the next few years - which is mainly to get away from the Journal Impact Factor to more reasonable metrics that look at individual articles and include other metrics besides citations - and where we want to be in 10 or more years. For the latter it is essential that journal articles and other research outputs are valued for the research they contain, rather than serving as a currency for merit that can be exchanged for grants and academic advancement. This is a very difficult problem to solve and I have no answers yet. Going back to how science was conducted until about 50 years ago - as a small elite club that worked based on closed personal networks - is definitely not the answer.

Reason #2

In his keynote Salvatore Mele from CERN explained to us that Open Access in High Energy Physics is 50 years old, and that the culture of sharing preprints preceded the arXiv e-print service - scientists were mailing their manuscripts to each other at least 20 years before arXiv launched in 1991. A similar culture doesn’t exist in the life sciences, and the preprint services for biologists launched this year (e.g. PeerJ Preprints and bioRxiv) will therefore have a hard time gaining traction.

Email is one of those services that every researcher uses, and we should think much more about how we can create innovative services around email, rather than only considering new tools and services that are still used mainly by early adopters. AJ Cann had coordinated a workshop around email at SpotOn London that he called the dark art of dark social: email, the antisocial medium that will not die. I am still puzzled why most researchers prefer to receive tables of contents by email rather than as an RSS feed, but we shouldn’t confuse what we get excited about as software developers and early adopters of online tools with what the mainstream scientist would be likely to use.
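
To make this a bit more concrete, here is a minimal sketch - my own illustration, not something presented at the workshop - of an email-centric service: it takes a journal’s RSS table of contents and formats it as a plain-text email digest. The feed URL is a placeholder, and feedparser is simply one convenient Python library for the job.

```python
# Sketch: turn a journal's RSS table-of-contents feed into an email digest.
# The feed URL below is hypothetical, not a real endpoint.
import feedparser
from email.message import EmailMessage

FEED_URL = "https://example-journal.org/rss/current.xml"  # placeholder feed

def build_toc_digest(feed_url):
    """Fetch an RSS table of contents and format it as an email message."""
    feed = feedparser.parse(feed_url)
    journal = feed.feed.get("title", "Unknown journal")
    lines = [f"Table of contents: {journal}", ""]
    for entry in feed.entries:
        lines.append(f"- {entry.title}")
        lines.append(f"  {entry.link}")
    msg = EmailMessage()
    msg["Subject"] = f"New issue: {journal}"
    msg.set_content("\n".join(lines))
    return msg

if __name__ == "__main__":
    digest = build_toc_digest(FEED_URL)
    # In a real service this message would be handed to smtplib for delivery.
    print(digest)
```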

Another good example is data sharing, a topic that was discussed in at least three SpotOn sessions. Even though most attendees at SpotOn London agreed that sharing of research data is important, it is obvious that this is currently not common practice in most scientific disciplines. Funders have created data sharing policies (e.g. the NSF or the Wellcome Trust), as have publishers, and many organizations are thinking about incentives for data sharing, including data journals such as Scientific Data, which will launch in 2014 and was presented by Ruth Wilson in the session on motivations for data sharing. Even though incentives can help promote change, I am pessimistic that something as central to the conduct of science as data sharing can be changed without more scientists being intrinsically motivated to do so. This is a much slower process that should start as early as possible during training, as pointed out by Kaitlin Thaney in the #solo13carrot session.

Reason #3

In terms of the technology that is holding us back, I increasingly think that publisher manuscript submission systems may be the single biggest bottleneck slowing down innovation. I participated in the first Beyond the PDF workshop in 2011, and I now think that Beyond the MTS (or manuscript tracking system) might have been a better motto than Beyond the PDF, as many of the problems we discussed relate to the typical editorial workflows we use today. These systems need to implement many of the ideas discussed at SpotOn London and elsewhere, from opening up peer review (#solo13peer) to making it easier to integrate research data into manuscripts (#solo13carrot) and to ideas of how the scientific record should look in the digital age (#solo13digital). In the latter panel we discussed both new authoring tools such as WriteLaTeX, and new ideas of what a research object should look like and how its different parts are linked to each other. A major theme here was reproducibility, highlighted both by Carole Goble (also see her ISMB/ECCB 2013 keynote) and Peter Kraker (see also his Open Knowledge Foundation blog post).
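
As a toy illustration of the linking idea - explicitly not any formal research object specification - a research object can be thought of as a set of parts (article, data, software) connected by typed links between persistent identifiers. The identifiers below are made up for the example.

```python
# Toy model of a research object: separate parts linked by persistent identifiers.
from dataclasses import dataclass, field

@dataclass
class Part:
    identifier: str  # e.g. a DOI or other persistent identifier (hypothetical here)
    kind: str        # "article", "dataset", "software", ...
    title: str

@dataclass
class ResearchObject:
    parts: list = field(default_factory=list)
    links: list = field(default_factory=list)  # (source_id, relation, target_id)

    def add_part(self, part):
        self.parts.append(part)

    def link(self, source, relation, target):
        self.links.append((source.identifier, relation, target.identifier))

article = Part("10.1234/example.article", "article", "Example article")
dataset = Part("10.5678/example.data", "dataset", "Supporting data")
ro = ResearchObject()
ro.add_part(article)
ro.add_part(dataset)
ro.link(article, "cites", dataset)
print(ro.links)
```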

The problem with today’s manuscript submission systems is that they have grown so big and complex that any change is slow and cumbersome, rather than iterative and part of an ongoing dialogue. I don’t want to blame any single vendor of these systems, but rather suggest that we carefully re-evaluate the workflow from the manuscript written by one or more authors to the accepted manuscript. My personal interest is mainly in authoring tools, and I have recently written about and experimented with Markdown. This process of re-evaluating manuscript tracking systems is not simply about technology; it is about how we approach the problem as authors, publishers, tool vendors and as a community.
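
As a small example of what a Markdown-based authoring step might look like, the sketch below converts a Markdown manuscript into HTML that could be handed to a submission system. The file names are placeholders, and a real workflow (e.g. one built on pandoc) would also need citation handling and journal-specific templates.

```python
# Sketch: convert a Markdown manuscript to HTML using the python-markdown package.
import markdown

def convert_manuscript(md_path, html_path):
    """Read a Markdown manuscript and write it out as HTML."""
    with open(md_path, encoding="utf-8") as f:
        text = f.read()
    # "tables" and "footnotes" are built-in extensions of the markdown package
    html = markdown.markdown(text, extensions=["tables", "footnotes"])
    with open(html_path, "w", encoding="utf-8") as f:
        f.write(html)

if __name__ == "__main__":
    convert_manuscript("manuscript.md", "manuscript.html")  # placeholder file names
```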

Copyright © 2013 Martin Fenner. Distributed under the terms of the Creative Commons Attribution 4.0 License.