Altmetrics track the impact of scholarly works on the social web. Article-Level Metrics focus on articles, but also look at traditional citations and usage statistics. The PLOS Article-Level Metrics project was started in 2008. The altmetrics manifesto, published in October 2010, described the fundamental ideas. By October 2011 we had a number of altmetrics tools, fueled by the Mendeley/PLOS API programming contest. In 2012 the focus shifted from the fact that we can provide these numbers to a discussion of the many open questions. We could see this at the altmetrics12 conference in June, and even more so at the altmetrics workshop hosted by PLOS last week in San Francisco.
Altmetrics can provide a large amount of information about the post-publication activity around an article (and other scholarly content). This is exciting, but at the same time also somewhat overwhelming and scary. Some of the things that we as a community have to figure out include standards for collecting, aggregating and displaying altmetrics data, strategies to combat attempts to game these metrics, and appropriate ways for the different organizations providing altmetrics to work together as a community. These and other topics were discussed in great detail at the PLOS altmetrics workshop, and we made good progress, not least thanks to the excellent moderation by Cameron Neylon. The third day of the workshop was a hackathon, and we were able to translate some of the ideas into prototypes of new tools.
The most important conclusion from the workshop for me personally was that we should really focus on use cases. Altmetrics should help answer questions that we can't answer today, and despite the promise, the various altmetrics tools still have a long way to go. A case in point is the promise that altmetrics can make it easier to find relevant scholarly content. We all use social media to help us find papers and other content, but integration of altmetrics into the traditional scholarly search tools is still missing. ReRank is a cool prototype developed during the hackathon last Saturday, but we are still a long way from having altmetrics feed directly into the relevance sorting of search results.
With these thoughts in the back of my mind, I look forward to the altmetrics session at the SpotOn London conference this Sunday afternoon. Sarah Venis from Médecins sans Frontières (MSF) will talk about the questions that she hopes altmetrics can answer for her organization. MSF is very interested in looking beyond citations for the impact of their publications, as their primary target audience is not really the scholarly community, but rather people in need in various parts of the world. Marie Boran from the Digital Enterprise Research Institute (DERI) is interested in using altmetrics as a recommendation tool to find researchers with similar interests. Euan Adie from altmetric.com and I (technical lead for the PLOS Article-Level Metrics project) will use our respective tools to try to answer some of these questions. For me altmetrics are primarily tools to tell a good story, and that is one reason why we picked the title Altmetrics beyond the Numbers for this session. The focus of the session will then shift to an open discussion, and I hope we can get some good answers to these and other questions.
A clear focus on use cases should go a long way toward reducing that feeling of being overwhelmed by all the numbers that altmetrics can provide. If we have specific goals for which we need altmetrics, it becomes much easier to decide what numbers work best for us, what standards we need, and whom to ask to collect this information. AJ Cann and Brian Kelly have written two excellent blog posts about the confusion that too many altmetrics numbers can create, and the workshop Assessing Social Media Impact during SpotOn London addresses some of these questions. Hackathons have played an important role in the history of altmetrics. I invite you to come to the SpotOn London hackathon this Saturday if you have some cool ideas and want to get started with the help of others.
Please let me know if you see other reports of the workshop that I have missed.
What Flavor is Scholarly Markdown?
One important outcome of the recent Markdown for Science workshop was an overall agreement that all the different implementations (or flavors) of markdown that currently exist are a big problem for the adoption of Scholarly Markdown, and that we need: ...