The British Medical Journal this week published Open access publishing, article downloads, and citations: randomised controlled trial by Philip Davis et al. Several publications have already compared full-text downloads and citations of open access papers with those of closed access papers, but this is the first prospective randomized study, thus avoiding some of the typical problems of retrospective evaluations (e.g. that important papers are more likely to have free access).
Articles published in 11 American Physiological Society journals between January and April 2007 were randomly (1 out of 7) made open access at the time of publication. Full-text views, PDF downloads and citations were measured over a 12 month period. The authors found a significant increase in readership (full-text HTML views and PDF downloads) but no difference in citation rates.
There are plausible reasons why citation rates didn't differ between the two groups: most researchers who cite APS papers in their own publications will have institutional access to these journals. But citation rates of open access papers are a political topic, which is why there are already a number of reactions from the blogosphere (e.g. from The Scholarly Kitchen, Stevan Harnad and Gunther Eysenbach). There are also some direct responses to the paper on the BMJ website. The main criticism of the paper is the short 12-month window for measuring citation rates. Citations will increase over the next few years, but because all papers in APS journals are made available as full text after 12 months, there will no longer be a difference in access between the two groups of papers.
What surprised me most about the paper is the journal. I would have expected it to appear in an APS journal, but a medical journal? The editorial explains the reasoning behind this. Like many other journals, the BMJ is trying out new access models: all research papers (but not the other content, including the editorial) in the BMJ are open access. As Fiona Godlee puts it in the editorial:
Academic publishing is going through interesting times. We don't know which model will prevail, or indeed whether there will ultimately be one or several coexisting models.