Do online journals narrow science and scholarship?

James Evans, a sociologist at the University of Chicago, reports in the latest issue of Science on his research into the kind and frequency of citations over the last 60 years. He found a change in citation behavior as more and more journals became electronically available: fewer journals and articles were cited, and the cited articles were more recent.

These findings seem to contradict our expectations (and research by other groups). The greater availability of research papers in recent years thanks to electronic publication (and open access) should broaden, not narrow, the range of papers that we read and ultimately cite in our own publications. But looking at my own behavior when reading papers or writing a publication, and thinking about the many discussions we have had on related topics, these findings make perfect sense.

Today's technology makes the distribution of scientific papers in electronic form very efficient, and thanks to this technology we have new business models (author-pays) and an ever-increasing number of journals. Access to research articles is now easier and cheaper, and reaches a broader audience, than ever before. This is of course a wonderful development, but it unfortunately creates a new problem: information overload, and the question of how to filter out the relevant information.

Twenty years ago the typical researcher would use personal or institutional journal subscriptions to regularly follow the important papers in their field. Index Medicus and Current Contents were used to find additional articles, but they were cumbersome to use. Today few researchers regularly read printed journals. Most papers are found through searches of online databases and through table-of-contents subscriptions by email or RSS. There are many clever tools to facilitate this, but most people are probably overwhelmed by the information and stick to a few very specific research interests and high-profile journals.

This is where the filtering of information becomes critical. Technology can help a great deal in finding the most relevant research papers, but I would argue that human intervention is still far more important. For most people, including myself, peer review is the first step in that filtering process. Connected to peer review is the editorial decision that something is not only scientifically sound but also interesting. This editorial decision is sometimes debatable, but it is a very effective filter. Post-publication filtering by human intervention in the form of comments, voting or paid services (e.g. Faculty of 1000 Biology) is still in its infancy.

I am hoping for better filtering tools in the future, both pre- and post-publication. I'm confident that technology can be a big help (especially when full-text searching takes off), but it will never replace human editing. Until then, maybe we should keep at least some important print subscriptions so that we don't miss that fascinating research paper that for some reason wasn't picked up by that fancy electronic tool.

David Crotty (in his highly recommended blog Bench Marks) also blogged about this topic. Philip Davis wrote about the Science article on The Scholarly Kitchen blog. And Thomas Lemberger blogged about the article on The Seven Stones.

Copyright © 2008 Martin Fenner. Distributed under the terms of the Creative Commons Attribution 4.0 License.