Last weekend was BibCamp Hannover, a “BarCamp for librarians and other hackers”. If you understand German, you can read about the sessions, discussions and people in the Blog, Wiki, and FriendFeed Room. And Steffi Suhr wrote a nice post about The most beautiful library in the world in her Nature Network blog.
The BarCamp format worked very well for us – we used traditional pen and paper for session planning.
I had suggested a session about Institutional Bibliographies. We combined this with two related suggestions and had a very interesting session about communication between scientists and librarians. One recurring theme in the discussion was that the scientific workflow seems to be broken at specific steps, for example:
Most journals allow authors to make the final version of an accepted manuscript (i.e. after peer review) publicly available via an institutional repository. Most librarians would be happy to help their authors with this step, but unfortunately they have no good tools to track the papers published at their institution, and they never see the final version of an accepted manuscript (many journals don't allow publisher-generated PDFs in repositories).
Rejected manuscripts are usually resubmitted to a different journal (most manuscripts will eventually be published somewhere). Unfortunately the next journal will most likely use a different manuscript format, different citation styles and a different manuscript submission system. Some journals provide a manuscript transfer service, but the comments made in the peer review are usually not available to the editors and reviewers of the next journal.
This is a topic that I have written about before (ORCID or how to build a unique identifier for scientists in 10 easy steps). Until we have unique author identifiers, it is difficult or impossible to reliably find the papers published by a particular person (a good number of papers by Fenner M in PubMed are not written by me).
This is another topic I have written about before (Recipe: Distributing papers for a journal club). The problem is not only that email is really bad for sending PDF files to a group of people, but that most journals don't allow redistribution of their content, even within an institution that holds an institutional subscription.
The number of times a paper is cited is often used as a proxy for the importance of the science in that paper. Citation counts (e.g. in the form of the Impact Factor or H-Index) are often used to evaluate researchers. There are many problems with this approach, because citation counts are influenced by many other factors (e.g. time, popularity of the subject area, self-citations). But the biggest problem is that there is no general agreement on how to count citations, and no database that makes this information freely available.
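To make the H-Index mentioned above concrete: it is the largest number h such that h of an author's papers have been cited at least h times each. A minimal sketch of that calculation, using made-up citation counts for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still supports an h-index of `rank`
        else:
            break
    return h

# Hypothetical publication record with per-paper citation counts:
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The sketch also illustrates why the metric is only as good as its input: run it against citation counts from two different databases and, because there is no agreement on what counts as a citation, you can get two different h-indices for the same person.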
It might make sense to compile a list of these broken steps, and then estimate the effort required to fix each of them. Some broken steps are more important than others, and some fixes are easier, so this exercise would yield a good list of action points.