In the latest issue of Nature, a news feature and a research highlight look at two recent high-profile paper retractions. The two papers, by biochemist Homme Hellinga, dealt with rational enzyme design. A second group could not reproduce the results, which ultimately led to the retractions. A third group then demonstrated that rational enzyme design is indeed possible.
The research highlight covers the troubles of the second research group, led by John Richard, which spent a great deal of time and money trying to reproduce Hellinga's findings and in the end gained nothing from the effort.
Non-reproducible work is a common problem in research, and papers containing such questionable work are rarely retracted. I would guess that most of the time this is unintentional. John P. A. Ioannidis explains this in a PLoS Medicine essay: Why Most Published Research Findings Are False.
Sometimes the reasons behind non-reproducible results can be quantified, and this includes drug trials in clinical medicine. The study Statistical Power of Negative Randomized Controlled Trials Presented at American Society for Clinical Oncology Annual Meetings found that more than half of the randomized controlled trials showing no benefit for a new treatment did not enroll enough patients to detect even a medium-sized treatment effect.
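The calculation behind that finding is a standard statistical power analysis. As a rough illustration (not the study's actual method or numbers), here is a minimal sketch of the power of a two-group trial to detect a medium-sized effect (Cohen's d = 0.5), using the usual normal approximation for a two-sided test at alpha = 0.05. The function name and the sample sizes are my own illustrative choices.

```python
from math import erf, sqrt

def power_two_sample(effect_size, n_per_group, alpha_z=1.959964):
    """Approximate power of a two-sample z-test.

    effect_size: standardized difference between groups (Cohen's d).
    n_per_group: number of patients in each arm.
    alpha_z:     critical value for a two-sided test at alpha = 0.05.
    Uses the normal approximation: power = Phi(d * sqrt(n/2) - z_crit).
    """
    noncentrality = effect_size * sqrt(n_per_group / 2)
    x = noncentrality - alpha_z
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

# A trial with 64 patients per arm has roughly 80% power to detect
# a medium effect (d = 0.5); with only 20 per arm, power drops well
# below 50%, so a "no benefit" result is largely uninformative.
print(power_two_sample(0.5, 64))
print(power_two_sample(0.5, 20))
```

In other words, for many of the negative trials in that analysis, the honest conclusion is not "the treatment doesn't work" but "the trial was too small to tell."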
What should we do about this? The first step is to accept that a significant number of the research findings you read in papers are not reproducible. We should be careful about starting a PhD thesis or other research project based on just a few exciting papers, especially when that work was done by someone else. Thinking about it, I should have taken that advice myself before starting a particular project five years ago.