Nature today published a report on the prevalence of duplicate papers in Medline. In this report, Mounir Errami and Harold Garner estimate that there are as many as 200,000 duplicate papers in Medline, or about 1% of all published papers.
The original paper by Errami and Garner was published in Bioinformatics. They used the search engine eTBLAST to find duplicate papers and deposited the results in a database called Deja Vu. When you search Deja Vu for scientists you know, you find scary results.
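As a side note on how such tools work: eTBLAST compares the text of abstracts to find similar papers. The Python snippet below is only a crude illustration of text-similarity matching, not the actual eTBLAST algorithm; the abstracts and the 0.8 threshold are made up for the example.

    from difflib import SequenceMatcher

    # Two made-up abstracts; in practice these would come from Medline records.
    abstract_a = "We show that protein X regulates pathway Y in mouse liver."
    abstract_b = "We demonstrate that protein X regulates pathway Y in murine liver."

    # ratio() returns a similarity score between 0 (different) and 1 (identical).
    similarity = SequenceMatcher(None, abstract_a, abstract_b).ratio()

    # The 0.8 cutoff is arbitrary for this example; a real system would tune
    # its threshold and flag matches for manual review.
    if similarity > 0.8:
        print(f"Potential duplicate (similarity {similarity:.2f})")
    else:
        print(f"Probably distinct (similarity {similarity:.2f})")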
Nobody likes duplicate papers, but duplicate publication is just one trick among many to improve a publication record. Another popular trick is inflating the list of authors on a paper. But the basic trick is still the heavy use of the least publishable unit (LPU).
Both authors and journals are inclined to publish as many papers as possible. So what will change these practices? Change will only come if the quality of a scientist's work is no longer measured simply by numerical indices such as the number of publications. It is therefore up to those who decide about grants and jobs to find better ways to identify the best scientists.
Corie Lok has started a discussion about this topic in the Publishing in the New Millennium Forum; please post your comments there.