The pressure to publish or perish has led some desperate researchers to pay for false papers to pad their resumes.
Even worse, some of these sham papers are published in official scientific journals.
A computer program designed to detect these fabricated studies suggests that far too many slip past peer review.
The study itself was published as a preprint and is still awaiting peer review, but if its results are confirmed, they are seriously concerning.
Using artificial intelligence, researchers have trained a computer to look for several red flags commonly seen in fake papers submitted to scientific journals.
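The article does not spell out which red flags the tool looks for or how its model was trained, so the indicators below are purely hypothetical stand-ins, and the rule-based scorer is a simplified sketch of the general idea rather than the authors' actual AI method:

```python
# Minimal sketch of rule-based red-flag screening.
# The indicators here are hypothetical examples, not the authors' feature set.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    author_emails: list[str]          # contact addresses listed on the paper
    author_countries: list[str]       # countries of the listed affiliations
    cites_own_journal_heavily: bool   # unusually high journal self-citation rate

def red_flags(paper: Paper) -> list[str]:
    """Return the list of red flags this paper triggers."""
    flags = []
    # Flag 1: only free/private email providers, no institutional address.
    private_domains = {"gmail.com", "163.com", "qq.com"}
    if paper.author_emails and all(
        e.split("@")[-1] in private_domains for e in paper.author_emails
    ):
        flags.append("no institutional email")
    # Flag 2: no international collaboration.
    if len(set(paper.author_countries)) < 2:
        flags.append("no international co-author")
    # Flag 3: suspicious citation pattern.
    if paper.cites_own_journal_heavily:
        flags.append("heavy self-citation")
    return flags

def is_suspect(paper: Paper, threshold: int = 2) -> bool:
    """Flag a paper when it triggers at least `threshold` red flags."""
    return len(red_flags(paper)) >= threshold
```

The threshold is the usual trade-off: set it low and more genuine papers get flagged for review; set it high and more fabricated ones slip through.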
Once the tool could pick out these red flags with 90 percent accuracy, the researchers used it to screen about 5,000 neuroscience and medical papers published in 2020.
The tool flagged 28 percent as likely fabricated or plagiarized.
If this rate held across all 1.3 million biomedical articles published in 2020, roughly 360,000 would have been flagged.
Not all of the flagged papers are true fakes, but the red flags help single out the most suspicious studies for extra scrutiny from reviewers.
For every 100 papers flagged by the new tool, about 63 turned out to be fake and 37 authentic.
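To make the scale of those numbers concrete, here is the back-of-envelope arithmetic, assuming the 28 percent flag rate and the roughly 63 percent precision seen in the screened sample carry over to the full 2020 biomedical literature:

```python
# Back-of-envelope arithmetic behind the article's figures.
flag_rate = 0.28                  # share of screened papers the tool flagged
biomedical_2020 = 1_300_000       # biomedical articles published in 2020
flagged = flag_rate * biomedical_2020
print(f"Extrapolated flagged papers: {flagged:,.0f}")   # ~364,000

# Of every 100 flagged papers, about 63 were actually fake,
# so a rough implied count of true fakes would be:
precision = 63 / 100
print(f"Implied true fakes among them: {flagged * precision:,.0f}")  # ~229,000
```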
Neuropsychologist Bernhard Sabel of the Otto-von-Guericke University of Magdeburg in Germany is one of the authors behind the study and editor of a neurology journal.
He, like many others, has had to deal with a recent influx of fake papers. But even Sabel was shocked by the startling numbers his tool produced.
“It’s just too hard to believe,” he told Science.
Sabel and his colleagues blame “paper mills” for the fraud. Paper mills bill themselves as “academic support services,” but in reality they mass-produce fake publications, increasingly with the help of AI, and sell them to researchers.
Counterfeit paper prices can range from $1,000 to $25,000.
The quality of these studies is often poor, but just good enough to pass peer review, even at established journals.
Publishers are aware that this is a serious problem that undermines their reputation. Scientists have even tricked journals into accepting laughably fake papers to draw attention to the problem.
Sometimes paper mills go so far as to pay publishers to accept their sham studies. In fact, an unsolicited email of exactly this kind, sent to a journal’s editor, prompted the new study.
“Because the problem is still considered minor (an estimated 1 in 10,000 publications), publishers and scholarly societies are just beginning to adapt their editorial, peer review, and publishing procedures,” researchers write.
“Yet, the true extent of fake publications remains unknown, despite the fact that paper mill reports are on the rise.”
Between 2010 and 2020, the new tool revealed a 12 percentage point increase in the share of potentially fake articles published by some journals.
The country with the highest number of potential fakes was China, which accounted for just over half of the flagged papers. Russia, Turkey, Egypt and India also contributed significantly.
“Publishing fake science may be the greatest science scam of all time, wasting financial resources, slowing medical progress and potentially endangering lives,” researchers argue.
And the rise of generative AI like ChatGPT only makes the scam a bigger threat.
To counter this emerging technology and uphold the reputation of science itself, the researchers say there is an urgent need for stricter screening of submissions.
The preprint is available on medRxiv.