As every professional academic knows, especially those at “research” universities like mine, publishing research papers is the currency of professional advancement. Teaching and “service” (i.e., being on university committees or editorial boards of journals) will be cursorily scanned when it’s time for tenure or promotion, but we all know that the number of papers in your “publications” section, and where they appeared, are the critical factors. As one of my colleagues—now very famous, but I won’t give names—once told me about tenure and promotion committees looking at publication lists, “They may count ’em, and they may weigh ’em, but they won’t read ’em!” Indeed.
Grants received often count too, for universities just ♥ “overhead money” that they get as a perk from the granting agency (this can be as high as 70% or more of the monies awarded to the researcher). Grants, however, really shouldn’t count, for it’s now very hard to get them, and at any rate they’re simply a means of procuring funding to do research—and it’s the research itself (judged through publications) that really counts. Granted (no pun intended), it’s hard to do research without government funding, but there are other sources, and theoreticians can often do their work with only a computer, pencil, and paper.
Here at Chicago, I’m proud to say that our promotion and tenure committee is explicitly forbidden from weighing grant support when promoting people to tenure or full professorship. That’s a formal recognition that what matters is research, not dollars raked in.
The relentless and increasing pressure to publish, due in part to the growing number of students and postdocs competing for jobs, has had a marked side effect: papers being retracted after publication. There are several reasons for this, including an author discovering that his or her data were wrong, someone else being unable to replicate the results (a very rare cause of retraction), discovery that the data were faked (sometimes by others trying to replicate the results), and discovery that the “reviews” of a paper—the two or three independent appraisals solicited by a journal before deciding whether to accept it—were fake. The last two stem directly from the pressure to publish and advance one’s career.
The “fake review” problem is increasing, and, as the Washington Post reports, the eminent scientific publisher Springer has just retracted no fewer than 64 papers published in its journals, all on the grounds that the reviewing process was undermined by fakery.
The list of retracted papers, from Retraction Watch, is here. All are by authors with Chinese names, and the journals are reputable ones, including Molecular Neurobiology, Molecular Biology Reports, Molecular Biology and Biochemistry, Tumor Biology, Journal of Applied Genetics, Clinical and Translational Oncology, and Journal of Cancer Research and Clinical Oncology.
A retraction on these grounds, of course, doesn’t mean that the paper was wrong or the data faked, but that the authors, or sometimes the journal itself (occasionally inadvertently), bypassed the normal review process. That seems especially serious for papers related to cancer, as are many of those retracted.
How does this happen? After all, traditionally journals would ask two or three good people in the field to review a submitted manuscript anonymously; the reviewers would tender their reports; and the editor would make a decision. How can that be subverted?
Easily—there are at least three ways.
- Authors can suggest people to review their manuscripts, but give fake names and email addresses. They can then write reviews of their own manuscripts (positive, of course), bypassing the normal process. I’ve always objected to the practice of soliciting potential reviewers’ names from authors, and ending it is the obvious way to stop this brand of fakery. Besides, what author would suggest a reviewer who he or she didn’t expect to regard the manuscript favorably? Asking authors to suggest names is lazy, and it undermines objectivity.
- Journals themselves can commit fakery if they’re desperate enough to want to publish papers. The Post notes how this is done: “In July, the publishing company Hindawi found that three of its own editors had subverted the process by creating fake peer reviewer accounts and then using the accounts to recommend articles for publication. (All 32 of the affected articles are being re-reviewed.)”
- Increasingly, journals are farming out the work of reviewing to independent companies that, for a fee, receive papers from authors, obtain reviews, and then send those reviews to journals that either the company itself suggests or the author deems appropriate. Many journals—but not the good ones, I hope—will accept these reviews, which they haven’t solicited, as sufficient adjudication of the paper. An unscrupulous reviewing service can obtain nonobjective or fake reviews in several ways. (In the case of many recently retracted articles, these services invented fake reviewers and provided bogus reviews.)
Now that these scams have been revealed, journals are trying to do something about them. After all, it doesn’t help Springer to develop a reputation for publishing substandard or improperly reviewed papers. As the Post reports:
Publishers are starting to implement policies aimed at preventing fake reviewers from accessing their systems. Some have stopped allowing authors to suggest scholars for their peer reviews — a surprisingly common practice. Many are mandating that peer reviewers communicate through an institutional e-mail, rather than a Gmail or Yahoo account. And editors at most journals are now required to independently verify that the peer reviewers to whom they are talking are real people, not a fabricated or stolen identity assigned to a fake e-mail account.
That’s a good start, and it will take care of many of the reviewer problems. I’d still like to see the end of third-party reviewing services, as they’re just a way for journals to fob off their own responsibilities onto others, and they provide avenues for corruption.
Further—and I don’t know how to do this—we need to relax the relentless pressure on younger researchers to accumulate large numbers of publications, and we need to concentrate more on the quality than the quantity of papers. One reason for this pressure is the growing number of advanced-degree students being produced by academics—students who have trouble finding jobs and are therefore compelled to pile up papers to outcompete their peers. (“They may count ’em but they won’t read ’em.”) The combination of an increasing number of students and an ever-shrinking pot of grant funds from federal agencies is toxic: it sharpens competition, since grant proposals are awarded in part on the basis of an investigator’s past publication rate.
h/t: Dom